---
library_name: transformers
tags: [Danish, Mixed Tokenization, CerebrasGPT]
---
```
 _______       ___      .___  ___.   ______   .______      .______    __    __  
|       \     /   \     |   \/   |  /  __  \  |   _  \     |   _  \  |  |  |  | 
|  .--.  |   /  ^  \    |  \  /  | |  |  |  | |  |_)  |    |  |_)  | |  |__|  | 
|  |  |  |  /  /_\  \   |  |\/|  | |  |  |  | |      /     |   ___/  |   __   | 
|  '--'  | /  _____  \  |  |  |  | |  `--'  | |  |\  \----.|  |      |  |  |  | 
|_______/ /__/     \__\ |__|  |__|  \______/  | _| `._____|| _|      |__|  |__| 
                                                                               
```

### DA-MIXED-CEREBRAS

This is an experimental Danish language model fine-tuned using a combination of tokenization strategies, including both morphological and BPE approaches. Built on the CerebrasGPT-111M architecture, it explores how mixed tokenization affects Danish text generation.
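
Below is a minimal usage sketch with the `transformers` library, assuming a standard causal-LM setup; the repo id shown is a placeholder and should be replaced with this model's actual Hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DA-MIXED-CEREBRAS"  # placeholder: replace with the actual Hub repo id

# Load the tokenizer and model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Danish prompt for open-ended generation
prompt = "Danmark er et land i Skandinavien, hvor"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```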