Update README.md

README.md

```python
# ...
original_llama_tokenizer = LlamaTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
```

Once the tokenizer is loaded, you can use it to tokenize both English and Traditional Mandarin text:

```python
text_en = """During the recent GTC (GPU Technology Conference), Nvidia CEO Jensen Huang took time out of his busy schedule to dine with the Taiwanese community in Silicon Valley. In his speech at the gathering, Huang referred to himself as a "great ambassador for Taiwan," expressing his gratitude for the island nation's role in Nvidia's growth and success."""

# ...

print(f"English text:")
print(f"Taiwan-LLM_v3_tokenizer: {len(taiwan_llm_tokens_en)} tokens")
print(f"Original LLaMA tokenizer: {len(original_llama_tokens_en)} tokens")

print(f"\nTraditional Mandarin text:")
print(f"Taiwan-LLM_v3_tokenizer: {len(taiwan_llm_tokens_zh)} tokens")
print(f"Original LLaMA tokenizer: {len(original_llama_tokens_zh)} tokens")
```
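
The lines that build the token lists compared above (`taiwan_llm_tokens_en`, `original_llama_tokens_en`, `taiwan_llm_tokens_zh`, `original_llama_tokens_zh`) are not shown in this excerpt. As a hedged illustration only, they could be produced with the tokenizers' standard `tokenize` method; the `text_zh` sample below is a placeholder, not the text used in the README:

```python
# Hypothetical reconstruction of the omitted lines; the actual README may use a
# different Traditional Mandarin sample. `taiwan_llm_tokenizer` and
# `original_llama_tokenizer` are the tokenizers loaded earlier.
text_zh = """輝達執行長黃仁勳在 GTC 大會期間，特地抽空與矽谷的台灣社群餐敘，並在致詞中自稱是「台灣最棒的大使」。"""

taiwan_llm_tokens_en = taiwan_llm_tokenizer.tokenize(text_en)
original_llama_tokens_en = original_llama_tokenizer.tokenize(text_en)

taiwan_llm_tokens_zh = taiwan_llm_tokenizer.tokenize(text_zh)
original_llama_tokens_zh = original_llama_tokenizer.tokenize(text_zh)
```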

## Training Data

The Mandarin SentencePiece model used in this tokenizer was trained on a diverse set of Traditional Mandarin text data, including:

- Wikipedia articles
- Legal documents
- Online forum discussions
- Cultural and historical texts

This ensures that the tokenizer is well-suited for a wide range of Traditional Mandarin language applications.

## Tokenizer Merging Process

The tokenizer was created by following these steps (a rough sketch follows the list):

1. Load and preprocess the Traditional Mandarin text data
2. Train a Mandarin SentencePiece model using the preprocessed text data
3. Merge the Mandarin SentencePiece model with the LLaMA tokenizer
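
As a rough illustration of steps 2 and 3, here is a minimal sketch using the `sentencepiece` and `transformers` libraries. The corpus path, vocabulary size, training options, and output file names are placeholders for illustration, not the actual settings used to build Taiwan-LLM_v3_tokenizer:

```python
import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2_model
from transformers import LlamaTokenizer

# Step 2: train a Mandarin SentencePiece model on the preprocessed corpus
# (corpus path, vocab size, and model type are placeholder values).
spm.SentencePieceTrainer.train(
    input="zh_tw_corpus.txt",
    model_prefix="zh_tw_sp",
    vocab_size=16000,
    character_coverage=0.9995,
    model_type="bpe",
)

# Step 3: merge the new Mandarin pieces into the original LLaMA tokenizer.
llama_tokenizer = LlamaTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

llama_sp = sp_pb2_model.ModelProto()
llama_sp.ParseFromString(llama_tokenizer.sp_model.serialized_model_proto())

zh_sp = sp_pb2_model.ModelProto()
with open("zh_tw_sp.model", "rb") as f:
    zh_sp.ParseFromString(f.read())

# Append only pieces that the LLaMA vocabulary does not already contain.
existing = {p.piece for p in llama_sp.pieces}
for piece in zh_sp.pieces:
    if piece.piece not in existing:
        llama_sp.pieces.append(
            sp_pb2_model.ModelProto.SentencePiece(piece=piece.piece, score=0.0)
        )

# Save the merged SentencePiece model and wrap it as a Hugging Face tokenizer.
with open("merged_tokenizer.model", "wb") as f:
    f.write(llama_sp.SerializeToString())

merged_tokenizer = LlamaTokenizer("merged_tokenizer.model")
merged_tokenizer.save_pretrained("merged_tokenizer_hf")
```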

## Acknowledgements