Commit 53fa066
Parent(s): 8c47543
Update README.md

README.md CHANGED

@@ -23,18 +23,84 @@ extra_gated_fields:
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
Use of Taiwan LLM must explicitly acknowledge and credit 優必達株式會社 Ubitus and the original author: checkbox
---

- # Taiwan
- # 🌟 Checkout New [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟

<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# 🌟 Check out the [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟

# Model Card for Taiwan LLM 13B v2.0 base

Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).

## Model description

- **Model type:** A 13B-parameter GPT-like model fine-tuned on a mix of publicly available and synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
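
As a quick orientation, here is a minimal sketch of loading the model for plain text completion with Hugging Face Transformers. The repo id `yentinglin/Taiwan-LLM-13B-v2.0-base` is an assumption inferred from this card's title; adjust it to the actual repository.

```python
# Minimal sketch: load the base model for generation with Hugging Face Transformers.
# The repo id below is an assumption inferred from this card's title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Taiwan-LLM-13B-v2.0-base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 13B weights; half precision keeps memory manageable
    device_map="auto",
)

# Base (non-chat) model: it does plain completion, not instruction following.
inputs = tokenizer("台灣最高的山是", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```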

### Model Sources

- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/

## Performance

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/HTwIzw6RDha2-PhuWqSuI.png)

## Intended uses

You should fine-tune this model for instruction-following / chat applications.
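
Since this is a base checkpoint, a short supervised fine-tuning sketch may help. This is a generic recipe with Hugging Face Transformers, not the authors' pipeline: the dataset path, sequence length, and batch settings are placeholders, while the learning rate and epoch count mirror the hyperparameters reported below.

```python
# Minimal SFT sketch, not the authors' recipe: fine-tune the base model on
# instruction-style text. The dataset file and its format are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "yentinglin/Taiwan-LLM-13B-v2.0-base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder dataset: any corpus of chat/instruction examples rendered into a
# single "text" column works here.
dataset = load_dataset("json", data_files="chat_data.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="twllm-sft", learning_rate=5e-5,
                           num_train_epochs=5, bf16=True),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```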

### Training hyperparameters

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/MdvHwdUvH-c926qyRAw7K.png)

The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
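
For readers reproducing a similar run, these values map onto `transformers.TrainingArguments` roughly as follows. This is a hypothetical reconstruction, since the card does not publish the full training configuration; batch sizes, precision, and the output path are placeholders.

```python
# Hypothetical reconstruction of the reported hyperparameters as a
# transformers.TrainingArguments object; unspecified values are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="taiwan-llm-13b-v2.0-base",  # placeholder
    learning_rate=5e-5,
    optim="adamw_torch",                    # Adam-style optimizer
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=5.0,
    # "distributed_type: multi-GPU" is handled by the launcher
    # (e.g. torchrun / accelerate), not by TrainingArguments itself.
)
```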

## Citation

If you find Taiwan LLM useful in your work, please cite it with:

```bibtex
@inproceedings{lin-chen-2023-llm,
  title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
  author = "Lin, Yen-Ting and Chen, Yun-Nung",
  booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
  month = jul,
  year = "2023",
  address = "Toronto, Canada",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2023.nlp4convai-1.5",
  pages = "47--58"
}

@misc{taiwanllama,
  author = {Lin, Yen-Ting and Chen, Yun-Nung},
  title = {Language Models for Taiwanese Culture},
  year = {2023},
  url = {https://github.com/MiuLab/Taiwan-LLaMa},
  note = {Code and models available at https://github.com/MiuLab/Taiwan-LLaMa},
}
```

# Acknowledgement

Taiwan LLM v2 is developed in collaboration with [Ubitus K.K.](http://ubitus.net), which provides valuable compute resources for the project.
|