Commit c30b57f
Parent(s): 43767bd

Update README.md

README.md CHANGED
@@ -15,4 +15,17 @@ Existing research has demonstrated that refining large language models (LLMs) th
 
 <h1>Please follow our Github: <a href="https://github.com/WangRongsheng/Aurora">https://github.com/WangRongsheng/Aurora</a></h1>
 
-
+
+
+## Citation
+If you find our work helpful, feel free to give us a cite.
+```bib
+@misc{wang2023auroraactivating,
+      title={Aurora:Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning},
+      author={Rongsheng Wang and Haoming Chen and Ruizhe Zhou and Yaofei Duan and Kunyan Cai and Han Ma and Jiaxi Cui and Jian Li and Patrick Cheong-Iao Pang and Yapeng Wang and Tao Tan},
+      year={2023},
+      eprint={2312.14557},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```