Update README.md
README.md CHANGED
@@ -288,10 +288,12 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
## Citation
If you use MicroLlama in your research or work, please cite the project using the following reference:
+
APA:
```
Wang, Z. K. (2024). MicroLlama: A 300M-parameter language model trained from scratch. GitHub & Hugging Face. https://github.com/keeeeenw/MicroLlama, https://huggingface.co/keeeeenw/MicroLlama
```
+
BibTeX:
```
@misc{wang2024microllama,
@@ -302,4 +304,5 @@ BibTeX:
  note = {GitHub and Hugging Face repositories}
}
```
+
🙏 Please cite this work if you find it useful.
|