LG-AI-EXAONE committed on
Commit
07ca1b9
·
1 Parent(s): 37be960

Update arXiv link & citation

Files changed (1): README.md (+10 −3)
README.md CHANGED
@@ -25,7 +25,7 @@ library_name: transformers
 
 We introduce EXAONE Deep, which exhibits superior capabilities in various reasoning tasks including math and coding benchmarks, ranging from 2.4B to 32B parameters developed and released by LG AI Research. Evaluation results show that 1) EXAONE Deep **2.4B** outperforms other models of comparable size, 2) EXAONE Deep **7.8B** outperforms not only open-weight models of comparable scale but also a proprietary reasoning model OpenAI o1-mini, and 3) EXAONE Deep **32B** demonstrates competitive performance against leading open-weight models.
 
-For more details, please refer to our [documentation](https://lgresearch.ai/data/upload/EXAONE_Deep__Model_Card.pdf), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep).
+For more details, please refer to our [documentation](https://arxiv.org/abs/2503.12524), [blog](https://www.lgresearch.ai/news/view?seq=543) and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-Deep).
 
 <p align="center">
 <img src="assets/exaone_deep_overall_performance.png", width="100%", style="margin: 40 auto;">
@@ -104,7 +104,7 @@ else:
 
 ## Evaluation
 
-The following table shows the evaluation results of reasoning tasks such as math and coding. The full evaluation results can be found in the [documentation](https://lgresearch.ai/data/upload/EXAONE_Deep__Model_Card.pdf).
+The following table shows the evaluation results of reasoning tasks such as math and coding. The full evaluation results can be found in the [documentation](https://arxiv.org/abs/2503.12524).
 
 <table>
 <tr>
@@ -269,7 +269,14 @@ The model is licensed under [EXAONE AI Model License Agreement 1.1 - NC](./LICEN
 
 ## Citation
 
-TBU
+```
+@article{exaone-deep,
+  title={EXAONE Deep: Reasoning Enhanced Language Models},
+  author={{LG AI Research}},
+  journal={arXiv preprint arXiv:2503.12524},
+  year={2025}
+}
+```
 
 ## Contact
 LG AI Research Technical Support: [email protected]