Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Apollo-7B - GGUF
- Model creator: https://huggingface.co/FreedomIntelligence/
- Original model: https://huggingface.co/FreedomIntelligence/Apollo-7B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Apollo-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q2_K.gguf) | Q2_K | 3.24GB |
| [Apollo-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [Apollo-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [Apollo-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [Apollo-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [Apollo-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q3_K.gguf) | Q3_K | 4.07GB |
| [Apollo-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [Apollo-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [Apollo-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [Apollo-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q4_0.gguf) | Q4_0 | 4.67GB |
| [Apollo-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [Apollo-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [Apollo-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q4_K.gguf) | Q4_K | 4.96GB |
| [Apollo-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [Apollo-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q4_1.gguf) | Q4_1 | 5.12GB |
| [Apollo-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q5_0.gguf) | Q5_0 | 5.57GB |
| [Apollo-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [Apollo-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q5_K.gguf) | Q5_K | 5.72GB |
| [Apollo-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [Apollo-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q5_1.gguf) | Q5_1 | 6.02GB |
| [Apollo-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q6_K.gguf) | Q6_K | 6.53GB |
| [Apollo-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf/blob/main/Apollo-7B.Q8_0.gguf) | Q8_0 | 8.45GB |
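
The table links are direct file downloads. As a minimal sketch (not part of the original upload), one of these quants can also be fetched programmatically with the `huggingface_hub` package; the Q4_K_M choice below is only an example and any filename from the table works the same way.

```python
# Minimal sketch (not part of this card): download a single quant from this repo.
# Assumes the huggingface_hub package is installed: pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Q4_K_M is chosen here only as a common size/quality trade-off;
# substitute any filename from the table above.
model_path = hf_hub_download(
    repo_id="RichardErkhov/FreedomIntelligence_-_Apollo-7B-gguf",
    filename="Apollo-7B.Q4_K_M.gguf",
)
print(model_path)  # local path to the cached GGUF file
```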



Original model description:
---
license: apache-2.0
---
# Multilingual Medicine: Model, Dataset, Benchmark, Code

Covering English, Chinese, French, Hindi, Spanish, and Arabic so far.


<p align="center">
   👨🏻‍💻<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> • 📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🌐 <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
   <br> <a href="./README_zh.md"> 中文 </a> | <a href="./README.md"> English </a>
</p>

![Apollo](assets/apollo_medium_final.png)

## 🌈 Update

* **[2024.04.25]** [MedJamba](https://huggingface.co/FreedomIntelligence/Apollo-MedJamba) released; for training and evaluation code, see the [repo](https://github.com/FreedomIntelligence/MedJamba).
* **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released.
* **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> are published! 🎉
* **[2024.01.23]** The Apollo repo is published! 🎉


## Results
🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-34B" target="_blank">Apollo-34B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-72B" target="_blank">Apollo-72B</a>

🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MedJamba" target="_blank">MedJamba</a>

🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a>


![Apollo](assets/result.png)

## Usage Format

`User:{query}\nAssistant:{response}<|endoftext|>`
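
A minimal sketch of this template in use is shown below. It assumes a locally downloaded GGUF file (see the table above) and the `llama-cpp-python` runtime; neither is prescribed by the card, and any other GGUF-capable runtime would apply the same template.

```python
# Minimal sketch (an assumption, not the authors' reference code) of the prompt
# template above, run through llama-cpp-python against a downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="Apollo-7B.Q4_K_M.gguf", n_ctx=2048)

def ask(query: str) -> str:
    # Template from the card: User:{query}\nAssistant:{response}<|endoftext|>
    prompt = f"User:{query}\nAssistant:"
    out = llm(prompt, max_tokens=256, stop=["<|endoftext|>", "User:"])
    return out["choices"][0]["text"].strip()

print(ask("What are the common symptoms of influenza?"))
```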


## Dataset & Evaluation

- Dataset
  🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a>

  <details><summary>Click to expand</summary>

  ![Apollo](assets/dataset.png)

  - [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip)
  - [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train)
    - Pretrain:
      - data item:
        - json_name: {data_source}_{language}_{data_type}.json
        - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb (from online forums), medicalWiki
        - language: en (English), zh (Chinese), es (Spanish), fr (French), hi (Hindi)
        - data_type: qa (QA generated from text)
        - data_type==text: list of strings
          ```
          [
            "string1",
            "string2",
            ...
          ]
          ```
        - data_type==qa: list of QA pairs (each a list of strings)
          ```
          [
            [
              "q1",
              "a1",
              "q2",
              "a2",
              ...
            ],
            ...
          ]
          ```
    - SFT:
      - json_name: {data_source}_{language}.json
      - data_type: code, general, math, medicalExam, medicalPatient
      - data item: list of QA pairs (each a list of strings); a loading sketch follows this section
        ```
        [
          [
            "q1",
            "a1",
            "q2",
            "a2",
            ...
          ],
          ...
        ]
        ```

  </details>
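
Below is the loading sketch referenced above for an SFT file. The filename `medicalExam_en.json` is only a hypothetical instance of the `{data_source}_{language}.json` scheme, and the pairing logic simply follows the alternating question/answer layout described in the list.

```python
# Minimal sketch (an assumption, not from the card): read one SFT file and
# flatten its alternating QA lists into (question, answer) training pairs.
import json

# Hypothetical filename following the {data_source}_{language}.json convention.
with open("medicalExam_en.json", encoding="utf-8") as f:
    conversations = json.load(f)  # list of QA lists: [["q1", "a1", "q2", "a2", ...], ...]

pairs = []
for turns in conversations:
    # Each list alternates question/answer, so step through it two items at a time.
    for q, a in zip(turns[0::2], turns[1::2]):
        pairs.append((q, a))

print(f"{len(pairs)} QA pairs loaded")
```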


- Evaluation
  🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>

  <details><summary>Click to expand</summary>

  - EN:
    - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
    - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
    - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): not used in the paper because its results fluctuated too much.
    - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu) (a loading sketch follows this section)
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - ZH:
    - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
    - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): not used in the paper
      - Randomly sampled 2,000 single-answer multiple-choice questions.
    - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
      - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
    - [CMExam](https://github.com/williamliujl/CMExam): not used in the paper
      - Randomly sampled 2,000 multiple-choice questions.
  - ES: [Head_qa](https://huggingface.co/datasets/head_qa)
  - FR: [FrenchMedMCQA](https://github.com/qanastek/FrenchMedMCQA)
  - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine

  </details>
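
Below is the loading sketch referenced above for the MMLU-Medical subsets. The `datasets` library and the `cais/mmlu` config names are assumptions about the public dataset, not something this card prescribes.

```python
# Minimal sketch (an assumption, not from the card): pull the MMLU-Medical
# subsets listed above with the Hugging Face `datasets` library.
from datasets import load_dataset

medical_subjects = [
    "clinical_knowledge", "medical_genetics", "anatomy",
    "professional_medicine", "college_biology", "college_medicine",
]

for subject in medical_subjects:
    ds = load_dataset("cais/mmlu", subject, split="test")
    print(subject, len(ds))  # number of multiple-choice questions per subject
```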


## Results reproduction
<details><summary>Click to expand</summary>

**Waiting for Update**

</details>


## Citation
Please use the following citation if you intend to use our dataset for training or evaluation:

```
@misc{wang2024apollo,
   title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
   author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
   year={2024},
   eprint={2403.03640},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}
```