---
inference: false
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
- biology
- medical
---

## Model Summary

MedPhi-2 is Phi-2, a **2.7 billion** parameter model, further trained for the biomedical domain.
It was proposed in the MedExQA paper.

<h1>  🧑‍⚕️ MedExQA  </h1>
<h3> Medical Question Answering Benchmark with Multiple Explanations </h3>

<p>
 📄 <a href="https://arxiv.org/abs/2406.06331" target="_blank">Paper</a> • ⏬ <a href="https://huggingface.co/datasets/bluesky333/MedExQA" target="_blank">Dataset</a>  • ⚕️ <a href="https://huggingface.co/bluesky333/medphi2" target="_blank">MedPhi2</a><br>
</p>


## Model Details

### Model Description

- **Model type:** Clinical LLM (Large Language Model)
- **Language(s) (NLP):** English
- **License:**  [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE)
- **Finetuned from model:** [Phi-2](https://huggingface.co/microsoft/phi-2)
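The model can be loaded with the Hugging Face `transformers` library like any other Phi-2 checkpoint. The sketch below is a minimal example, assuming MedPhi-2 uses the same `Instruct:`/`Output:` prompt template as the base Phi-2 card; verify the expected template against the training setup before relying on it.

```python
# Minimal usage sketch for MedPhi-2 (assumptions noted below).

def build_prompt(question: str) -> str:
    """Wrap a question in Phi-2's QA instruction format.
    Assumption: MedPhi-2 keeps the base Phi-2 template."""
    return f"Instruct: {question}\nOutput:"

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "bluesky333/medphi2"  # repo linked above
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    inputs = tokenizer(
        build_prompt("What is hypertension?"), return_tensors="pt"
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```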



## Citation

**BibTeX:**

```
@article{kim2024medexqa,
  title={MedExQA: Medical Question Answering Benchmark with Multiple Explanations},
  author={Kim, Yunsoo and Wu, Jinge and Abdulle, Yusuf and Wu, Honghan},
  journal={arXiv preprint arXiv:2406.06331},
  year={2024}
}
```