ragerri committed (verified)
Commit 498ded5 · Parent(s): 5447702

Update README.md

Files changed (1): README.md (+17 −2)
```diff
@@ -140,12 +140,27 @@ For **MedExpQA** benchmarking we have added the following elements in the data:
 1. **clinical_case_options/MedCorp/RRF-2**: 32 snippets extracted from the MedCorp corpus using the combination of _clinical case_ and _options_ as a
    query during the retrieval process. These 32 snippets are the resulting RRF combination of 32 separately retrieved snippets using BM25 and MedCPT.
 
-## Results
+
+## MedExpQA Benchmark Overview
+
+<p align="center">
+<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/overall_system.png?raw=true" style="height: 350px;">
+</p>
+
+## Prompt Example for LLMs
 
 <p align="center">
-<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/benchmark.pdf?raw=true" style="height: 650px;">
+<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/prompt_en.png?raw=true" style="height: 350px;">
 </p>
 
+## Benchmark Results (averaged per type of external knowledge for grounding)
+
+<p align="center">
+<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/benchmark.png?raw=true" style="height: 350px;">
+</p>
+
+
+
 
 ## Citation
```
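The README text above describes fusing 32 BM25 snippets and 32 MedCPT snippets into one ranking via Reciprocal Rank Fusion (RRF). As a reference for readers, here is a minimal, hypothetical sketch of generic RRF merging; the function name `rrf_merge`, the toy document ids, and the constant `k=60` (the value from the original RRF paper) are illustrative assumptions, not code from the MedExpQA repository:

```python
def rrf_merge(rankings, k=60, top_n=32):
    """Reciprocal Rank Fusion: combine several ranked lists into one.

    rankings: iterable of ranked lists of document ids (best first).
    k: smoothing constant; larger k flattens the influence of top ranks.
    Each document scores sum(1 / (k + rank)) over the lists it appears in.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first; keep the top_n documents.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy example: fuse a sparse (BM25-style) and a dense (MedCPT-style) ranking.
bm25 = ["d1", "d2", "d3", "d4"]
dense = ["d3", "d1", "d5", "d2"]
fused = rrf_merge([bm25, dense], top_n=3)
```

Documents ranked highly by both retrievers (here `d1` and `d3`) rise to the top even when neither retriever alone placed them first.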