Update README.md

---
license: mit
---
# FSL_ECG_QA_Dataset
**FSL_ECG_QA_Dataset** is a **benchmark dataset** specifically designed to accompany the paper *"Electrocardiogram–Language Model for Few-Shot Question Answering with Meta Learning"* (**arXiv:2410.14464v1**). It supports research in combining **electrocardiogram (ECG) signals** with **natural language question answering (QA)**, particularly in **few-shot** and **meta-learning** scenarios.
## Dataset Highlights
- Multimodal: Each sample pairs an ECG recording with metadata and diagnostic question-answer pairs.
## Source Datasets
The dataset is a structured reorganization of the existing ECG-QA dataset, adapted to suit meta-learning tasks. It draws samples from ECG sources such as PTB-XL and MIMIC-IV-ECG, and organizes them into diverse task sets based on question types (e.g., binary, multiple-choice, and query-based) and clinical attributes (e.g., SCP codes, noise type, axis deviation). This structure enables models to rapidly adapt to new diagnostic tasks with limited annotated examples; a sketch of how few-shot episodes can be drawn from these task sets appears after the source list below.

- **ECG-QA**
  download the question-answer annotations from the official repository: [ECG-QA](https://github.com/Jwoo5/ecg-qa)
- **PTB-XL ECG Dataset**
  download from PhysioNet: [PTB-XL](https://physionet.org/content/ptb-xl/)
- **MIMIC-IV-ECG Dataset**
  download from PhysioNet: [MIMIC-IV-ECG](https://physionet.org/content/mimic-iv-ecg/)
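
As an illustration of the task-set organization described above, the following is a minimal sketch of how an N-way K-shot episode (e.g., the paper's 5-way 5-shot setting) could be sampled from one such task set. It assumes a task set is simply a mapping from answer label to `(ecg_path, question, answer)` records; the helper name `sample_episode` and this layout are illustrative, not part of an official loader.

```python
import random
from typing import Dict, List, Tuple

# Illustrative layout only: a "task set" maps each answer label to its samples,
# mirroring the grouping by question type and clinical attribute described above.
Sample = Tuple[str, str, str]          # (ecg_path, question, answer)
TaskSet = Dict[str, List[Sample]]

def sample_episode(task_set: TaskSet, n_way: int = 5, k_shot: int = 5,
                   n_query: int = 5, seed=None):
    """Draw one N-way K-shot episode: a small support set for adaptation
    and a disjoint query set for evaluation."""
    rng = random.Random(seed)
    # Only answer labels with enough samples can appear in this episode.
    eligible = [lbl for lbl, items in task_set.items()
                if len(items) >= k_shot + n_query]
    labels = rng.sample(eligible, n_way)

    support, query = [], []
    for lbl in labels:
        picked = rng.sample(task_set[lbl], k_shot + n_query)
        support.extend(picked[:k_shot])
        query.extend(picked[k_shot:])
    rng.shuffle(support)
    rng.shuffle(query)
    return support, query
```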
To utilize this dataset, the authors propose a novel multimodal meta-learning framework that integrates a frozen ECG encoder, a frozen language model (e.g., LLaMA or Gemma), and a trainable cross-modal fusion module. This setup effectively aligns ECG signals with natural language queries to enable accurate clinical question answering.
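
The framework itself is specified in the accompanying paper; as a rough illustration of the "trainable fusion between two frozen backbones" idea, the sketch below projects ECG-encoder features into the language model's embedding space and lets the question tokens attend to them. The class name, dimensions, and attention-based design are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Trainable bridge between a frozen ECG encoder and a frozen LM.

    Generic illustration only: ECG features are projected to the LM width and
    attended to by the question-token embeddings, so gradients flow only
    through this module while both backbones stay frozen.
    """

    def __init__(self, ecg_dim: int, lm_dim: int, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(ecg_dim, lm_dim)  # map ECG features to LM width
        self.attn = nn.MultiheadAttention(lm_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(lm_dim)

    def forward(self, ecg_feats: torch.Tensor, question_embeds: torch.Tensor) -> torch.Tensor:
        # ecg_feats:       (batch, ecg_tokens, ecg_dim)   from the frozen ECG encoder
        # question_embeds: (batch, text_tokens, lm_dim)   from the frozen LM embedding layer
        ecg_tokens = self.proj(ecg_feats)
        fused, _ = self.attn(query=question_embeds, key=ecg_tokens, value=ecg_tokens)
        # Residual connection keeps the original question representation intact.
        return self.norm(question_embeds + fused)

# Example with made-up sizes: 256-dim ECG features, a 2048-dim language model.
fusion = CrossModalFusion(ecg_dim=256, lm_dim=2048)
ecg = torch.randn(4, 16, 256)     # stand-in for frozen ECG encoder output
qtext = torch.randn(4, 32, 2048)  # stand-in for frozen LM token embeddings
print(fusion(ecg, qtext).shape)   # (4, 32, 2048), ready to feed into the frozen LM
```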
Experimental results demonstrate that under a 5-way 5-shot setting, the proposed method consistently outperforms conventional supervised baselines across different question types, showing strong generalization capabilities.
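
Concretely, this evaluation protocol can be read as: repeatedly sample an episode, let the model adapt on the 25 support examples (5 classes × 5 shots), and score it on the held-out query examples. The loop below sketches that protocol using the `sample_episode` helper from the earlier snippet; `learner.adapt` and `learner.predict` are placeholder methods standing in for whatever few-shot model is being evaluated.

```python
def evaluate_episodes(task_set, learner, n_episodes=100, n_way=5, k_shot=5):
    """Average query-set accuracy over many sampled episodes.

    `learner` is a placeholder with two illustrative methods:
    adapt(support) fits on the support set, and
    predict(ecg_path, question) returns an answer string.
    """
    accuracies = []
    for ep in range(n_episodes):
        support, query = sample_episode(task_set, n_way=n_way, k_shot=k_shot, seed=ep)
        learner.adapt(support)
        correct = sum(
            learner.predict(ecg_path, question) == answer
            for ecg_path, question, answer in query
        )
        accuracies.append(correct / len(query))
    return sum(accuracies) / len(accuracies)
```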
In summary, FSL_ECG_QA_Dataset serves as a powerful benchmark for developing robust and generalizable ECG-based QA systems in data-scarce clinical environments.
## QA Generation