nielsr (HF Staff) committed
Commit 827e8c0 · verified · 1 Parent(s): 7835ee7

Improve dataset card: Add task category, paper link, and GitHub link


This PR improves the dataset card by:
- Adding `audio-text-to-text` to the `task_categories` metadata to better reflect the multimodal and generative nature of the dataset (a quick way to verify the updated metadata is sketched after this list).
- Adding a prominent link to the Hugging Face paper page: https://huggingface.co/papers/2407.21054.
- Adding a direct link to the associated GitHub repository: https://github.com/leduckhai/Sentiment-Reasoning.
- Updating the "Dataset and Pre-trained Models" section to clarify that this repository is the dataset itself.
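For context, here is a minimal sketch of checking the updated metadata once this change is merged. It assumes the dataset lives at the Hub ID `leduckhai/Sentiment-Reasoning`, which is a guess; the card's own dataset link is still empty, so the actual path may differ.

```python
from huggingface_hub import DatasetCard

# Hypothetical repo ID -- the dataset link on the card is still empty,
# so the real Hub path may differ.
card = DatasetCard.load("leduckhai/Sentiment-Reasoning")

# After this PR, the list should include "audio-text-to-text".
print(card.data.task_categories)
```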

Files changed (1)
  1. README.md +11 -7
README.md CHANGED
````diff
@@ -1,16 +1,17 @@
 ---
-license: mit
-task_categories:
-- text-generation
-- text-classification
-- audio-classification
-- automatic-speech-recognition
 language:
 - vi
 - en
 - de
 - zh
 - fr
+license: mit
+task_categories:
+- text-generation
+- text-classification
+- audio-classification
+- automatic-speech-recognition
+- audio-text-to-text
 tags:
 - medical
 ---
@@ -30,6 +31,9 @@ tags:
 </p>
 <p align="center"><em>Sentiment Reasoning pipeline</em></p>
 
+* **Paper:** [Sentiment Reasoning for Healthcare](https://huggingface.co/papers/2407.21054)
+* **Code:** [https://github.com/leduckhai/Sentiment-Reasoning](https://github.com/leduckhai/Sentiment-Reasoning)
+
 * **Abstract:**
 Transparency in AI healthcare decision-making is crucial. By incorporating rationales to explain reason for each predicted label, users could understand Large Language Models (LLMs)’s reasoning to make better decision. In this work, we introduce a new task - **Sentiment Reasoning** - for both speech and text modalities, and our proposed multimodal multitask framework and **the world's largest multimodal sentiment analysis dataset**. Sentiment Reasoning is an auxiliary task in sentiment analysis where the model predicts both the sentiment label and generates the rationale behind it based on the input transcript. Our study conducted on both human transcripts and Automatic Speech Recognition (ASR) transcripts shows that Sentiment Reasoning helps improve model transparency by providing rationale for model prediction with quality semantically comparable to humans while also improving model's classification performance (**+2% increase in both accuracy and macro-F1**) via rationale-augmented fine-tuning. Also, no significant difference in the semantic quality of generated rationales between human and ASR transcripts. All code, data (**five languages - Vietnamese, English, Chinese, German, and French**) and models are published online.
 
@@ -47,7 +51,7 @@ Please cite this paper: [https://arxiv.org/abs/2407.21054](https://arxiv.org/abs
 ```
 
 ## Dataset and Pre-trained Models:
-[🤗 HuggingFace Dataset]()
+This repository contains the Hugging Face Dataset for Sentiment Reasoning for Healthcare.
 
 [🤗 HuggingFace Models]()
````
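With the "Dataset and Pre-trained Models" section now stating that this repository is the dataset itself, a minimal loading sketch follows. The repo ID is an assumption (the Hub dataset link on the card is still empty), so the actual path may differ.

```python
from datasets import load_dataset

# Hypothetical repo ID -- the card's dataset link is still empty,
# so the real Hub path may differ.
ds = load_dataset("leduckhai/Sentiment-Reasoning")

# Inspect the available splits and features.
print(ds)
```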