Datasets:
Tasks:
Text Classification
Modalities:
Text
Formats:
json
Languages:
English
Size:
10K - 100K
License:
cc-by-nc-sa-4.0
added dataset
Browse files
- README.md +161 -0
- earthquake/dev.json +0 -0
- earthquake/test.json +0 -0
- earthquake/train.json +0 -0
- fire/dev.json +0 -0
- fire/test.json +0 -0
- fire/train.json +0 -0
- flood/dev.json +0 -0
- flood/test.json +0 -0
- flood/train.json +0 -0
- hurricane/dev.json +0 -0
- hurricane/test.json +0 -0
- hurricane/train.json +0 -0
README.md
ADDED
@@ -0,0 +1,161 @@
---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- Disaster
- Crisis Informatics
pretty_name: 'HumAID: Human-Annotated Disaster Incidents Data from Twitter -- Event type dataset'
size_categories:
- 10K<n<100K
dataset_info:
- config_name: flood
  splits:
  - name: train
    num_examples: 7815
  - name: dev
    num_examples: 1137
  - name: test
    num_examples: 2214
- config_name: fire
  splits:
  - name: train
    num_examples: 7792
  - name: dev
    num_examples: 1134
  - name: test
    num_examples: 2207
- config_name: earthquake
  splits:
  - name: train
    num_examples: 6250
  - name: dev
    num_examples: 909
  - name: test
    num_examples: 1773
configs:
- config_name: flood
  data_files:
  - split: train
    path: flood/train.json
  - split: dev
    path: flood/dev.json
  - split: test
    path: flood/test.json
- config_name: fire
  data_files:
  - split: train
    path: fire/train.json
  - split: dev
    path: fire/dev.json
  - split: test
    path: fire/test.json
- config_name: earthquake
  data_files:
  - split: train
    path: earthquake/train.json
  - split: dev
    path: earthquake/dev.json
  - split: test
    path: earthquake/test.json
- config_name: hurricane
  data_files:
  - split: train
    path: hurricane/train.json
  - split: dev
    path: hurricane/dev.json
  - split: test
    path: hurricane/test.json
---
# HumAID: Human-Annotated Disaster Incidents Data from Twitter

## Dataset Description

- **Homepage:** https://crisisnlp.qcri.org/humaid_dataset
- **Repository:** https://crisisnlp.qcri.org/data/humaid/humaid_data_all.zip
- **Paper:** https://ojs.aaai.org/index.php/ICWSM/article/view/18116/17919

### Dataset Summary

The HumAID Twitter dataset consists of several thousand manually annotated tweets collected during 19 major natural disaster events, including earthquakes, hurricanes, wildfires, and floods, that occurred between 2016 and 2019 across different parts of the world. The annotations cover the humanitarian categories listed below. The dataset contains only English tweets and is the largest dataset for crisis informatics to date.
**Humanitarian categories**

- Caution and advice
- Displaced people and evacuations
- Don't know or can't judge
- Infrastructure and utility damage
- Injured or dead people
- Missing or found people
- Not humanitarian
- Other relevant information
- Requests or urgent needs
- Rescue, volunteering, or donation effort
- Sympathy and support

The resulting annotated dataset consists of 11 labels.
### Supported Tasks and Benchmark

The dataset can be used to train a model for multiclass tweet classification for disaster response. Benchmark results are reported in https://ojs.aaai.org/index.php/ICWSM/article/view/18116/17919.

The dataset is also released event-wise as JSON objects for further research.
The full dataset can be found at https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/A7NVF7
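As a toy illustration of the multiclass setup (not the benchmark models from the paper), the sketch below maps a tweet to one of the 11 class labels with hypothetical keyword rules. The snake_case label strings are assumed from the category names above; only `injured_or_dead_people` is confirmed by the example instance in this card.

```python
# The 11 HumAID class labels, assuming snake_case forms of the
# humanitarian categories listed above.
LABELS = [
    "caution_and_advice",
    "displaced_people_and_evacuations",
    "dont_know_cant_judge",
    "infrastructure_and_utility_damage",
    "injured_or_dead_people",
    "missing_or_found_people",
    "not_humanitarian",
    "other_relevant_information",
    "requests_or_urgent_needs",
    "rescue_volunteering_or_donation_effort",
    "sympathy_and_support",
]

# Hypothetical keyword rules, for illustration only; a real system
# would train a supervised model on the train split.
KEYWORDS = {
    "injured_or_dead_people": ("death", "dead", "injured", "toll"),
    "rescue_volunteering_or_donation_effort": ("donate", "rescue", "volunteer"),
}

def classify(tweet_text: str) -> str:
    """Return the first label whose keywords appear in the tweet."""
    text = tweet_text.lower()
    for label, words in KEYWORDS.items():
        if any(w in text for w in words):
            return label
    return "other_relevant_information"  # fallback label

print(classify("URGENT: Death toll in #Ecuador #quake rises to 233"))
# -> injured_or_dead_people
```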
### Languages

English
## Dataset Structure

### Data Instances

```
{
    "tweet_text": "@RT_com: URGENT: Death toll in #Ecuador #quake rises to 233 \u2013 President #Correa #1 in #Pakistan",
    "class_label": "injured_or_dead_people"
}
```

### Data Fields

* tweet_text: the text of the tweet.
* class_label: the label assigned to the given tweet text.
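A minimal reading sketch for these two fields, assuming the split files store one JSON object per line (JSON Lines); adjust the parsing if the files turn out to be single JSON arrays.

```python
import json

def read_split(lines):
    """Yield (tweet_text, class_label) pairs from an iterable of JSON lines."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        yield record["tweet_text"], record["class_label"]

# In-memory example mirroring the data instance shown above;
# in practice you would pass open("flood/train.json") instead.
sample = ['{"tweet_text": "Death toll rises to 233", "class_label": "injured_or_dead_people"}']
pairs = list(read_split(sample))
print(pairs[0][1])  # -> injured_or_dead_people
```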
### Data Splits

* Train
* Development
* Test
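From the split sizes declared in the frontmatter, each event-type config follows roughly a 70/10/20 train/dev/test split, which the short check below verifies:

```python
# Split sizes copied from the dataset_info frontmatter above
# (the hurricane counts are not listed there, so it is omitted).
SPLITS = {
    "flood":      {"train": 7815, "dev": 1137, "test": 2214},
    "fire":       {"train": 7792, "dev": 1134, "test": 2207},
    "earthquake": {"train": 6250, "dev": 909,  "test": 1773},
}

for config, sizes in SPLITS.items():
    total = sum(sizes.values())
    shares = {name: round(n / total, 2) for name, n in sizes.items()}
    print(config, total, shares)
```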
## Dataset Creation

Tweets were collected during several disaster events.

### Annotations

Amazon Mechanical Turk (AMT) was used to annotate the dataset. Please check the paper for more details.

#### Who are the annotators?

- crowdsourced
### Licensing Information

- cc-by-nc-sa-4.0
### Citation Information

```
@inproceedings{humaid2020,
  author = {Firoj Alam and Umair Qazi and Muhammad Imran and Ferda Ofli},
  booktitle = {Proceedings of the Fifteenth International AAAI Conference on Web and Social Media},
  series = {ICWSM~'21},
  keywords = {Social Media, Crisis Computing, Tweet Text Classification, Disaster Response},
  title = {HumAID: Human-Annotated Disaster Incidents Data from Twitter},
  year = {2021},
  publisher = {AAAI},
  address = {Online},
}
```
The following data files were ADDED (diffs too large to render; see raw files):

- earthquake/dev.json
- earthquake/test.json
- earthquake/train.json
- fire/dev.json
- fire/test.json
- fire/train.json
- flood/dev.json
- flood/test.json
- flood/train.json
- hurricane/dev.json
- hurricane/test.json
- hurricane/train.json