---
task_categories:
- feature-extraction
pretty_name: HPLT2-embeddings
size_categories:
- n>1T
language:
- sq
- bg
- ca
- cs
- da
- de
- es
- et
- el
- eu
- fi
- fr
- gl
- ga
- hr
- hu
- hy
- is
- it
- lv
- lt
- mk
- nl
- pl
- pt
- ro
- sl
- sk
- sr
- tr
- sv
- nb
- nn
- uk
configs:
- config_name: als_Latn
  data_files:
  - split: train
    path: als_Latn/*
- config_name: bul_Cyrl
  data_files:
  - split: train
    path: bul_Cyrl/*
- config_name: cat_Latn
  data_files:
  - split: train
    path: cat_Latn/*
- config_name: ces_Latn
  data_files:
  - split: train
    path: ces_Latn/*
- config_name: dan_Latn
  data_files:
  - split: train
    path: dan_Latn/*
- config_name: deu_Latn
  data_files:
  - split: train
    path: deu_Latn/*
- config_name: ekk_Latn
  data_files:
  - split: train
    path: ekk_Latn/*
- config_name: ell_Grek
  data_files:
  - split: train
    path: ell_Grek/*
- config_name: eus_Latn
  data_files:
  - split: train
    path: eus_Latn/*
- config_name: fin_Latn
  data_files:
  - split: train
    path: fin_Latn/*
- config_name: fra_Latn
  data_files:
  - split: train
    path: fra_Latn/*
- config_name: gle_Latn
  data_files:
  - split: train
    path: gle_Latn/*
- config_name: glg_Latn
  data_files:
  - split: train
    path: glg_Latn/*
- config_name: hrv_Latn
  data_files:
  - split: train
    path: hrv_Latn/*
- config_name: hun_Latn
  data_files:
  - split: train
    path: hun_Latn/*
- config_name: hye_Armn
  data_files:
  - split: train
    path: hye_Armn/*
- config_name: isl_Latn
  data_files:
  - split: train
    path: isl_Latn/*
- config_name: ita_Latn
  data_files:
  - split: train
    path: ita_Latn/*
- config_name: lit_Latn
  data_files:
  - split: train
    path: lit_Latn/*
- config_name: lvs_Latn
  data_files:
  - split: train
    path: lvs_Latn/*
- config_name: mkd_Cyrl
  data_files:
  - split: train
    path: mkd_Cyrl/*
- config_name: nld_Latn
  data_files:
  - split: train
    path: nld_Latn/*
- config_name: nno_Latn
  data_files:
  - split: train
    path: nno_Latn/*
- config_name: nob_Latn
  data_files:
  - split: train
    path: nob_Latn/*
- config_name: pol_Latn
  data_files:
  - split: train
    path: pol_Latn/*
- config_name: por_Latn
  data_files:
  - split: train
    path: por_Latn/*
- config_name: ron_Latn
  data_files:
  - split: train
    path: ron_Latn/*
- config_name: slk_Latn
  data_files:
  - split: train
    path: slk_Latn/*
- config_name: slv_Latn
  data_files:
  - split: train
    path: slv_Latn/*
- config_name: spa_Latn
  data_files:
  - split: train
    path: spa_Latn/*
- config_name: srp_Cyrl
  data_files:
  - split: train
    path: srp_Cyrl/*
- config_name: swe_Latn
  data_files:
  - split: train
    path: swe_Latn/*
- config_name: tur_Latn
  data_files:
  - split: train
    path: tur_Latn/*
- config_name: ukr_Cyrl
  data_files:
  - split: train
    path: ukr_Cyrl/*
---
# HPLT2-embeddings

## Dataset summary

HPLT2-embeddings is an extension of the [**HPLT2**](https://hplt-project.org/datasets/v2.0) dataset, annotated with **document-level** [**Snowflake's Arctic-embed-m-v2.0**](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0) **embeddings** for **35 languages**, making the dataset **useful for a variety of tasks**, including document clustering, filtering, and other multilingual research.

Snowflake-arctic-embed-m-v2.0 has a sequence length limit of 8,192 tokens; each document's embedding is the model's CLS-token representation of that document.
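
As an illustration, here is a minimal sketch of how such CLS-token document embeddings can be computed with `transformers`. This is an assumption about the setup rather than the exact pipeline used to build this dataset, and depending on your `transformers` version the model may require `trust_remote_code=True`:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "Snowflake/snowflake-arctic-embed-m-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, add_pooling_layer=False)
model.eval()

documents = ["Ein kurzes Beispieldokument.", "A short example document."]

# Batch-encode, truncating at the model's 8192-token limit.
batch = tokenizer(documents, padding=True, truncation=True,
                  max_length=8192, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Take the CLS token (first position) as the document embedding and
# L2-normalize it so that dot products equal cosine similarities.
embeddings = outputs.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (2, 768) for the m-sized model
```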

The embeddings were computed as part of our [**🦊 JQL: Judging Quality across Languages**](https://huggingface.co/spaces/JQL-AI/JQL) project and will be the basis for an upcoming high-quality subset of HPLT2.
We believe that they can be useful for other multilingual research and applications.

For more details, see our paper [Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models](https://arxiv.org/abs/2505.22232).

## Usage

You can load a single shard in Python using, for example, h5py and pandas:

```python
import h5py
import pandas as pd

# Path to your .h5 file
file_path = "000_001_00000.h5"  # <-- Replace with your actual file path

# Open the HDF5 file and load data
with h5py.File(file_path, "r") as f:
    # Load the embeddings and document IDs from the "train" group
    embeddings = f["train/embeddings"][:]
    document_ids = f["train/document_id"][:]

# Convert document IDs from bytes (if needed)
if isinstance(document_ids[0], bytes):
    document_ids = [doc_id.decode("utf-8") for doc_id in document_ids]

# Optionally: create a DataFrame (only if the embeddings fit in RAM)
df = pd.DataFrame(embeddings)
df.insert(0, "document_id", document_ids)  # Add document_id as the first column

# Preview the DataFrame
print(df.head())
print(f"Loaded {len(df)} rows of {embeddings.shape[1]}-dimensional embeddings.")
```
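
To work with an entire language rather than a single file, you can first download only that language's shards and then iterate over them. Below is a minimal sketch assuming the shards live in per-config directories matching the config names above (e.g. `deu_Latn/`); the repo id is a placeholder, so substitute this dataset's actual repository id:

```python
from pathlib import Path

import h5py
from huggingface_hub import snapshot_download

# Placeholder repo id: replace with this dataset's actual repository id.
repo_id = "JQL-AI/HPLT2-embeddings"

# Download only the shards of one language config.
local_dir = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    allow_patterns="deu_Latn/*",
)

# Visit the shards one at a time instead of loading everything into RAM.
for shard in sorted(Path(local_dir, "deu_Latn").glob("*.h5")):
    with h5py.File(shard, "r") as f:
        embeddings = f["train/embeddings"]  # h5py dataset; sliced lazily
        document_ids = f["train/document_id"]
        print(shard.name, embeddings.shape)
```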
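
As a small illustration of the clustering and filtering use cases mentioned above, here is a sketch of a nearest-neighbour lookup by cosine similarity. It reuses the `embeddings` and `document_ids` arrays from the first snippet and assumes the shard fits in memory:

```python
import numpy as np

emb = np.asarray(embeddings, dtype=np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize for cosine similarity

query_idx = 0                   # any document can serve as the query
scores = emb @ emb[query_idx]   # cosine similarity to every document
top = np.argsort(-scores)[1:6]  # five nearest neighbours, skipping the query itself

for i in top:
    print(document_ids[i], float(scores[i]))
```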

## Origin of the Dataset

This dataset, derived from HPLT2, includes web content collected from 2013 to 2024. As HPLT2 is sourced from the broader internet, it may contain some personally identifiable information (PII), despite efforts to anonymize email addresses and public IP addresses during processing.

## Considerations for Data Usage

For information on social impact, potential biases, and known limitations, please refer to the [HPLT2 documentation](https://hplt-project.org/datasets/v2.0).

## Citation information

If you use this dataset in your research or applications, please use the following citation:

```bibtex
@article{ali2025judging,
  title   = {Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models},
  author  = {Mehdi Ali and Manuel Brack and Max Lübbering and Elias Wendt and Abbas Goher Khan and Richard Rutmann and Alex Jude and Maurice Kraus and Alexander Arno Weber and Felix Stollenwerk and David Kaczér and Florian Mai and Lucie Flek and Rafet Sifa and Nicolas Flores-Herr and Joachim Köhler and Patrick Schramowski and Michael Fromm and Kristian Kersting},
  year    = {2025},
  journal = {arXiv preprint arXiv:2505.22232}
}
```