update readme
Browse files
- .gitignore +1 -0
- README.md +105 -0
.gitignore
ADDED
@@ -0,0 +1 @@
+.venv
README.md
CHANGED
@@ -1,3 +1,108 @@
---
license: apache-2.0
language:
- en
pretty_name: HackerNews stories dataset
dataset_info:
  config_name: default
  features:
  - name: id
    dtype: int64
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: author
    dtype: string
  - name: markdown
    dtype: string
  - name: downloaded
    dtype: bool
  - name: meta_extracted
    dtype: bool
  - name: parsed
    dtype: bool
  - name: description
    dtype: string
  - name: filedate
    dtype: string
  - name: date
    dtype: string
  - name: image
    dtype: string
  - name: pagetype
    dtype: string
  - name: hostname
    dtype: string
  - name: sitename
    dtype: string
  - name: tags
    dtype: string
  - name: categories
    dtype: string
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.jsonl.zst
---

# A HackerNews Stories dataset

This dataset is based on the [nixiesearch/hackernews-comments](https://huggingface.co/datasets/nixiesearch/hackernews-comments) dataset:

* for each item of `type=story`, we downloaded the target URL; out of ~3.8M stories, ~2.1M are still reachable
* each story's HTML was parsed with the [trafilatura](https://trafilatura.readthedocs.io) library, as sketched below
* we store the article text in `markdown` format along with all page-specific metadata
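
A minimal sketch of that per-story step, assuming a recent trafilatura release (markdown output requires trafilatura >= 1.9); the actual scraping pipeline is not published with this dataset:

```python
# Illustrative sketch only, not the published pipeline code.
import trafilatura

url = "https://www.eff.org/deeplinks/2015/01/internet-sen-ron-wyden-were-counting-you-oppose-fast-track-tpp"

html = trafilatura.fetch_url(url)  # None when the page is no longer reachable
downloaded = html is not None

if downloaded:
    # main article text with navigation/footer boilerplate stripped
    markdown = trafilatura.extract(html, output_format="markdown")
    # page-level metadata: title, author, date, hostname, sitename, ...
    meta = trafilatura.extract_metadata(html)
    print(meta.title if meta else None)
```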

## Dataset stats

* date coverage: xx.2006-09.2024, the same as the upstream [nixiesearch/hackernews-comments](https://huggingface.co/datasets/nixiesearch/hackernews-comments) dataset
* total scraped pages: 2,150,271 (around 55% of the original dataset)
* unpacked size: ~20GB of text

## Usage

The dataset is available as a set of ZSTD-compressed JSONL files, one JSON record per story:
```json
{
  "id": 8961943,
  "url": "https://www.eff.org/deeplinks/2015/01/internet-sen-ron-wyden-were-counting-you-oppose-fast-track-tpp",
  "title": "Digital Rights Groups to Senator Ron Wyden: We're Counting on You to Oppose Fast Track for the TPP",
  "author": "Maira Sutton",
  "markdown": "Seven leading US digital rights and access to knowledge groups, ...",
  "downloaded": true,
  "meta_extracted": true,
  "parsed": true,
  "description": "Seven leading US digital rights and access to knowledge groups, and over 7,550 users, have called on Sen. Wyden today to oppose any new version of Fast Track (aka trade promotion authority) that does not fix the secretive, corporate-dominated process of trade negotiations. In particular, we urge...",
  "filedate": "2024-10-13",
  "date": "2015-01-27",
  "image": "https://www.eff.org/files/issues/fair-use-og-1.png",
  "pagetype": "article",
  "hostname": "eff.org",
  "sitename": "Electronic Frontier Foundation",
  "categories": null,
  "tags": null
}
```
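
The shards can also be read without any Hugging Face tooling, using the `zstandard` package and the standard-library `json` module. A sketch (the shard file name below is hypothetical; the real files match `data/*.jsonl.zst`):

```python
import io
import json

import zstandard

# Hypothetical shard name; list the repo's data/ directory for the real files.
with open("data/part-0000.jsonl.zst", "rb") as fh:
    reader = zstandard.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        story = json.loads(line)
        if story["parsed"]:
            print(story["id"], story["title"])
```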

The `id` field matches the `id` field in the upstream [nixiesearch/hackernews-comments](https://huggingface.co/datasets/nixiesearch/hackernews-comments) dataset, so records can be joined across the two.
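
As an illustration, a sketch of such a join with the `datasets` library; it assumes the upstream dataset also exposes a `train` split with an `id` column, and it materializes both splits, so it is not frugal with memory:

```python
from datasets import load_dataset

stories = load_dataset("nixiesearch/hackernews-stories", split="train")
items = load_dataset("nixiesearch/hackernews-comments", split="train")  # split name assumed

# ids of the HN stories whose pages were scraped successfully
story_ids = set(stories["id"])

# keep only upstream items that have a scraped page in this dataset
scraped = items.filter(lambda row: row["id"] in story_ids)
```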

You can also load the dataset with the Hugging Face `datasets` library:

```shell
pip install datasets zstandard
```

and then:

```python
from datasets import load_dataset

stories = load_dataset("nixiesearch/hackernews-stories", split="train")
print(stories[0])
```
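
At ~20GB unpacked you may not want to materialize the whole split at once; the `datasets` library can also stream it:

```python
from datasets import load_dataset

# Iterate over records without downloading every shard up front.
stories = load_dataset("nixiesearch/hackernews-stories", split="train", streaming=True)
for story in stories:
    print(story["title"])
    break
```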

## License

Apache License 2.0