---
license: apache-2.0
language:
- en
pretty_name: HackerNews comments dataset
dataset_info:
  config_name: default
  features:
  - name: id
    dtype: int64
  - name: deleted
    dtype: bool
  - name: type
    dtype: string
  - name: by
    dtype: string
  - name: time
    dtype: int64
  - name: text
    dtype: string
  - name: dead
    dtype: bool
  - name: parent
    dtype: int64
  - name: poll
    dtype: int64
  - name: kids
    sequence: int64
  - name: url
    dtype: string
  - name: score
    dtype: int64
  - name: title
    dtype: string
  - name: parts
    sequence: int64
  - name: descendants
    dtype: int64
configs:
- config_name: default
  data_files:
  - split: train
    path: items/*.jsonl.zst
---
# HackerNews Comments Dataset
A dataset of all HN API items from id=0 to id=41422887, covering 2006 through 02 Sep 2024. The dataset was built by scraping the HN API according to its official schema and docs. The scraper code is also available on GitHub: [nixiesearch/hnscrape](https://github.com/nixiesearch/hnscrape).
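
For reference, each record in the dataset is one response from the public HN items endpoint. A minimal sketch of fetching a single item (this is an illustration of the API shape, not the actual hnscrape code):

```python
import json
import urllib.request

# Public HN API endpoint for a single item (see the official HN API docs).
HN_ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{id}.json"

def fetch_item(item_id: int) -> dict:
    """Fetch one item as raw JSON, the same shape stored in this dataset."""
    with urllib.request.urlopen(HN_ITEM_URL.format(id=item_id)) as resp:
        return json.load(resp)

print(fetch_item(46))  # the example payload shown below
```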
## Dataset contents
No cleaning, validation, or filtering was performed: the data files are raw JSON API responses stored as zstd-compressed JSONL. An example payload:
```json
{
  "by": "goldfish",
  "descendants": 0,
  "id": 46,
  "score": 4,
  "time": 1160581168,
  "title": "Rentometer: Check How Your Rent Compares to Others in Your Area",
  "type": "story",
  "url": "http://www.rentometer.com/"
}
```
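
Because the shards are plain zstd-compressed JSONL, they can also be read without the Datasets library. A minimal sketch, assuming one shard has been downloaded locally (the filename below is hypothetical; real shards match the `items/*.jsonl.zst` pattern from the config):

```python
import io
import json
import zstandard

# Hypothetical local path; substitute any downloaded shard.
path = "items/00000.jsonl.zst"

with open(path, "rb") as raw:
    # Stream-decompress and decode line by line instead of loading it all.
    reader = zstandard.ZstdDecompressor().stream_reader(raw)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        item = json.loads(line)
        print(item["id"], item.get("type"))
        break  # just show the first item
```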
## Usage
You can load this dataset directly with the Hugging Face Datasets library. Install the dependencies first (zstandard is needed to decompress the data files):
```bash
pip install datasets zstandard
```
```python
from datasets import load_dataset

ds = load_dataset("nixiesearch/hackernews-comments", split="train")
print(ds.features)
```
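
Since the full dump contains over 41 million items, streaming mode avoids downloading and materializing everything up front. A minimal sketch filtering for comment items (the `type` and `text` fields follow the HN schema above; the fallback for missing text is an assumption about deleted items, not part of the official docs):

```python
import itertools
from datasets import load_dataset

# Stream instead of fetching the full dump before iterating.
ds = load_dataset("nixiesearch/hackernews-comments", split="train", streaming=True)

# Keep only comment items; `type` comes from the HN API schema above.
comments = (item for item in ds if item["type"] == "comment")

for item in itertools.islice(comments, 3):
    # `text` may be empty for deleted/dead comments, hence the fallback.
    print(item["id"], (item["text"] or "")[:80])
```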
## License
Apache License 2.0.