---
license: cc-by-nc-sa-4.0
---
# HypotheSAEs

HypotheSAEs is a method that generates interpretable hypotheses about text datasets by training Sparse Autoencoders (SAEs) on foundation model representations of the text.
Paper: https://arxiv.org/abs/2502.04382
Code: https://github.com/rmovva/HypotheSAEs
This data repo contains all of the datasets used for the experiments in the paper.
## Overview
In total, we use five datasets to evaluate HypotheSAEs.
We evaluate on two synthetic datasets, where the goal is to recover a list of frequent topics: documents belonging to those topics are pseudo-labeled '1', and all other documents are labeled '0'. Topics are granular and human-annotated.
- Wiki (from Merity et al., 2016): a dataset of Wikipedia articles.
- Bills (from Hoyle et al., 2022): a dataset of Congressional bill texts.
We evaluate on three real-world datasets, where the goal is to generate interpretable hypotheses that predict the target variable (a minimal loading sketch follows this list):
- Headlines (from Upworthy Research Archive): a dataset of article headlines and their corresponding online engagement levels. Each example is a pair of headlines which were randomized against each other; the target variable is whether headline A received more clicks than headline B.
- Yelp (from Yelp Open Dataset): a dataset of restaurant reviews; the target variable is the rating score (1-5).
- Congress (from Gentzkow and Shapiro, 2010): a dataset of U.S. Congressional speeches; the target variable is political party (Republican or Democrat).
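As a concrete starting point, here is a loading sketch. The filename `wiki.csv` is a hypothetical placeholder (substitute the actual data file from this repo); `label_synthetic` is the label column used for the synthetic tasks described above.

```python
import pandas as pd

# Hypothetical filename; substitute the actual data file from this repo.
df = pd.read_csv("wiki.csv")

# Synthetic tasks use a binary label_synthetic column:
# 1 = the document belongs to one of the frequent target topics, 0 = otherwise.
print(df["label_synthetic"].value_counts(normalize=True))
```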
## Preprocessing Information

### Wiki
We use the Wikipedia dataset processed by Pham et al. (2024), derived from WikiText (Merity et al., 2016). Wikipedia articles are categorized into supercategories, categories, and subcategories (e.g., Media and Drama > Television > The Simpsons Episodes). We focus on predicting subcategories, as they are the most specific and challenging to recover. After removing duplicates and infrequent subcategories (<100 articles), the dataset contains 11,979 articles spanning 69 subcategories.
- Wiki-5: Articles in the 5 most common subcategories are labeled as positives (1,771 articles, 14.8%) in the `label_synthetic` column; the rest are negatives (see the sketch below).
- Wiki-15: Similar, but for the 15 most common subcategories (4,190 positives, 35.0%).
### Bills
We use the Congressional bills dataset collected by Hoyle et al. (2022) and processed by Pham et al. (2024), originally sourced from GovTrack. The dataset consists of bills from the 110th-114th U.S. Congresses (2007-2017), each annotated with a topic and subtopic.
- Removed short bill texts (possible transcription errors) and duplicate bill texts.
- Excluded bills without a subtopic or with infrequent subtopics (<100 occurrences of the subtopic).
- Final dataset: 20,834 bills spanning 70 subtopics.
- The 5 most common subtopics are labeled as positives in the `label_synthetic` column (24.2%); the rest are negatives. These steps are sketched below.
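A minimal sketch of the filtering and labeling steps, assuming a DataFrame with `text` and `subtopic` columns (the names and the short-text cutoff are assumptions for illustration):

```python
import pandas as pd

MIN_CHARS = 100  # assumed placeholder; the exact short-text cutoff is not specified above

bills = pd.read_csv("bills.csv")  # hypothetical filename

# Remove short texts (possible transcription errors) and duplicate texts.
bills = bills[bills["text"].str.len() >= MIN_CHARS]
bills = bills.drop_duplicates(subset="text")

# Exclude bills without a subtopic or with a subtopic occurring <100 times.
bills = bills.dropna(subset=["subtopic"])
counts = bills["subtopic"].value_counts()
bills = bills[bills["subtopic"].isin(counts[counts >= 100].index)]

# Label the 5 most common subtopics as positives.
top_5 = bills["subtopic"].value_counts().nlargest(5).index
bills["label_synthetic"] = bills["subtopic"].isin(top_5).astype(int)
```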
### Headlines
We use the Upworthy Research Archive (Matias, 2021), which contains web traffic data from Upworthy.com, a digital media platform focused on high-engagement articles. The dataset includes thousands of A/B tests, where multiple headline variations were tested for the same article.
- Grouped headlines by `test_id`, ensuring that only pairs of headlines randomized against each other are included.
- Constructed 14,128 headline pairs with click-through rate (CTR) labels.
- The binary `label_pairwise` column indicates whether headline A had a higher CTR than headline B (pair construction is sketched below).
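A sketch of the pair construction, assuming per-headline rows with `test_id`, `headline`, `clicks`, and `impressions` columns (these names are assumptions based on the Upworthy archive's structure; the actual preprocessing may differ, e.g., in how ties are handled):

```python
import itertools
import pandas as pd

tests = pd.read_csv("upworthy.csv")  # hypothetical filename
tests["ctr"] = tests["clicks"] / tests["impressions"]

pairs = []
for test_id, group in tests.groupby("test_id"):
    # Only headlines within the same A/B test were randomized against each other.
    for a, b in itertools.combinations(group.itertuples(), 2):
        pairs.append({
            "headline_a": a.headline,
            "headline_b": b.headline,
            # 1 if headline A had the higher CTR; ties counted as 0 here (an assumption).
            "label_pairwise": int(a.ctr > b.ctr),
        })

pairs_df = pd.DataFrame(pairs)
```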
### Yelp
We extract 4.72M restaurant reviews from the Yelp Open Dataset by filtering for businesses tagged as "Restaurant".
- We sampled 200K training, 10K validation, and 10K heldout examples (see the sketch below).
- The target variable (`stars`) is the star rating (1-5).
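A sketch of the filtering and sampling, assuming the standard Yelp Open Dataset JSON files (the filenames and the fixed random seed are assumptions):

```python
import pandas as pd

# Filenames as shipped in the Yelp Open Dataset; adjust paths as needed.
business = pd.read_json("yelp_academic_dataset_business.json", lines=True)
reviews = pd.read_json("yelp_academic_dataset_review.json", lines=True)

# Keep reviews of businesses whose category tags include "Restaurant".
restaurant_ids = business.loc[
    business["categories"].str.contains("Restaurant", na=False), "business_id"
]
reviews = reviews[reviews["business_id"].isin(restaurant_ids)]

# Sample disjoint train / validation / heldout splits.
shuffled = reviews.sample(frac=1.0, random_state=0)
train = shuffled[:200_000]
val = shuffled[200_000:210_000]
heldout = shuffled[210_000:220_000]
```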
### Congress
We use the Congressional speech dataset from the 109th U.S. Congress (2005-2007) (Gentzkow & Shapiro, 2010), containing speech transcripts labeled by speaker party (Republican or Democrat).
- Removed short speeches (<250 characters) and split long speeches into chunks of ten sentences each (chunking sketched below).
- The heldout set includes at most one chunk per speech, to increase speech diversity when evaluating hypothesis generalization.
- Final dataset: 114K training, 16K validation, 12K heldout.
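A sketch of the chunking step; the regex-based sentence splitter here is a stand-in for whatever sentence tokenizer was actually used, and the filename is hypothetical:

```python
import re
import pandas as pd

speeches = pd.read_csv("congress_speeches.csv")  # hypothetical filename

# Remove short speeches (<250 characters).
speeches = speeches[speeches["text"].str.len() >= 250]

def chunk_speech(text: str, sentences_per_chunk: int = 10) -> list[str]:
    """Split a speech into consecutive chunks of ten sentences each."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [
        " ".join(sentences[i : i + sentences_per_chunk])
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

# One row per chunk, carrying over the speech's party label.
chunks = speeches.assign(chunk=speeches["text"].map(chunk_speech)).explode("chunk")
```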