---
license: cc
language:
- en
---
# Dataset Card for PS-Eval Dataset
## Dataset Summary
The **PS-Eval Dataset** is a suite of polysemous and monosemous context pairs extracted and filtered from the WiC dataset. It aims to evaluate the ability of Sparse Autoencoders (SAEs) to disentangle polysemantic activations into monosemantic features within large language models (LLMs). The dataset contains **1,112 samples** balanced between two classes:
- **Poly-contexts**: Target words with different meanings across two contexts (Label: 0).
- **Mono-contexts**: Target words with the same meaning across two contexts (Label: 1).
Each sample includes two sentences (contexts) containing the target word, along with a label indicating whether the target word's meaning is the same or different.
This dataset is particularly useful for evaluating methods and models that address polysemy in LLMs, such as feature-based interpretability techniques.
## Supported Tasks and Leaderboards
- **Polysemy Detection**: Classify whether the target word has the same or different meaning across contexts.
- **Feature Interpretability**: Evaluate whether Sparse Autoencoders (SAEs) can map polysemantic activations into monosemantic features.
This dataset can also serve as a benchmark for **context-sensitive word representations**.
## Languages
The dataset is in **English**.
## Dataset Structure
### Data Instances
Each instance in the dataset is stored in JSON format with the following structure:
```json
{
  "id": "EN_22",
  "context_1": "They stopped at an open space in the jungle.",
  "context_2": "The astronauts walked in outer space without a tether.",
  "target_word": "space",
  "pos": "N",
  "target_word_location_1": {
    "char_start": 24,
    "char_end": 29
  },
  "target_word_location_2": {
    "char_start": 31,
    "char_end": 36
  },
  "language": "EN",
  "label": 0
}
```
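The `char_start`/`char_end` offsets are half-open, so slicing the context string with them recovers the target word exactly. A quick consistency check on the instance above (plain Python, no extra dependencies):

```python
sample = {
    "context_1": "They stopped at an open space in the jungle.",
    "context_2": "The astronauts walked in outer space without a tether.",
    "target_word": "space",
    "target_word_location_1": {"char_start": 24, "char_end": 29},
    "target_word_location_2": {"char_start": 31, "char_end": 36},
}

for i in (1, 2):
    loc = sample[f"target_word_location_{i}"]
    span = sample[f"context_{i}"][loc["char_start"]:loc["char_end"]]
    assert span == sample["target_word"]  # both slices yield "space"
```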
### Data Fields
- **`id`** (*string*): Unique identifier for the sample (e.g., `"EN_22"`).
- **`context_1`** (*string*): The first sentence containing the target word.
- **`context_2`** (*string*): The second sentence containing the target word.
- **`target_word`** (*string*): The polysemous or monosemous word shared across the two contexts.
- **`pos`** (*string*): Part-of-speech tag of the target word (e.g., `"N"` for noun).
- **`target_word_location_1`**, **`target_word_location_2`** (*dict*): Character offsets (`char_start`, `char_end`) of the target word in each context.
- **`language`** (*string*): Language code of the contexts (`"EN"`).
- **`label`** (*integer*): Binary label where:
  - `0` = Different meanings (**poly-contexts**).
  - `1` = Same meaning (**mono-contexts**).
### Data Splits
The dataset is provided as a single split with **1,112 samples**:
- **Poly-contexts (Label 0)**: 556 samples
- **Mono-contexts (Label 1)**: 556 samples
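The balance can be verified directly after loading (a minimal sketch using the `datasets` library, with the repository ID from the usage example below):

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("gouki510/wic_eval_data")
print(Counter(dataset["train"]["label"]))  # expected: Counter({0: 556, 1: 556})
```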
## Dataset Creation
### Source Data
The PS-Eval dataset is built on top of the **WiC (Word-in-Context) dataset**, a rich resource of polysemous words introduced by [Pilehvar and Camacho-Collados (2019)](https://arxiv.org/abs/1808.09121).
### Filtering Process
We carefully selected instances from WiC where the target word is tokenized as a **single token** by GPT-2-small, so that each target word corresponds to exactly one activation position. This ensures consistency when analyzing activations in Sparse Autoencoders.
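A minimal sketch of such a filter, assuming the Hugging Face `transformers` GPT-2 tokenizer (the actual filtering code used to build PS-Eval may differ):

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def is_single_token(word: str) -> bool:
    # GPT-2's BPE folds a leading space into the token, so test the word
    # as it appears mid-sentence (preceded by a space).
    return len(tokenizer.encode(" " + word)) == 1

print(is_single_token("space"))  # True: " space" is one GPT-2 token
```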
### Annotations
Labels are derived from the WiC dataset:
- **Different meanings**: Target words in poly-contexts (Label 0).
- **Same meaning**: Target words in mono-contexts (Label 1).
## Dataset Usage
### Intended Use
This dataset is designed for evaluating models and methods that:
- Analyze polysemantic and monosemantic activations in LLMs.
- Detect context-sensitive meanings of polysemous words.
- Test Sparse Autoencoders (SAEs) for interpretability.
### Example Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("gouki510/wic_eval_data")
# Inspect a sample
print(dataset["train"][0])
```
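The two classes can then be separated with the standard `datasets` filter API:

```python
# Split the data into poly- and mono-context subsets by label
poly = dataset["train"].filter(lambda ex: ex["label"] == 0)  # different meanings
mono = dataset["train"].filter(lambda ex: ex["label"] == 1)  # same meaning
```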
### Metrics
The dataset supports evaluation metrics such as:
- **Accuracy**
- **Precision**
- **Recall**
- **F1 Score**
- **Specificity**
These metrics are particularly important for evaluating polysemy detection models and Sparse Autoencoders.
For implementation details of the evaluation metrics, please refer to the GitHub repository: **[link_to_your_repo]**.
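As a rough illustration (not the repository's implementation), the metrics can be computed with scikit-learn, treating label `1` (mono) as the positive class; specificity has no built-in scorer, so it is derived from the confusion matrix. The `y_true`/`y_pred` arrays below are placeholders:

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = [0, 0, 1, 1]  # gold labels: 0 = poly-context, 1 = mono-context
y_pred = [0, 1, 1, 1]  # model predictions (placeholder values)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Precision:  ", precision_score(y_true, y_pred))
print("Recall:     ", recall_score(y_true, y_pred))
print("F1 Score:   ", f1_score(y_true, y_pred))
print("Specificity:", tn / (tn + fp))  # true-negative rate
```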
## Dataset Curators
This dataset was curated by **Gouki Minegishi** as part of research on polysemantic activation analysis in Sparse Autoencoders and interpretability for large language models.
## Citation
If you use the PS-Eval Dataset in your work, please cite:
```
@inproceedings{minegishi2024ps-eval,
  title={Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words},
  author={Gouki Minegishi and Hiroki Furuta and Yusuke Iwasawa and Yutaka Matsuo},
  year={2024},
  url={hoge}
}
```