Update README.md

README.md CHANGED
@@ -3,7 +3,7 @@ license: cc
 language:
 - en
 ---
-
+
 # Dataset Card for PS-Eval Dataset
 
 ## Dataset Summary
@@ -64,7 +64,7 @@ The dataset is provided as a single split with **1,112 samples**:
 
 ## Dataset Creation
 ### Source Data
-The PS-Eval dataset is built on top of the **WiC Dataset** (Word-in-Context) – a rich resource for polysemous words originally introduced in
+The PS-Eval dataset is built on top of the **WiC Dataset** (Word-in-Context) – a rich resource for polysemous words originally introduced in [Pilehvar and Camacho-Collados (2019)](https://arxiv.org/abs/1808.09121).
 
 ### Filtering Process
 We carefully selected instances from WiC where the target word is tokenized as a **single token** in GPT-2-small. This ensures consistency when analyzing activations in Sparse Autoencoders.
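As a minimal sketch of the single-token filter described in this hunk: assuming the standard `super_glue`/`wic` loading path and its `word` column, one could check the criterion with the GPT-2-small tokenizer roughly like this (the space-prefixing and split choice are assumptions, not the curators' actual pipeline):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# "gpt2" on the Hub is GPT-2-small, the model named in the card.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
wic = load_dataset("super_glue", "wic", split="train")

def single_token(example):
    # A leading space makes the word tokenize as it would mid-sentence,
    # where GPT-2's BPE merges the space into the token.
    return len(tokenizer.encode(" " + example["word"])) == 1

filtered = wic.filter(single_token)
print(f"kept {len(filtered)} of {len(wic)} instances")
```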
@@ -100,7 +100,9 @@ The dataset supports evaluation metrics such as:
 - **F1 Score**
 - **Specificity**
 
-These metrics are particularly important for evaluating polysemy detection models and Sparse Autoencoders.
+These metrics are particularly important for evaluating polysemy detection models and Sparse Autoencoders.
+
+For implementation details of the evaluation metrics, please refer to the GitHub repository: **[link_to_your_repo]**.
 
 ## Dataset Curators
 This dataset was curated by **Gouki Minegishi** as part of research on polysemantic activation analysis in Sparse Autoencoders and interpretability for large language models.
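To make the two metrics named in this hunk concrete, here is a small illustrative computation from a binary confusion matrix; the toy labels are invented for the example and this is not the evaluation code from the linked repository:

```python
from sklearn.metrics import confusion_matrix, f1_score

# Toy binary labels (1 = positive, 0 = negative) - illustrative only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, ravel() unpacks the 2x2 confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

f1 = f1_score(y_true, y_pred)   # 2*TP / (2*TP + FP + FN)
specificity = tn / (tn + fp)    # true-negative rate, TN / (TN + FP)
print(f"F1 = {f1:.3f}, specificity = {specificity:.3f}")
```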
@@ -109,11 +111,10 @@ This dataset was curated by **Gouki Minegishi** as part of research on polyseman
 If you use the PS-Eval Dataset in your work, please cite:
 
 ```
-@inproceedings{
-title={
-author={
-
-
+@inproceedings{minegishi2024ps-eval,
+title={Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words},
+author={Gouki Minegishi and Hiroki Furuta and Yusuke Iwasawa and Yutaka Matsuo},
+year={2024},
+url={hoge}
 }
 ```
-