Bravansky committed
Commit 7586836 · 1 Parent(s): 3cac2f2

Dataset Featurization Added

README.md ADDED
@@ -0,0 +1,100 @@
+ pretty_name: Dataset Featurization
+ language:
+ - en
+ license:
+ - mit
+ task_categories:
+ - feature-extraction
+ task_ids:
+ - language-modeling
+ configs:
+ - config_name: nyt
+   data_files:
+   - split: train
+     path: data/nyt/samples.csv
+ - config_name: nyt-evaluation-0
+   data_files:
+   - split: train
+     path: data/nyt/evaluation/evaluation_df_group_0.csv
+ - config_name: nyt-evaluation-1
+   data_files:
+   - split: train
+     path: data/nyt/evaluation/evaluation_df_group_1.csv
+ - config_name: nyt-evaluation-2
+   data_files:
+   - split: train
+     path: data/nyt/evaluation/evaluation_df_group_2.csv
+ - config_name: amazon
+   data_files:
+   - split: train
+     path: data/amazon/samples.csv
+ - config_name: amazon-evaluation-0
+   data_files:
+   - split: train
+     path: data/amazon/evaluation/evaluation_df_group_0.csv
+ - config_name: amazon-evaluation-1
+   data_files:
+   - split: train
+     path: data/amazon/evaluation/evaluation_df_group_1.csv
+ - config_name: amazon-evaluation-2
+   data_files:
+   - split: train
+     path: data/amazon/evaluation/evaluation_df_group_2.csv
+ - config_name: dbpedia
+   data_files:
+   - split: train
+     path: data/dbpedia/samples.csv
+ - config_name: dbpedia-evaluation-0
+   data_files:
+   - split: train
+     path: data/dbpedia/evaluation/evaluation_df_group_0.csv
+ - config_name: dbpedia-evaluation-1
+   data_files:
+   - split: train
+     path: data/dbpedia/evaluation/evaluation_df_group_1.csv
+ - config_name: dbpedia-evaluation-2
+   data_files:
+   - split: train
+     path: data/dbpedia/evaluation/evaluation_df_group_2.csv
+
+ # Dataset Featurization: Experiments
+
+ This repository contains the datasets used to evaluate **Dataset Featurization** against the prompting baseline. For the datasets used in the case studies, please refer to [Compositional Preference Modeling](https://huggingface.co/datasets/Bravansky/compositional-preference-modeling) and [Compact Jailbreaks](https://huggingface.co/datasets/Bravansky/compact-jailbreaks).
+
+ The evaluation focuses on three datasets: the [New York Times Annotated Corpus (NYT)](https://catalog.ldc.upenn.edu/docs/LDC2008T19/new_york_times_annotated_corpus.pdf), [Amazon Reviews (Amazon)](https://amazon-reviews-2023.github.io/), and [DBPEDIA](https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes). From each dataset, we sample 15 different categories and construct three separate subsets, each containing 5 categories with 1000 samples per category. We evaluate the featurization method's performance on each subset.
+
+ ### NYT
+
+ From the NYT corpus, we use the manually reviewed tags from the NYT taxonomy classifier, focusing on articles under the "Features" and "News" categories, to construct a dataset of texts with their assigned categories. The input dataset and the proposed features with their assignments from the evaluation stage can be accessed as follows:
+
+ ```python
+ from datasets import load_dataset
+ text_df = load_dataset("Bravansky/compositional-preference-modeling", "nyt", split="train").to_pandas()
+ evaluation_df_0 = load_dataset("Bravansky/compositional-preference-modeling", "nyt-evaluation-0", split="train").to_pandas()
+ evaluation_df_1 = load_dataset("Bravansky/compositional-preference-modeling", "nyt-evaluation-1", split="train").to_pandas()
+ evaluation_df_2 = load_dataset("Bravansky/compositional-preference-modeling", "nyt-evaluation-2", split="train").to_pandas()
+ ```
+
+ ### Amazon
+
+ Using a dataset of half a million customer reviews, we focus on identifying high-level item categories (e.g., Books, Fashion, Beauty), excluding reviews labeled "Unknown". The input datasets and the proposed features with their assignments from the evaluation stage can be accessed as follows:
+
+ ```python
+ from datasets import load_dataset
+ text_df = load_dataset("Bravansky/compositional-preference-modeling", "amazon", split="train").to_pandas()
+ evaluation_df_0 = load_dataset("Bravansky/compositional-preference-modeling", "amazon-evaluation-0", split="train").to_pandas()
+ evaluation_df_1 = load_dataset("Bravansky/compositional-preference-modeling", "amazon-evaluation-1", split="train").to_pandas()
+ evaluation_df_2 = load_dataset("Bravansky/compositional-preference-modeling", "amazon-evaluation-2", split="train").to_pandas()
+ ```
+
+ ### DBPEDIA
+
+ Using the pre-processed DBPEDIA dataset, we focus on reconstructing the categories labeled as level `l2`:
+
+ ```python
+ from datasets import load_dataset
+ text_df = load_dataset("Bravansky/compositional-preference-modeling", "dbpedia", split="train").to_pandas()
+ evaluation_df_0 = load_dataset("Bravansky/compositional-preference-modeling", "dbpedia-evaluation-0", split="train").to_pandas()
+ evaluation_df_1 = load_dataset("Bravansky/compositional-preference-modeling", "dbpedia-evaluation-1", split="train").to_pandas()
+ evaluation_df_2 = load_dataset("Bravansky/compositional-preference-modeling", "dbpedia-evaluation-2", split="train").to_pandas()
+ ```
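The snippets in the README above return plain pandas DataFrames. As a complementary, minimal sketch of how one might sanity-check the subset structure described there (5 categories with 1000 samples per category), the following reuses the repository and config identifiers from the README code and assumes a label column named `category`; that column name is a hypothetical placeholder, so the actual columns should be inspected first.

```python
from datasets import load_dataset

# Minimal sketch (not part of the original README): load one input subset and
# one evaluation subset, then peek at their structure.
text_df = load_dataset("Bravansky/compositional-preference-modeling", "nyt", split="train").to_pandas()
evaluation_df_0 = load_dataset("Bravansky/compositional-preference-modeling", "nyt-evaluation-0", split="train").to_pandas()

# Inspect the available columns before relying on any particular name.
print(text_df.columns.tolist())
print(evaluation_df_0.columns.tolist())

# If a label column named `category` exists (hypothetical assumption), report
# which categories are present and how many samples each one has.
if "category" in text_df.columns:
    print(text_df["category"].value_counts())
```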
data/amazon/evaluation/evaluation_df_group_0.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/amazon/evaluation/evaluation_df_group_1.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/amazon/evaluation/evaluation_df_group_2.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/amazon/samples.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/dbpedia/evaluation/evaluation_df_group_0.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/dbpedia/evaluation/evaluation_df_group_1.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/dbpedia/evaluation/evaluation_df_group_2.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/dbpedia/samples.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/nyt/evaluation/evaluation_df_group_0.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/nyt/evaluation/evaluation_df_group_1.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/nyt/evaluation/evaluation_df_group_2.csv ADDED
The diff for this file is too large to render. See raw diff
 
data/nyt/samples.csv ADDED
The diff for this file is too large to render. See raw diff