---
license: cc-by-4.0
dataset_info:
  features:
  - name: original_id
    dtype: int32
  - name: edit_goal
    dtype: string
  - name: edit_type
    dtype: string
  - name: text
    dtype: string
  - name: food
    dtype: string
  - name: ambiance
    dtype: string
  - name: service
    dtype: string
  - name: noise
    dtype: string
  - name: counterfactual
    dtype: bool
  - name: rating
    dtype: int64
  splits:
  - name: validation
    num_bytes: 306529
    num_examples: 1673
  - name: test
    num_bytes: 309751
    num_examples: 1689
  - name: train
    num_bytes: 2282439
    num_examples: 11728
  download_size: 628886
  dataset_size: 2898719
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "CEBaB"

This is a lightly cleaned and simplified version of the CEBaB counterfactual restaurant review dataset from [this paper](https://arxiv.org/abs/2205.14140).
The most important difference from the original dataset is that the `rating` column contains the _median_ rating provided by the Mechanical Turk annotators,
rather than the majority rating. The two agree whenever a majority rating exists, but when there is no majority (e.g. because there were two 1s,
two 2s, and one 3), the original dataset used a `"no majority"` placeholder, whereas the median yields an aggregate rating for every review.

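As a quick illustration of the aggregation described above, using Python's standard-library `statistics` module on the no-majority example:

```python
from statistics import median

# A rating multiset with no majority: two 1s, two 2s, and one 3
ratings = [1, 1, 2, 2, 3]

# With an odd number of ratings, the median is always a single
# observed rating, so every review gets a well-defined label
print(median(ratings))  # → 2
```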
The exact code used to process the original dataset is provided below:
```py
from ast import literal_eval
from datasets import DatasetDict, Value, load_dataset


def compute_median(x: str):
    """Compute the median rating given a multiset of ratings."""
    # Decode the dictionary from string format
    dist = literal_eval(x)

    # Should be a dictionary whose keys are string-encoded integer ratings
    # and whose values are the number of times that the rating was observed
    assert isinstance(dist, dict)
    assert sum(dist.values()) % 2 == 1, "Number of ratings should be odd"

    ratings = []
    for rating, count in dist.items():
        ratings.extend([int(rating)] * count)

    ratings.sort()
    return ratings[len(ratings) // 2]


cebab = load_dataset('CEBaB/CEBaB')
assert isinstance(cebab, DatasetDict)

# Remove redundant splits
cebab['train'] = cebab.pop('train_inclusive')
del cebab['train_exclusive']
del cebab['train_observational']

cebab = cebab.cast_column(
    'original_id', Value('int32')
).map(
    lambda x: {
        # New column with inverted label for counterfactuals
        'counterfactual': not x['is_original'],
        # Reduce the rating multiset into a single median rating
        'rating': compute_median(x['review_label_distribution'])
    }
).map(
    # Replace '', 'no majority', and 'None' with Apache Arrow nulls
    lambda x: {
        k: v if v not in ('', 'no majority', 'None') else None
        for k, v in x.items()
    }
)

# Sanity check that all the splits have the same columns
cols = next(iter(cebab.values())).column_names
assert all(split.column_names == cols for split in cebab.values())

# Clean up the names a bit
cebab = cebab.rename_columns({
    col: col.removesuffix('_majority').removesuffix('_aspect')
    for col in cols if col.endswith('_majority')
}).rename_column(
    'description', 'text'
)

# Drop the unimportant columns
cebab = cebab.remove_columns([
    col for col in cols if col.endswith('_distribution') or col.endswith('_workers')
] + [
    'edit_id', 'edit_worker', 'id', 'is_original', 'opentable_metadata', 'review'
]).sort([
    # Make sure counterfactual reviews come immediately after each original review
    'original_id', 'counterfactual'
])
```
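The final `sort` call relies on booleans ordering `False` before `True`, so each original review comes immediately ahead of its counterfactual edits. A minimal plain-Python sketch of the resulting ordering (toy data, not the real rows):

```python
# Toy rows mimicking the two sort keys used above
rows = [
    {'original_id': 2, 'counterfactual': True},
    {'original_id': 1, 'counterfactual': True},
    {'original_id': 1, 'counterfactual': False},
    {'original_id': 2, 'counterfactual': False},
]

# False < True, so within each original_id group the original
# review (counterfactual == False) sorts first
rows.sort(key=lambda r: (r['original_id'], r['counterfactual']))

order = [(r['original_id'], r['counterfactual']) for r in rows]
print(order)  # → [(1, False), (1, True), (2, False), (2, True)]
```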