---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
  - split: gen
    path: data/gen-*
  - split: train_100
    path: data/train_100-*
dataset_info:
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 4363969
    num_examples: 24155
  - name: dev
    num_bytes: 549121
    num_examples: 3000
  - name: test
    num_bytes: 548111
    num_examples: 3000
  - name: gen
    num_bytes: 5721102
    num_examples: 21000
  - name: train_100
    num_bytes: 5592847
    num_examples: 39500
  download_size: 5220150
  dataset_size: 16775150
---
# Dataset Card for "COGS"

This repository contains the dataset from the paper [COGS: A Compositional Generalization Challenge Based on Semantic Interpretation](https://aclanthology.org/2020.emnlp-main.731.pdf).

The dataset has five splits: **train**, **dev**, **test**, **gen**, and **train_100**, where **gen** refers to the generalization split and **train_100** refers to the training variant with 100 exposure examples per primitive.

You can load it with the `datasets` library:
```python
import datasets

train_data = datasets.load_dataset("Punchwe/COGS", split="train")
train100_data = datasets.load_dataset("Punchwe/COGS", split="train_100")
gen_data = datasets.load_dataset("Punchwe/COGS", split="gen")
```
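
Each example has three string fields, `input`, `output`, and `domain`, as listed in the dataset features above. Below is a minimal sketch of inspecting a loaded example; the comments describe the fields' roles as assumed from the paper's sentence-to-logical-form setup:
```python
import datasets

test_data = datasets.load_dataset("Punchwe/COGS", split="test")

# Each example pairs an English sentence with its semantic
# representation, plus a domain tag; all three fields are strings.
example = test_data[0]
print(example["input"])   # source sentence
print(example["output"])  # target logical form
print(example["domain"])  # domain label
```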