---
dataset_info:
- config_name: default
  features:
  - name: premise
    dtype: large_string
  - name: hypothesis
    dtype: large_string
  - name: template_num
    dtype: int64
  - name: time_format
    dtype: large_string
  - name: time_span
    dtype: large_string
  - name: category
    dtype: large_string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 2424590
    num_examples: 9950
  - name: test
    num_bytes: 88516
    num_examples: 348
  download_size: 594545
  dataset_size: 2513106
- config_name: template
  features:
  - name: id
    dtype: int64
  - name: premise
    dtype: large_string
  - name: hypothesis
    dtype: large_string
  - name: entailment
    dtype: large_string
  - name: contradiction
    dtype: large_string
  - name: ng time unit
    dtype: large_string
  - name: test time format
    dtype: large_string
  - name: category
    dtype: large_string
  splits:
  - name: train
    num_bytes: 26196
    num_examples: 79
  download_size: 9709
  dataset_size: 26196
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
- config_name: template
  data_files:
  - split: train
    path: template/train-*
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- ja
tags:
- nli
- evaluation
- benchmark
pretty_name: >-
  Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating
  Generalization Capacity of Language Models
---

# Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models

Jamp ([tomo-vv/temporalNLI_dataset](https://github.com/tomo-vv/temporalNLI_dataset)) is a Japanese temporal inference benchmark.
The dataset consists of templates, test data, and training data.

Subsets whose names contain template, time format, or time span are split based on the tense fragment, time format,
or time span, respectively.
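
A minimal loading sketch with the Hugging Face `datasets` library, covering both the `default` config (premise/hypothesis pairs with entailment labels) and the `template` config (the 79 source templates). The repository id below is a placeholder; substitute this dataset's actual Hub id.

```python
from datasets import load_dataset

# NOTE: "<hub-user>/jamp" is a placeholder, not the real repository id.
REPO_ID = "<hub-user>/jamp"

# "default" config: NLI pairs with premise / hypothesis / label (train + test splits).
jamp = load_dataset(REPO_ID, name="default")
example = jamp["test"][0]
label_names = jamp["test"].features["label"].names  # ['entailment', 'neutral', 'contradiction']
print(example["premise"], example["hypothesis"], label_names[example["label"]])

# "template" config: the 79 templates used to generate the examples (train split only).
templates = load_dataset(REPO_ID, name="template")
print(templates["train"][0]["premise"])
```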

## Dataset Details

### Dataset Description

- **Created by:** tomo-vv ([email protected])
- **Language(s) (NLP):** Japanese
- **License:** CC BY-SA 4.0

### Dataset Sources

- **Repository:** [tomo-vv/temporalNLI_dataset](https://github.com/tomo-vv/temporalNLI_dataset)
- **Paper:** [Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models](https://aclanthology.org/2023.acl-srw.8) (Sugimoto et al., ACL 2023)

## Citation

**BibTeX:**

```
@inproceedings{sugimoto-etal-2023-jamp,
    title = "Jamp: Controlled {J}apanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models",
    author = "Sugimoto, Tomoki  and
      Onoe, Yasumasa  and
      Yanaka, Hitomi",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-srw.8",
    pages = "57--68",
}
```

**APA:**

Sugimoto, T., Onoe, Y., & Yanaka, H. (2023). Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models. *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)*, 57–68. https://aclanthology.org/2023.acl-srw.8