---
license: apache-2.0
size_categories:
- 10M<n<100M
task_categories:
- text-generation
dataset_info:
  features:
  - name: code
    dtype: string
  - name: docstring
    dtype: string
  - name: _id
    dtype: string
  splits:
  - name: train
    num_bytes: 18759198502
    num_examples: 23526586
  download_size: 8549378238
  dataset_size: 18759198502
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for Python-Text2Code

This dataset supports the EACL 2024 paper [Text-to-Code Generation with Modality-relative Pre-training](https://aclanthology.org/2024.eacl-long.72).

- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Point of Contact:** [Fenia Christopoulou](mailto:[email protected]), [Gerasimos Lampouras](mailto:[email protected])

## Dataset Description

The data were crawled from existing public GitHub repositories before May 2021 and are intended for 
additional model training on the task of code synthesis (i.e. text-to-code generation) in Python.

### Details
Files that met the following criteria were kept: 
(a) the file size is under 1MB; 
(b) the code is Python3 compatible, using Abstract Syntactic Tree (AST) parsing; 
(c) there are fewer than 100 characters per line on average; 
(d) and there are fewer than 1,000 characters in any single line. 
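
As an illustration only, criteria (a)-(d) can be sketched with the standard library; the actual pipeline may have implemented them differently (for instance via Tree-sitter rather than Python's `ast`):

```python
import ast

MAX_FILE_BYTES = 1_000_000  # (a) file size under 1 MB
MAX_AVG_LINE_LEN = 100      # (c) fewer than 100 characters per line on average
MAX_LINE_LEN = 1_000        # (d) fewer than 1,000 characters in any single line

def keep_file(source: str) -> bool:
    """Return True if a Python source file passes filters (a)-(d)."""
    if len(source.encode("utf-8")) >= MAX_FILE_BYTES:          # (a)
        return False
    try:
        ast.parse(source)                                      # (b) Python3-compatible
    except (SyntaxError, ValueError):
        return False
    lines = source.splitlines()
    if not lines:
        return False
    if sum(map(len, lines)) / len(lines) >= MAX_AVG_LINE_LEN:  # (c)
        return False
    if max(map(len, lines)) >= MAX_LINE_LEN:                   # (d)
        return False
    return True
```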

We applied AST parsing (via [Tree-sitter](https://tree-sitter.github.io/tree-sitter/)) on the remaining Python files to extract valid functions and 
their corresponding docstrings. 
Docstrings were used as "problem descriptions" and were separated from the code; functions without a docstring were discarded.
We replaced newlines, indentation and dedentation with `<NEW_LINE>`, `<INDENT>` and `<DEDENT>` tokens, respectively, to normalise whitespace, which effectively reduced
the length of the sequences.
Finally, only instances with a maximum length of 1024 tokens (docstring+code) were kept. 
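
For a rough sense of this normalisation, note that Python's standard `tokenize` module already emits NEWLINE, INDENT and DEDENT tokens, so a comparable transformation could look like the sketch below; the released pipeline's exact rules may differ:

```python
import io
import tokenize

def normalise(code: str) -> str:
    """Flatten Python code to one line with explicit structure tokens."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type in (tokenize.NEWLINE, tokenize.NL):
            out.append("<NEW_LINE>")
        elif tok.type == tokenize.INDENT:
            out.append("<INDENT>")
        elif tok.type == tokenize.DEDENT:
            out.append("<DEDENT>")
        elif tok.type == tokenize.ENDMARKER:
            break
        else:
            out.append(tok.string)
    return " ".join(out)
```

For example, `normalise("def f(x):\n    return x\n")` yields `def f ( x ) : <NEW_LINE> <INDENT> return x <NEW_LINE> <DEDENT>`.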

The final dataset contains 23,526,586 text-to-code pairs in Python.

Check the paper for additional details!


## Data Fields

Each instance contains 3 fields:
- `_id`: Unique ID of each pair
- `code`: The Python code
- `docstring`: The docstring/problem description associated with this code
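
For illustration, a row might look like the following (the field values here are made up, not taken from the dataset), and mapping it to a (source, target) pair for text-to-code training is direct:

```python
# Hypothetical row; real values come from the dataset's parquet files.
row = {
    "_id": "0000001",
    "docstring": "Return the sum of two integers.",
    "code": "def add ( a , b ) : <NEW_LINE> <INDENT> return a + b <NEW_LINE> <DEDENT>",
}

def to_pair(row: dict) -> tuple[str, str]:
    """Map a row to a (source, target) pair for text-to-code generation."""
    return row["docstring"], row["code"]

source, target = to_pair(row)
```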


## Data Splits

The dataset is released as a single train split. We randomly sampled 0.1% of it to serve as a validation set.
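
A 0.1% hold-out of the 23,526,586 examples is roughly 23,500 rows. A seeded sketch of such a sampling is shown below; the seed and sampling method are assumptions, not the authors' actual procedure:

```python
import random

def validation_ids(num_rows: int, frac: float = 0.001, seed: int = 13) -> set[int]:
    """Pick a reproducible ~0.1% subset of row indices to hold out."""
    rng = random.Random(seed)  # assumed seed, for reproducibility only
    k = int(num_rows * frac)
    return set(rng.sample(range(num_rows), k))
```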


## Citation

**BibTeX:**

```bibtex
@inproceedings{christopoulou-etal-2024-text,
    title = "Text-to-Code Generation with Modality-relative Pre-training",
    author = "Christopoulou, Fenia  and
      Zhang, Guchun  and
      Lampouras, Gerasimos",
    editor = "Graham, Yvette  and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.72",
    pages = "1194--1208"
}
```