---
license: apache-2.0
task_categories:
- text-generation
size_categories:
- 10M<n<100M
---

# Dataset Card for Python-Text2Code

- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Paper:** https://aclanthology.org/2024.eacl-long.72.pdf
- **Point of Contact:** [Fenia Christopoulou](mailto:[email protected])

## Dataset Description

The data were crawled from existing public repositories on GitHub before May 2021.
Duplicate files were removed based on the rowKey of each file's MD5, and only files that met all of the following criteria were kept:
(a) the file size is under 1MB;
(b) the code is Python3-compatible, verified via Abstract Syntax Tree (AST) parsing;
(c) there are fewer than 100 characters per line on average;
(d) there are fewer than 1,000 characters in any single line.

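The filtering steps above can be sketched as follows. This is a minimal illustration, not the released pipeline: the function name, the in-memory dedup set, and the exact threshold comparisons are our assumptions mirroring the text.

```python
import ast
import hashlib

seen_md5 = set()  # duplicate removal keyed on each file's MD5


def keep_file(source: str) -> bool:
    """Return True if a Python file passes MD5 dedup and criteria (a)-(d)."""
    data = source.encode("utf-8")
    digest = hashlib.md5(data).hexdigest()
    if digest in seen_md5:  # duplicate file, drop it
        return False
    seen_md5.add(digest)
    if len(data) >= 1_000_000:                    # (a) file size under 1MB
        return False
    try:
        ast.parse(source)                         # (b) Python3-compatible
    except SyntaxError:
        return False
    lines = source.splitlines() or [""]
    if sum(map(len, lines)) / len(lines) >= 100:  # (c) avg chars per line
        return False
    if max(map(len, lines)) >= 1_000:             # (d) chars in any single line
        return False
    return True
```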
We then applied AST parsing (via [Tree-sitter](https://tree-sitter.github.io/tree-sitter/)) to the remaining Python files to extract valid functions and
their corresponding docstrings.
Docstrings were used as "problem descriptions" and were separated from the code; functions without a docstring were discarded.
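The released pipeline used Tree-sitter for this step; as an illustrative stand-in, Python's own `ast` module can sketch the same extraction (the function name `extract_pairs` is ours):

```python
import ast


def extract_pairs(source: str):
    """Yield (docstring, code) pairs for functions that carry a docstring."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            if doc is None:
                continue  # functions without a docstring were discarded
            # Separate the docstring statement from the code side.
            node.body = node.body[1:] or [ast.Pass()]
            yield doc, ast.unparse(node)
```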
We then replaced new lines, indentation, and dedentation with `<NEW_LINE>`, `<INDENT>`, and `<DEDENT>`, respectively, to normalise whitespace, which effectively reduced the length
of the sequences.
Finally, we kept instances with a maximum length of 1024 tokens (docstring + code).

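The normalisation step can be sketched with Python's standard `tokenize` module, which already emits `NEWLINE`/`NL`, `INDENT`, and `DEDENT` tokens. This is an illustration under our assumptions (space-joined output, token names as above), not the exact released procedure:

```python
import io
import tokenize


def normalise(code: str) -> str:
    """Replace structural whitespace with explicit placeholder tokens."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type in (tokenize.NEWLINE, tokenize.NL):
            out.append("<NEW_LINE>")
        elif tok.type == tokenize.INDENT:
            out.append("<INDENT>")
        elif tok.type == tokenize.DEDENT:
            out.append("<DEDENT>")
        elif tok.type != tokenize.ENDMARKER:
            out.append(tok.string)
    return " ".join(out)
```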
The final dataset contains 23,526,586 text-to-code pairs in Python.

## Data Fields

Each instance contains 3 fields:
- `id`: Unique ID of each pair
- `code`: The Python code
- `docstring`: The docstring/problem description

## Data Splits

The dataset is released as a single split; we randomly sampled 0.1% of the instances to serve as a validation set.

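A 0.1% validation sample can be reproduced in spirit with a seeded shuffle. The function name, seed, and exact procedure below are assumptions, not the released sampling code:

```python
import random


def split_validation(ids, frac=0.001, seed=0):
    """Randomly hold out `frac` of the instances as a validation set."""
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    n_valid = max(1, int(len(ids) * frac))
    return ids[n_valid:], ids[:n_valid]  # (train, validation)
```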

## Citation

**BibTeX:**

```bibtex
@inproceedings{christopoulou-etal-2024-text,
    title = "Text-to-Code Generation with Modality-relative Pre-training",
    author = "Christopoulou, Fenia and
        Zhang, Guchun and
        Lampouras, Gerasimos",
    editor = "Graham, Yvette and
        Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.72",
    pages = "1194--1208"
}
```

## Dataset Card Authors

Fenia Christopoulou