# Dataset Card for Python-Text2Code

This dataset supports the EACL 2024 paper [Text-to-Code Generation with Modality-relative Pre-training](https://aclanthology.org/2024.eacl-long.72).

- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Point of Contact:** [Fenia Christopoulou](mailto:[email protected])

## Dataset Description

The data were crawled from existing, public GitHub repositories before May 2021 and are intended for additional model training on the task of Code Synthesis (i.e. Text-to-Code generation) in Python.

### Details
Duplicate files were removed based on each file's MD5 hash, and files that met the following criteria were kept (a filtering sketch in Python follows the list):
(a) the file size is under 1MB;
(b) the code is Python3-compatible, verified via Abstract Syntax Tree (AST) parsing;
(c) there are fewer than 100 characters per line on average;
(d) there are fewer than 1,000 characters in any single line.
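As a minimal sketch of these four checks (the constant and helper names are ours, and criterion (b) is approximated with Python's built-in `ast` module rather than the exact tooling used for the dataset):

```python
import ast

MAX_FILE_SIZE = 1_000_000  # (a) file size under 1MB
MAX_AVG_LINE = 100         # (c) average characters per line
MAX_LINE = 1_000           # (d) characters in any single line

def keep_file(path: str) -> bool:
    """Return True if a Python file passes the four filtering criteria."""
    with open(path, "rb") as f:
        raw = f.read()
    if len(raw) >= MAX_FILE_SIZE:              # criterion (a)
        return False
    try:
        source = raw.decode("utf-8")
        ast.parse(source)                      # criterion (b): parses as Python 3
    except (UnicodeDecodeError, ValueError, SyntaxError):
        return False
    lines = source.splitlines() or [""]
    if sum(map(len, lines)) / len(lines) >= MAX_AVG_LINE:  # criterion (c)
        return False
    if max(map(len, lines)) >= MAX_LINE:       # criterion (d)
        return False
    return True
```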
 
We applied AST parsing (via [Tree-sitter](https://tree-sitter.github.io/tree-sitter/)) to the remaining Python files to extract valid functions and their corresponding docstrings.
Docstrings were used as "problem descriptions" and were separated from the code. Functions without a docstring were discarded.
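A rough equivalent of this extraction step, substituting the built-in `ast` module for Tree-sitter (the `extract_pairs` helper is our own name):

```python
import ast

def extract_pairs(source: str):
    """Yield (docstring, code) pairs for every documented function in a file."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            if doc is None:                    # functions without a docstring are discarded
                continue
            node.body = node.body[1:] or [ast.Pass()]  # separate the docstring from the code
            yield doc, ast.unparse(node)       # ast.unparse requires Python 3.9+
```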
We replaced new lines, indentation and dedentation with `<NEW_LINE>`, `<INDENT>` and `<DEDENT>`, respectively, to normalise whitespace, which effectively reduced the length of the sequences.
Finally, only instances with a maximum length of 1024 tokens (docstring + code) were kept.
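One way to realise this normalisation is with Python's `tokenize` module, which emits explicit NEWLINE, INDENT and DEDENT tokens; this is only a sketch and the exact scheme in the paper may differ:

```python
import io
import tokenize

def normalise(code: str) -> str:
    """Flatten code onto one line, making structure explicit with special tokens."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type in (tokenize.NEWLINE, tokenize.NL):
            out.append("<NEW_LINE>")
        elif tok.type == tokenize.INDENT:
            out.append("<INDENT>")
        elif tok.type == tokenize.DEDENT:
            out.append("<DEDENT>")
        elif tok.type == tokenize.ENDMARKER:
            continue
        else:
            out.append(tok.string)
    return " ".join(out)
```

For example, `normalise("def f():\n    return 1\n")` gives `def f ( ) : <NEW_LINE> <INDENT> return 1 <NEW_LINE> <DEDENT>`.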
The final dataset contains 23,526,586 text-to-code pairs in Python.

Check the paper for additional details!
## Data Fields

Each instance contains 3 fields:
- `id`: Unique ID of each pair
- `code`: The Python code
- `docstring`: The docstring/problem description associated with this code
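The dataset ships as parquet and can be read with the `datasets` library; a minimal loading sketch, where the repository ID is a placeholder to be replaced with this dataset's actual ID on the Hub:

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the dataset's actual ID on the Hub.
ds = load_dataset("huawei-noah/python-text2code", split="train", streaming=True)

sample = next(iter(ds))
print(sample["id"])         # unique pair ID
print(sample["docstring"])  # the problem description
print(sample["code"])       # the normalised Python code
```

Streaming avoids materialising all 23.5 million pairs locally.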
## Data Splits

There is a single data split in the dataset. We randomly sampled 0.1% of the data […]
## Citation

**BibTeX:**

```bibtex
@inproceedings{christopoulou-etal-2024-text,
    title = "Text-to-Code Generation with Modality-relative Pre-training",
    author = "Christopoulou, Fenia and Zhang, Guchun and Lampouras, Gerasimos",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    year = "2024",
    url = "https://aclanthology.org/2024.eacl-long.72",
    pages = "1194--1208"
}
```