Update model card

README.md CHANGED
@@ -16,4 +16,50 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+license: mit
+task_categories:
+- text-generation
+pretty_name: OpenAPI Completion Refined
 ---
+# Dataset Card for OpenAPI Completion Refined
+
+A human-refined dataset of OpenAPI definitions based on the APIs.guru OpenAPI [directory](https://github.com/APIs-guru/openapi-directory). The dataset was used to fine-tune Code Llama for OpenAPI completion in the ["Optimizing Large Language Models for OpenAPI Code Completion" paper](https://arxiv.org/abs/2405.15729).
+
+## Dataset Details
+
+### Dataset Description
+
+The dataset was collected from the APIs.guru OpenAPI definitions [directory](https://github.com/APIs-guru/openapi-directory).
+The directory contains more than 4,000 definitions in YAML format. Analysis of the repository revealed that about 75%
+of the definitions in the directory are produced by a handful of major companies like Amazon, Google, and Microsoft.
+To avoid biasing the dataset towards a specific producer, the maximum number of definitions from a single producer was
+limited to 20. Multiple versions of the same API were also excluded from the dataset, as they are likely to contain very
+similar definitions.
+
+- **Curated by:** [Bohdan Petryshyn](https://huggingface.co/BohdanPetryshyn)
+- **Language(s) (NLP):** [OpenAPI](https://spec.openapis.org/oas/latest.html)
+- **License:** [MIT](https://opensource.org/license/mit)
+
+### Dataset Sources
+
+- **Repository:** https://github.com/BohdanPetryshyn/code-llama-fim-fine-tuning
+- **Paper:** https://arxiv.org/abs/2405.15729
+
+## Citation
+
+**BibTeX:**
+
+```
+@misc{petryshyn2024optimizing,
+      title={Optimizing Large Language Models for OpenAPI Code Completion},
+      author={Bohdan Petryshyn and Mantas Lukoševičius},
+      year={2024},
+      eprint={2405.15729},
+      archivePrefix={arXiv},
+      primaryClass={cs.SE}
+}
+```
+
+**APA:**
+
+Petryshyn, B., & Lukoševičius, M. (2024). Optimizing Large Language Models for OpenAPI Code Completion. arXiv preprint arXiv:2405.15729.
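For context, the `configs` block added above declares a single `train` split backed by `data/train-*` files. A minimal sketch of loading it with the Hugging Face `datasets` library; the repository ID `BohdanPetryshyn/openapi-completion-refined` is an assumption inferred from the curator and `pretty_name` fields, not stated in the diff:

```python
from datasets import load_dataset

# Repo ID is an assumption inferred from the curator and pretty_name fields.
ds = load_dataset("BohdanPetryshyn/openapi-completion-refined", split="train")

print(ds)     # column layout and row count
print(ds[0])  # first refined OpenAPI definition
```

The producer cap described in the card could be implemented roughly as below. The `APIs/<producer>/<api>/<version>/openapi.yaml` layout and the lexicographic "latest version" pick are assumptions about an APIs.guru checkout (which also stores Swagger 2.0 files as `swagger.yaml`, ignored here for brevity); this is an illustrative sketch, not the authors' actual selection script:

```python
from collections import defaultdict
from pathlib import Path

MAX_PER_PRODUCER = 20  # cap stated in the dataset description

def select_definitions(root: Path) -> list[Path]:
    """Keep one version per API and at most 20 APIs per producer (sketch)."""
    by_api: dict[Path, list[Path]] = defaultdict(list)
    # Assumed layout: APIs/<producer>/<api>/<version>/openapi.yaml
    for path in root.glob("APIs/**/openapi.yaml"):
        by_api[path.parent.parent].append(path)

    by_producer: dict[str, list[Path]] = defaultdict(list)
    for api_dir, versions in sorted(by_api.items()):
        producer = api_dir.relative_to(root / "APIs").parts[0]
        # Crude "latest version" pick: lexicographically last version directory.
        by_producer[producer].append(sorted(versions)[-1])

    selected: list[Path] = []
    for _, files in sorted(by_producer.items()):
        selected.extend(files[:MAX_PER_PRODUCER])
    return selected
```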