---
dataset_info:
  features:
  - name: file
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 243796785.32967034
    num_examples: 990
  download_size: 43230285
  dataset_size: 243796785.32967034
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- text-generation
pretty_name: OpenAPI Completion Refined
---
# Dataset Card for OpenAPI Completion Refined

A human-refined dataset of OpenAPI definitions based on the APIs.guru OpenAPI [directory](https://github.com/APIs-guru/openapi-directory). The dataset was used to fine-tune Code Llama for OpenAPI completion in the "Optimizing Large Language Models for OpenAPI Code Completion" [paper](https://arxiv.org/abs/2405.15729).

## Dataset Details

### Dataset Description

The dataset was collected from the APIs.guru OpenAPI definitions [directory](https://github.com/APIs-guru/openapi-directory).
The directory contains more than 4,000 definitions in YAML format. Analysis of the repository revealed that about 75%
of the definitions are produced by a handful of major companies such as Amazon, Google, and Microsoft.
To avoid biasing the dataset towards any single producer, the number of definitions from one producer was capped
at 20. Multiple versions of the same API were also excluded from the dataset, as they are likely to contain very similar
definitions.
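The filtering procedure above can be sketched as follows. This is an illustrative reimplementation, not the authors' actual script; the record field names (`producer`, `api`, `version`) are assumptions for the sketch.

```python
from collections import defaultdict

def refine_definitions(definitions, max_per_producer=20):
    """Sketch of the refinement described above: keep one version per API
    and cap how many definitions a single producer contributes."""
    # Keep only the latest version of each (producer, api) pair.
    latest = {}
    for d in definitions:
        key = (d["producer"], d["api"])
        if key not in latest or d["version"] > latest[key]["version"]:
            latest[key] = d

    # Cap the contribution of any single producer at max_per_producer.
    per_producer = defaultdict(list)
    for d in latest.values():
        if len(per_producer[d["producer"]]) < max_per_producer:
            per_producer[d["producer"]].append(d)

    return [d for defs in per_producer.values() for d in defs]
```

With a cap of 20, a producer contributing 30 distinct APIs is reduced to 20 entries, while a producer with two versions of one API contributes a single entry.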


- **Curated by:** [Bohdan Petryshyn](https://huggingface.co/BohdanPetryshyn)
- **Language(s) (NLP):** [OpenAPI](https://spec.openapis.org/oas/latest.html)
- **License:** [MIT](https://opensource.org/license/mit)

### Dataset Sources


- **Repository:** https://github.com/BohdanPetryshyn/code-llama-fim-fine-tuning
- **Paper:** https://arxiv.org/abs/2405.15729
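Since the paper fine-tunes Code Llama for fill-in-the-middle (FIM) completion, a row's `content` field can be turned into an infilling prompt by splitting it at a cursor position. A minimal sketch using Code Llama's standard PSM (prefix-suffix-middle) token format; `make_fim_prompt` is a hypothetical helper, and the repository may format prompts differently:

```python
def make_fim_prompt(content: str, cursor: int) -> str:
    """Split an OpenAPI definition at `cursor` and format it as a
    Code Llama PSM infilling prompt. The model is expected to generate
    the missing middle after the <MID> token."""
    prefix, suffix = content[:cursor], content[cursor:]
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"
```

For example, splitting a definition right after a `description:` key asks the model to complete the description value given the rest of the document as the suffix.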

## Citation

If you find the dataset or the fine-tuning code helpful, please cite the original paper:

**BibTeX:**

```
@misc{petryshyn2024optimizing,
      title={Optimizing Large Language Models for OpenAPI Code Completion}, 
      author={Bohdan Petryshyn and Mantas Lukoševičius},
      year={2024},
      eprint={2405.15729},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```

**APA:**

Petryshyn, B., & Lukoševičius, M. (2024). Optimizing Large Language Models for OpenAPI Code Completion. arXiv preprint arXiv:2405.15729.