---
license: cc-by-4.0
---

# ComPile: A Large IR Dataset from Production Sources

## About

Utilizing the LLVM compiler infrastructure shared by a number of languages, ComPile is a large dataset of
LLVM IR. The dataset is generated from programming languages built on the shared LLVM infrastructure, including Rust,
Swift, Julia, and C/C++, by hooking into LLVM code generation either through the language's package manager or the
compiler directly, extracting intermediate representations from production-grade programs using our
[dataset collection utility for the LLVM compilation infrastructure](https://doi.org/10.5281/zenodo.10155761).

For an in-depth look at the statistical properties of the dataset, please have a look at our [arXiv preprint](https://arxiv.org/abs/2309.15432).

## Usage

Using ComPile is relatively simple with HuggingFace's `datasets` library. To load the dataset, you can simply
run the following in a Python interpreter or within a Python script:

```python
from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```

While this will just work, the download will take quite a while, as `datasets` will by default download
the entire 550GB+ dataset and cache it locally. Note that the data will be placed in the directory
specified by the environment variable `HF_DATASETS_CACHE`, which defaults to `~/.cache/huggingface`.
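
For example, if you want the cache to live somewhere other than the default, one way to do this is to point `HF_DATASETS_CACHE` at another directory before `datasets` is imported (the path below is just a placeholder):

```python
import os

# Placeholder path; set before importing `datasets` so the new cache location takes effect.
os.environ['HF_DATASETS_CACHE'] = '/data/hf_cache'

from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```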

You can also load the dataset in a streaming format, where no data is saved locally:

```python
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
```

This makes experimentation much easier, as no large upfront time investment is required, but it is
significantly slower than loading the dataset from local disk. For experimentation that
requires more performance but not the whole dataset, you can also specify a portion
of the dataset to download. For example, the following code will only download the first 10%
of the dataset:

```python
ds = load_dataset('llvm-ml/ComPile', split='train[:10%]')
```

Once the dataset has been loaded, the individual module files can be accessed by iterating through
the dataset or accessing specific indices:

```python
# We can iterate through the dataset
next(iter(ds))
# We can also access modules at specific indices
ds[0]
```

Filtering and map operations can also be efficiently applied using primitives available within the
HuggingFace `datasets` library. More documentation is available [here](https://huggingface.co/docs/datasets/index).
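
For instance, here is a minimal sketch of filtering and mapping over the dataset (the exact values stored in the `language` column are an assumption here; inspect a few rows first):

```python
# Keep only modules compiled from Rust (assumes the `language` column uses
# lowercase names; adjust after inspecting the actual values).
rust_modules = ds.filter(lambda row: row['language'] == 'rust')

# Add a column with the size of each module's bitcode in bytes.
with_sizes = ds.map(lambda row: {'bitcode_size': len(row['content'])})
```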

## Dataset Format

Each row in the dataset consists of an individual LLVM IR module along with some metadata. There are
six columns associated with each row:

1. `content` - This column contains the raw bitcode that composes the module. This can be written to a `.bc`
file and manipulated using the standard LLVM utilities, or passed directly through stdin if using something
like Python's `subprocess` (see the sketch after this list).
2. `license_expression` - This column contains the SPDX expression describing the license of the project that the
module came from.
3. `license_source` - This column describes the way the `license_expression` was determined. This might indicate
an individual package ecosystem (e.g., `spack`), license detection (e.g., `go_license_detector`), or
manual curation (`manual`).
4. `license_files` - This column contains an array of license files. These file names map to licenses included in
`/licenses/licenses-0.parquet`.
5. `package_source` - This column contains information on the package that the module was sourced from. This is
typically a link to a tar archive or git repository from which the project was built, but might also contain a
mapping to a specific package ecosystem that provides the source, such as Spack.
6. `language` - This column indicates the source language that the module was compiled from.
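
As an example of working with the `content` column, here is a minimal sketch that writes a module to disk and disassembles it, assuming the bitcode is returned as raw bytes and that the LLVM tools (here `llvm-dis`) are on your `PATH`:

```python
import subprocess

row = ds[0]

# Write the raw bitcode to a .bc file for use with the standard LLVM utilities.
with open('module.bc', 'wb') as f:
    f.write(row['content'])

# Disassemble the module to textual IR.
subprocess.run(['llvm-dis', 'module.bc', '-o', 'module.ll'], check=True)

# Alternatively, pass the bitcode directly over stdin and capture the textual IR.
result = subprocess.run(['llvm-dis', '-', '-o', '-'], input=row['content'],
                        capture_output=True, check=True)
print(result.stdout.decode()[:200])
```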

## Licensing

The individual modules within the dataset are subject to the licenses of the projects that they come from. License
information is available in each row, including the SPDX license expression, the license files, and also a link to
the package source where license information can be further validated.
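
A quick sketch of how the per-module license metadata might be inspected before using a module (the printed values will vary by row):

```python
row = ds[0]
print(row['license_expression'])  # SPDX expression for the originating project
print(row['license_source'])      # how the license expression was determined
print(row['package_source'])      # where the project's source was obtained
```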

The curation of these modules is licensed under a CC-BY-4.0 license.