Distilabel

Distilabel is the framework for synthetic data and AI feedback for AI engineers who require high-quality outputs, full data ownership, and overall efficiency.

Distilabel pipelines can be built with any number of interconnected steps or tasks. The output of one step or task is fed as input to another, so a series of steps can be chained together to build complex data processing and generation pipelines with LLMs. The input of each step is a batch of data: a list of dictionaries, where each dictionary represents a row of the dataset and the keys are the column names. To move data to and from the Hugging Face Hub, we’ve defined a Distiset class as an abstraction of a datasets.DatasetDict.
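
For illustration only, a batch exchanged between two steps might look like the following (the column names here are just an example):

# A batch is a list of dictionaries; each dictionary is a row of the
# dataset and its keys are the column names.
batch = [
    {"instruction": "What is the capital of France?", "generation": "Paris."},
    {"instruction": "Name a prime number.", "generation": "7"},
]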

What do people build with Distilabel?

The community uses Distilabel to create amazing datasets and models, and we love open-source contributions ourselves too.

Distiset as a special datasets.DatasetDict

A Pipeline in distilabel returns a special type of Hugging Face datasets.DatasetDict which is called Distiset.

The Distiset is a dictionary-like object that contains the different configurations generated by the Pipeline, one per leaf step in the DAG built by the Pipeline, and each configuration corresponds to a different subset of the dataset. This concept is taken from 🤗 datasets, which lets you host different configurations of the same dataset within the same repository, each with its own columns, and the whole Distiset can be seamlessly pushed to the Hugging Face Hub.
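
As a rough sketch, a Distiset can be indexed like a dictionary to reach each configuration, and each configuration behaves like a regular datasets.DatasetDict (the configuration and split names below are illustrative):

# Each key is a configuration, named after a leaf step of the pipeline
# (typically "default" when there is a single leaf step); each value
# behaves like a datasets.DatasetDict.
subset = distiset["default"]
print(subset["train"].column_names)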

Prerequisites

First, log in with your Hugging Face account:

huggingface-cli login
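
Alternatively, you can log in from Python using the huggingface_hub library:

from huggingface_hub import login

login()  # prompts for a token; you can also pass one directly with login(token=...)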

Make sure you have distilabel installed:

pip install -U "distilabel[vllm]"

Load data from the Hub to a Distiset

To showcase an example of loading data from the Hub, we will reproduce part of the Prometheus 2 paper using the PrometheusEval task implemented in distilabel. PrometheusEval covers the direct assessment and pairwise ranking tasks from Prometheus 2, i.e. assessing the quality of a single isolated response for a given instruction, with or without a reference answer, and assessing the quality of one response against another for a given instruction, with or without a reference answer, respectively. We will use this task on a dataset loaded from the Hub, HuggingFaceH4/instruction-dataset, created by the Hugging Face H4 team.

from distilabel.llms import vLLM
from distilabel.pipeline import Pipeline
from distilabel.steps import KeepColumns, LoadDataFromHub
from distilabel.steps.tasks import PrometheusEval

if __name__ == "__main__":
    with Pipeline(name="prometheus") as pipeline:
        load_dataset = LoadDataFromHub(
            name="load_dataset",
            repo_id="HuggingFaceH4/instruction-dataset",
            split="test",
            output_mappings={"prompt": "instruction", "completion": "generation"},
        )

        task = PrometheusEval(
            name="task",
            llm=vLLM(
                model="prometheus-eval/prometheus-7b-v2.0",
                chat_template="[INST] {{ messages[0]['content'] }}\n{{ messages[1]['content'] }}[/INST]",
            ),
            mode="absolute",
            rubric="factual-validity",
            reference=False,
            num_generations=1,
            group_generations=False,
        )

        keep_columns = KeepColumns(
            name="keep_columns",
            columns=["instruction", "generation", "feedback", "result", "model_name"],
        )

        load_dataset >> task >> keep_columns
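
The task above runs in direct assessment mode (mode="absolute"). As a sketch only, the pairwise ranking variant would instead be configured with mode="relative", in which case the task compares two responses per instruction rather than scoring a single one; the exact input columns it expects are not shown here, so check the PrometheusEval reference before using it:

# Hypothetical sketch of the pairwise ranking variant of PrometheusEval.
task_relative = PrometheusEval(
    name="task_relative",
    llm=vLLM(
        model="prometheus-eval/prometheus-7b-v2.0",
        chat_template="[INST] {{ messages[0]['content'] }}\n{{ messages[1]['content'] }}[/INST]",
    ),
    mode="relative",  # pairwise ranking instead of direct assessment
    rubric="factual-validity",
    reference=False,
    num_generations=1,
    group_generations=False,
)

We keep the direct assessment task for the rest of this example.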

Then we need to call pipeline.run with the runtime parameters so that the pipeline can be launched and the data can be stored in the Distiset object.

distiset = pipeline.run(
    parameters={
        task.name: {
            "llm": {
                "generation_kwargs": {
                    "max_new_tokens": 1024,
                    "temperature": 0.7,
                },
            },
        },
    },
)

Push a distilabel Distiset to the Hub

Push the Distiset to a Hugging Face repository, where each of the subsets will correspond to a different configuration:

import os

distiset.push_to_hub(
    "my-org/my-dataset",
    commit_message="Initial commit",
    private=False,
    token=os.getenv("HF_TOKEN"),
)
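
Once pushed, each subset can be loaded back with 🤗 datasets by passing its configuration name (the configuration name below is illustrative; use the one shown on the dataset page):

from datasets import load_dataset

dataset = load_dataset("my-org/my-dataset", "default", split="train")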

