Quickstart

In this quickstart, you’ll learn how to use the Datasets Server’s REST API to:

- Check whether a dataset on the Hub is valid.
- List the configurations and splits of a dataset.
- Preview the first 100 rows of a dataset.
- Download slices of rows of a dataset.
- Access a dataset’s Parquet files.

Each feature is served through an endpoint, summarized in the table below:

| Endpoint | Method | Description | Query parameters |
|---|---|---|---|
| /valid | GET | Get the list of datasets hosted on the Hub and supported by Datasets Server. | none |
| /is-valid | GET | Check whether a specific dataset is valid. | dataset: name of the dataset |
| /splits | GET | Get the list of configurations and splits of a dataset. | dataset: name of the dataset |
| /first-rows | GET | Get the first rows of a dataset split. | dataset: name of the dataset, config: name of the config, split: name of the split |
| /rows | GET | Get a slice of rows of a dataset split. | dataset: name of the dataset, config: name of the config, split: name of the split, offset: offset of the slice, length: length of the slice (maximum 100) |
| /parquet | GET | Get the list of Parquet files of a dataset. | dataset: name of the dataset |

There is no installation or setup required to use Datasets Server.

Sign up for a Hugging Face account if you don't already have one! While you can use Datasets Server without a Hugging Face account, you won't be able to access gated datasets, such as CommonVoice and ImageNet, without providing a user token, which you can find in your user settings.

Feel free to try out the API in Postman, ReDoc or RapidAPI. This quickstart will show you how to query the endpoints programmatically.

The base URL of the REST API is:

https://datasets-server.huggingface.co
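
All endpoints are served from this base URL. As a minimal sketch (the endpoint and parameter here are taken from the table above), you can let requests build the query string from a params dict instead of hardcoding it into the URL:

import requests

API_URL = "https://datasets-server.huggingface.co"

# requests encodes the query string for you, producing
# /splits?dataset=rotten_tomatoes
response = requests.get(f"{API_URL}/splits", params={"dataset": "rotten_tomatoes"})
print(response.json())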

Gated datasets

For gated datasets, you’ll need to provide your user token in the headers of your query. Otherwise, you’ll get an error message asking you to retry with authentication.

Python
import requests

API_TOKEN = "hf_xxx"  # placeholder: your User Access Token from your Hugging Face settings
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/is-valid?dataset=mozilla-foundation/common_voice_10_0"

def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()

data = query()

You’ll see the following error if you try to access a gated dataset without providing your user token:

print(data)
{'error': 'The dataset does not exist, or is not accessible without authentication (private or gated). Please check the spelling of the dataset name or retry with authentication.'}
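
If you want to handle this case programmatically, one option (a sketch, not the only approach) is to check the JSON body for an error key before using the response:

import requests

API_URL = "https://datasets-server.huggingface.co/is-valid?dataset=mozilla-foundation/common_voice_10_0"
API_TOKEN = "hf_xxx"  # placeholder: your User Access Token

response = requests.get(API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"})
data = response.json()
if "error" in data:
    # The server answers with an "error" key instead of the usual payload
    raise RuntimeError(data["error"])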

Check dataset validity

The /valid endpoint returns a JSON list of datasets stored on the Hub that load without any errors:

Python
import requests

API_URL = "https://datasets-server.huggingface.co/valid"

def query():
    response = requests.get(API_URL)
    return response.json()

data = query()

This returns a list of all the datasets that load without an error:

print(data)
{
  "valid": [
    "0n1xus/codexglue",
    "0n1xus/pytorrent-standalone",
    "0x7194633/rupile",
    "51la5/keyword-extraction",
    ...,
    ...,
  ]
}

To check whether a specific dataset is valid, for example, Rotten Tomatoes, use the /is-valid endpoint instead:

Python
import requests

API_URL = "https://datasets-server.huggingface.co/is-valid?dataset=rotten_tomatoes"

def query():
    response = requests.get(API_URL)
    return response.json()

data = query()

The response indicates whether the dataset is valid with a boolean valid key:

print(data)
{'valid': True}
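
Since the response carries a single boolean, a natural pattern (just a sketch) is to gate any further queries on it:

import requests

API_URL = "https://datasets-server.huggingface.co/is-valid"
data = requests.get(API_URL, params={"dataset": "rotten_tomatoes"}).json()

if data["valid"]:
    print("Dataset works; safe to query /splits, /first-rows, /rows, and /parquet.")
else:
    print("Dataset is not supported by Datasets Server.")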

List configurations and splits

The /splits endpoint returns a JSON list of the splits in a dataset:

Python
import requests

API_URL = "https://datasets-server.huggingface.co/splits?dataset=rotten_tomatoes"

def query():
    response = requests.get(API_URL)
    return response.json()

data = query()

This returns the available configurations and splits of the dataset:

print(data)
{'splits': 
    [
        {'dataset': 'rotten_tomatoes', 'config': 'default', 'split': 'train'}, 
        {'dataset': 'rotten_tomatoes', 'config': 'default', 'split': 'validation'}, 
        {'dataset': 'rotten_tomatoes', 'config': 'default', 'split': 'test'}
    ], 
 'pending': [], 
 'failed': []
}
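
The entries in splits can be fed straight into the other endpoints. For example, here is a quick sketch that fetches the first rows of every split returned above (using the /first-rows endpoint described next):

import requests

API_URL = "https://datasets-server.huggingface.co"
splits = requests.get(f"{API_URL}/splits", params={"dataset": "rotten_tomatoes"}).json()["splits"]

for s in splits:
    # Each entry carries the dataset, config, and split names expected by /first-rows
    first_rows = requests.get(
        f"{API_URL}/first-rows",
        params={"dataset": s["dataset"], "config": s["config"], "split": s["split"]},
    ).json()
    print(s["split"], len(first_rows["rows"]))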

Preview a dataset

The /first-rows endpoint returns a JSON list of the first 100 rows of a dataset split, along with the data types of its features (the column types). Specify the dataset, configuration, and split you’d like to preview (the configuration name is returned by the /splits endpoint):

Python
import requests

API_URL = "https://datasets-server.huggingface.co/first-rows?dataset=rotten_tomatoes&config=default&split=train"

def query():
    response = requests.get(API_URL)
    return response.json()

data = query()

This returns the first 100 rows of the dataset:

print(data)
{'dataset': 'rotten_tomatoes', 'config': 'default', 'split': 'train', 
 'features': 
    [
        {'feature_idx': 0, 'name': 'text', 'type': {'dtype': 'string', '_type': 'Value'}}, 
        {'feature_idx': 1, 'name': 'label', 'type': {'names': ['neg', 'pos'], '_type': 'ClassLabel'}}
    ], 
 'rows': 
    [
        {'row_idx': 0, 'row': {'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .', 'label': 1}, 'truncated_cells': []}, 
        {'row_idx': 1, 'row': {'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson\'s expanded vision of j . r . r . tolkien\'s middle-earth .', 'label': 1}, 'truncated_cells': []},
        ...,
        ...,
    ],
}
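
To work with the preview locally, one option (a sketch assuming pandas is installed, using the data returned above) is to flatten the rows into a DataFrame:

import pandas as pd

# Each entry in data["rows"] wraps the actual record under the "row" key
df = pd.DataFrame([row["row"] for row in data["rows"]])
print(df.head())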

Download slices of a dataset

The /rows endpoint returns a JSON list of a slice of rows of a dataset split at any given location (offset), along with the data types of its features (the column types). Specify the dataset, configuration, split, and the offset and length of the slice you’d like to download (the configuration name is returned by the /splits endpoint):

Python
import requests

API_URL = "https://datasets-server.huggingface.co/rows?dataset=rotten_tomatoes&config=default&split=train&offset=150&length=10"

def query():
    response = requests.get(API_URL)
    return response.json()

data = query()

You can download at most 100 rows at a time.

The response looks like:

print(data)
{'features': 
    [
        {'feature_idx': 0, 'name': 'text', 'type': {'dtype': 'string', '_type': 'Value'}}, 
        {'feature_idx': 1, 'name': 'label', 'type': {'names': ['neg', 'pos'], '_type': 'ClassLabel'}}
    ], 
 'rows': 
    [
        {'row_idx': 150, 'row': {'text': 'enormously likable , partly because it is aware of its own grasp of the absurd .', 'label': 1}, 'truncated_cells': []}, 
        {'row_idx': 151, 'row': {'text': "here's a british flick gleefully unconcerned with plausibility , yet just as determined to entertain you .", 'label': 1}, 'truncated_cells': []}, 
        {'row_idx': 152, 'row': {'text': "it's an old story , but a lively script , sharp acting and partially animated interludes make just a kiss seem minty fresh .", 'label': 1}, 'truncated_cells': []}, 
        {'row_idx': 153, 'row': {'text': 'must be seen to be believed .', 'label': 1}, 'truncated_cells': []}, 
        {'row_idx': 154, 'row': {'text': "ray liotta and jason patric do some of their best work in their underwritten roles , but don't be fooled : nobody deserves any prizes here .", 'label': 1}, 'truncated_cells': []}, 
        ...,
        ...,
    ]
}
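
Because a single call returns at most 100 rows, fetching a longer range means paginating with offset. A minimal sketch (assuming the server returns fewer rows than requested once the split is exhausted):

import requests

API_URL = "https://datasets-server.huggingface.co/rows"
params = {"dataset": "rotten_tomatoes", "config": "default", "split": "train",
          "offset": 0, "length": 100}

all_rows = []
while True:
    batch = requests.get(API_URL, params=params).json()["rows"]
    all_rows.extend(batch)
    if len(batch) < params["length"]:  # past the end of the split
        break
    params["offset"] += params["length"]

print(f"fetched {len(all_rows)} rows")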

Access Parquet files

Datasets Server converts every public dataset on the Hub to the Parquet format. The /parquet endpoint returns a JSON list of the Parquet URLs for a dataset:

Python
import requests

API_URL = "https://datasets-server.huggingface.co/parquet?dataset=rotten_tomatoes"

def query():
    response = requests.get(API_URL)
    return response.json()

data = query()

This returns a URL to the Parquet file for each split:

print(data)
{'parquet_files': 
    [
        {'dataset': 'rotten_tomatoes', 'config': 'default', 'split': 'test', 'url': 'https://huggingface.co/datasets/rotten_tomatoes/resolve/refs%2Fconvert%2Fparquet/default/test/0000.parquet', 'filename': '0000.parquet', 'size': 92206}, 
        {'dataset': 'rotten_tomatoes', 'config': 'default', 'split': 'train', 'url': 'https://huggingface.co/datasets/rotten_tomatoes/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet', 'filename': '0000.parquet', 'size': 698845}, 
        {'dataset': 'rotten_tomatoes', 'config': 'default', 'split': 'validation', 'url': 'https://huggingface.co/datasets/rotten_tomatoes/resolve/refs%2Fconvert%2Fparquet/default/validation/0000.parquet', 'filename': '0000.parquet', 'size': 90001}
    ], 
 'pending': [], 
 'failed': []
}
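
These URLs point at ordinary Parquet files, so you can read them with any Parquet-capable library. Here is a sketch assuming pandas (with a Parquet engine such as pyarrow) is installed, downloading the train split from the response above:

import io

import pandas as pd
import requests

# Pick the train split's URL out of the /parquet response
train_url = next(f["url"] for f in data["parquet_files"] if f["split"] == "train")

# Download the file and read it into a DataFrame
content = requests.get(train_url).content
df = pd.read_parquet(io.BytesIO(content))
print(df.shape)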