( description: str = <factory> citation: str = <factory> homepage: str = <factory> license: str = <factory> features: typing.Optional[datasets.features.features.Features] = None post_processed: typing.Optional[datasets.info.PostProcessedInfo] = None supervised_keys: typing.Optional[datasets.info.SupervisedKeysData] = None builder_name: typing.Optional[str] = None dataset_name: typing.Optional[str] = None config_name: typing.Optional[str] = None version: typing.Union[str, datasets.utils.version.Version, NoneType] = None splits: typing.Optional[dict] = None download_checksums: typing.Optional[dict] = None download_size: typing.Optional[int] = None post_processing_size: typing.Optional[int] = None dataset_size: typing.Optional[int] = None size_in_bytes: typing.Optional[int] = None )
Parameters

- description (str) — A description of the dataset.
- citation (str) — A BibTeX citation of the dataset.
- homepage (str) — A URL to the official homepage for the dataset.
- license (str) — The dataset's license. It can be the name of the license or a paragraph containing the terms of the license.
- features (Features, optional) — The features used to specify the dataset's column types.
- post_processed (PostProcessedInfo, optional) — Information regarding the resources of a possible post-processing of a dataset. For example, it can contain the information of an index.
- supervised_keys (SupervisedKeysData, optional) — Specifies the input feature and the label for supervised learning if applicable for the dataset (legacy from TFDS).
- builder_name (str, optional) — The name of the GeneratorBasedBuilder subclass used to create the dataset. Usually matched to the corresponding script name. It is also the snake_case version of the dataset builder class name.
- config_name (str, optional) — The name of the configuration derived from BuilderConfig.
- version (str or Version, optional) — The version of the dataset.
- splits (dict, optional) — The mapping between split name and metadata.
- download_checksums (dict, optional) — The mapping between the URL to download the dataset's checksums and corresponding metadata.
- download_size (int, optional) — The size of the files to download to generate the dataset, in bytes.
- post_processing_size (int, optional) — Size of the dataset in bytes after post-processing, if any.
- dataset_size (int, optional) — The combined size in bytes of the Arrow tables for all splits.
- size_in_bytes (int, optional) — The combined size in bytes of all files associated with the dataset (downloaded files + Arrow files).

Information about a dataset.

DatasetInfo documents datasets, including its name, version, and features. See the constructor arguments and properties for a full list.

Not all fields are known on construction and may be updated later.
( dataset_info_dir: str storage_options: typing.Optional[dict] = None )
Create DatasetInfo from the JSON file in dataset_info_dir.
This function updates all the dynamically generated fields (num_examples, hash, time of creation,…) of the DatasetInfo.
This will overwrite all previous metadata.
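Example (a minimal usage sketch; the directory path is a placeholder for wherever a dataset_info.json was previously written):
>>> from datasets import DatasetInfo
>>> ds_info = DatasetInfo.from_directory("/path/to/directory/")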
( dataset_info_dir pretty_print = False storage_options: typing.Optional[dict] = None )
Write DatasetInfo and license (if present) as JSON files to dataset_info_dir.
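Example (a minimal usage sketch; the output path is a placeholder):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.info.write_to_directory("/path/to/directory/")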
The base class Dataset implements a Dataset backed by an Apache Arrow table.
( arrow_table: Table info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None indices_table: typing.Optional[datasets.table.Table] = None fingerprint: typing.Optional[str] = None )
A Dataset backed by an Arrow table.
( name: str column: typing.Union[list, <built-in function array>] new_fingerprint: str feature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.LargeList, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image, datasets.features.video.Video, NoneType] = None )
Add column to Dataset.
Added in 1.7
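Example (a short usage sketch that reuses the existing text column as the values of a new column named text_2, which is a name chosen here for illustration):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> more_text = ds["text"]
>>> ds.add_column(name="text_2", column=more_text)
Dataset({
    features: ['text', 'label', 'text_2'],
    num_rows: 1066
})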
( item: dict new_fingerprint: str )
Add item to Dataset.
Added in 1.7
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> new_review = {'label': 0, 'text': 'this movie is the absolute worst thing I have ever seen'}
>>> ds = ds.add_item(new_review)
>>> ds[-1]
{'label': 0, 'text': 'this movie is the absolute worst thing I have ever seen'}
( filename: str info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None indices_filename: typing.Optional[str] = None in_memory: bool = False )
Parameters

- filename (str) — File name of the dataset.
- info (DatasetInfo, optional) — Dataset information, like description, citation, etc.
- split (NamedSplit, optional) — Name of the dataset split.
- indices_filename (str, optional) — File names of the indices.
- in_memory (bool, defaults to False) — Whether to copy the data in-memory.

Instantiate a Dataset backed by an Arrow table at filename.
( buffer: Buffer info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None indices_buffer: typing.Optional[pyarrow.lib.Buffer] = None )
Instantiate a Dataset backed by an Arrow buffer.
( df: DataFrame features: typing.Optional[datasets.features.features.Features] = None info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None preserve_index: typing.Optional[bool] = None )
Parameters

- df (pandas.DataFrame) — Dataframe that contains the dataset.
- info (DatasetInfo, optional) — Dataset information, like description, citation, etc.
- split (NamedSplit, optional) — Name of the dataset split.
- preserve_index (bool, optional) — Whether to store the index as an additional column in the resulting Dataset. The default of None will store the index as a column, except for RangeIndex which is stored as metadata only. Use preserve_index=True to force it to be stored as a column.

Convert pandas.DataFrame to a pyarrow.Table to create a Dataset.

The column types in the resulting Arrow Table are inferred from the dtypes of the pandas.Series in the DataFrame. In the case of non-object Series, the NumPy dtype is translated to its Arrow equivalent. In the case of object, we need to guess the datatype by looking at the Python objects in this Series.

Be aware that Series of the object dtype don't carry enough information to always lead to a meaningful Arrow type. In the case that we cannot infer a type, e.g. because the DataFrame is of length 0 or the Series only contains None/nan objects, the type is set to null. This behavior can be avoided by constructing explicit features and passing it to this function.
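Example (a self-contained sketch using a small, made-up DataFrame):
>>> import pandas as pd
>>> from datasets import Dataset
>>> df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
>>> ds = Dataset.from_pandas(df)
>>> ds
Dataset({
    features: ['text', 'label'],
    num_rows: 2
})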
( mapping: dict features: typing.Optional[datasets.features.features.Features] = None info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None )
Parameters

- mapping (Mapping) — Mapping of strings to Arrays or Python lists.
- info (DatasetInfo, optional) — Dataset information, like description, citation, etc.
- split (NamedSplit, optional) — Name of the dataset split.

Convert dict to a pyarrow.Table to create a Dataset.
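Example (a minimal sketch with a toy mapping):
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
>>> ds
Dataset({
    features: ['text', 'label'],
    num_rows: 2
})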
( generator: typing.Callable features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False gen_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None split: NamedSplit = NamedSplit('train') **kwargs )
Parameters

- generator (Callable) — A generator function that yields examples.
- cache_dir (str, optional, defaults to "~/.cache/huggingface/datasets") — Directory to cache data.
- keep_in_memory (bool, defaults to False) — Whether to copy the data in-memory.
- gen_kwargs (dict, optional) — Keyword arguments to be passed to the generator callable. You can define a sharded dataset by passing the list of shards in gen_kwargs and setting num_proc greater than 1.
- num_proc (int, optional, defaults to None) — Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. If num_proc is greater than one, then all list values in gen_kwargs must be the same length. These values will be split between calls to the generator. The number of shards will be the minimum of the shortest list in gen_kwargs and num_proc. Added in 2.7.0
- split (NamedSplit, defaults to Split.TRAIN) — Split name to be assigned to the dataset. Added in 2.21.0
- **kwargs (additional keyword arguments) — Keyword arguments to be passed to GeneratorConfig.

Create a Dataset from a generator.
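Example (a minimal sketch using a two-example generator):
>>> from datasets import Dataset
>>> def gen():
...     yield {"text": "Good", "label": 0}
...     yield {"text": "Bad", "label": 1}
...
>>> ds = Dataset.from_generator(gen)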
The Apache Arrow table backing the dataset.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.data
MemoryMappedTable
text: string
label: int64
----
text: [["compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .","the soundtrack alone is worth the price of admission .","rodriguez does a splendid job of racial profiling hollywood style--casting excellent latin actors of all ages--a trend long overdue .","beneath the film's obvious determination to shock at any cost lies considerable skill and determination , backed by sheer nerve .","bielinsky is a filmmaker of impressive talent .","so beautifully acted and directed , it's clear that washington most certainly has a new career ahead of him if he so chooses .","a visual spectacle full of stunning images and effects .","a gentle and engrossing character study .","it's enough to watch huppert scheming , with her small , intelligent eyes as steady as any noir villain , and to enjoy the perfectly pitched web of tension that chabrol spins .","an engrossing portrait of uncompromising artists trying to create something original against the backdrop of a corporate music industry that only seems to care about the bottom line .",...,"ultimately , jane learns her place as a girl , softens up and loses some of the intensity that made her an interesting character to begin with .","ah-nuld's action hero days might be over .","it's clear why deuces wild , which was shot two years ago , has been gathering dust on mgm's shelf .","feels like nothing quite so much as a middle-aged moviemaker's attempt to surround himself with beautiful , half-naked women .","when the precise nature of matthew's predicament finally comes into sharp focus , the revelation fails to justify the build-up .","this picture is murder by numbers , and as easy to be bored by as your abc's , despite a few whopping shootouts .","hilarious musical comedy though stymied by accents thick as mud .","if you are into splatter movies , then you will probably have a reasonably good time with the salton sea .","a dull , simple-minded and stereotypical tale of drugs , death and mind-numbing indifference on the inner-city streets .","the feature-length stretch . . . strains the show's concept ."]]
label: [[1,1,1,1,1,1,1,1,1,1,...,0,0,0,0,0,0,0,0,0,0]]
The cache files containing the Apache Arrow table backing the dataset.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.cache_files
[{'filename': '/root/.cache/huggingface/datasets/rotten_tomatoes_movie_review/default/1.0.0/40d411e45a6ce3484deed7cc15b82a53dad9a72aafd9f86f8f227134bec5ca46/rotten_tomatoes_movie_review-validation.arrow'}]
Number of columns in the dataset.
Number of rows in the dataset (same as Dataset.__len__()).
Names of the columns in the dataset.
Shape of the dataset (number of columns, number of rows).
( column: str ) → list
Parameters

- column (str) — Column name (list all the column names with column_names).

Returns: list — List of unique elements in the given column.

Return a list of the unique elements in a column.

This is implemented in the low-level backend and as such, very fast.
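Example (a short usage sketch on the label column):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.unique('label')
[1, 0]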
( new_fingerprint: typing.Optional[str] = None max_depth = 16 ) → Dataset
Flatten the table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("squad", split="train")
>>> ds.features
{'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None),
'context': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}
>>> ds.flatten()
Dataset({
features: ['id', 'title', 'context', 'question', 'answers.text', 'answers.answer_start'],
num_rows: 87599
})
( features: Features batch_size: typing.Optional[int] = 1000 keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 num_proc: typing.Optional[int] = None ) → Dataset
Parameters

- features (Features) — New features to cast the dataset to. For non-trivial conversion, e.g. str <-> ClassLabel, you should use map() to update the Dataset.
- batch_size (int, defaults to 1000) — Number of examples per batch provided to cast. If batch_size <= 0 or batch_size == None, then provide the full dataset as a single batch to cast.
- keep_in_memory (bool, defaults to False) — Whether to copy the data in-memory.
- load_from_cache_file (bool, defaults to True if caching is enabled) — If a cache file storing the current computation from function can be identified, use it instead of recomputing.
- cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map().
- num_proc (int, optional, defaults to None) — Number of processes for multiprocessing. By default it doesn't use multiprocessing.

Returns: Dataset — A copy of the dataset with casted features.

Cast the dataset to a new set of features.
Example:
>>> from datasets import load_dataset, ClassLabel, Value
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> new_features = ds.features.copy()
>>> new_features['label'] = ClassLabel(names=['bad', 'good'])
>>> new_features['text'] = Value('large_string')
>>> ds = ds.cast(new_features)
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='large_string', id=None)}
( column: str feature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.LargeList, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image, datasets.features.video.Video] new_fingerprint: typing.Optional[str] = None )
Cast column to feature for decoding.
Example:
>>> from datasets import load_dataset, ClassLabel
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> ds = ds.cast_column('label', ClassLabel(names=['bad', 'good']))
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='string', id=None)}
( column_names: typing.Union[str, typing.List[str]] new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters

- column_names (Union[str, List[str]]) — Name of the column(s) to remove.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Returns: Dataset — A copy of the dataset object without the columns to remove.

Remove one or several column(s) in the dataset and the features associated to them.

You can also remove a column using map() with remove_columns, but the present method doesn't copy the data of the remaining columns and is thus faster.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.remove_columns('label')
Dataset({
features: ['text'],
num_rows: 1066
})
>>> ds = ds.remove_columns(column_names=ds.column_names) # Removing all the columns returns an empty dataset with the `num_rows` property set to 0
Dataset({
features: [],
num_rows: 0
})
( original_column_name: str new_column_name: str new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters

- original_column_name (str) — Name of the column to rename.
- new_column_name (str) — New name for the column.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Returns: Dataset — A copy of the dataset with a renamed column.

Rename a column in the dataset, and move the features associated to the original column under the new column name.
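Example (a short usage sketch; label_new is an arbitrary new name):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.rename_column('label', 'label_new')
Dataset({
    features: ['text', 'label_new'],
    num_rows: 1066
})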
( column_mapping: typing.Dict[str, str] new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters

- column_mapping (Dict[str, str]) — A mapping of columns to rename to their new names.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Returns: Dataset — A copy of the dataset with renamed columns.

Rename several columns in the dataset, and move the features associated to the original columns under the new column names.
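Example (a short usage sketch; the new column names are arbitrary):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.rename_columns({'text': 'text_new', 'label': 'label_new'})
Dataset({
    features: ['text_new', 'label_new'],
    num_rows: 1066
})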
( column_names: typing.Union[str, typing.List[str]] new_fingerprint: typing.Optional[str] = None ) → Dataset
Parameters

- column_names (Union[str, List[str]]) — Name of the column(s) to keep.
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Returns: Dataset — A copy of the dataset object which only consists of selected columns.

Select one or several column(s) in the dataset and the features associated to them.
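Example (a short usage sketch keeping only the text column):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.select_columns(['text'])
Dataset({
    features: ['text'],
    num_rows: 1066
})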
( column: str include_nulls: bool = False )
Parameters

- column (str) — The name of the column to cast (list all the column names with column_names).
- include_nulls (bool, defaults to False) — Whether to include null values in the class labels. If True, the null values will be encoded as the "None" class label. Added in 1.14.2

Casts the given column as ClassLabel and updates the table.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("boolq", split="validation")
>>> ds.features
{'answer': Value(dtype='bool', id=None),
'passage': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None)}
>>> ds = ds.class_encode_column('answer')
>>> ds.features
{'answer': ClassLabel(num_classes=2, names=['False', 'True'], id=None),
'passage': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None)}
Number of rows in the dataset.
Iterate through the examples.
If a formatting is set with Dataset.set_format() rows will be returned with the selected format.
( batch_size: int drop_last_batch: bool = False )
Iterate through the batches of size batch_size.
If a formatting is set with Dataset.set_format(), rows will be returned with the selected format.
( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters

- type (str, optional) — Output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
- columns (List[str], optional) — Columns to format in the output. None means __getitem__ returns all columns (default).
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects).
- **format_kwargs (additional keyword arguments) — Keywords arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.

To be used in a with statement. Set __getitem__ return format (type and columns).
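Example (a minimal sketch of the with-statement usage; inside the block, queried columns come back in the requested format, here NumPy):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> with ds.formatted_as(type="numpy", columns=["label"]):
...     labels = ds[:3]["label"]  # returned as a NumPy array inside the block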
( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters

- type (str, optional) — Either output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
- columns (List[str], optional) — Columns to format in the output. None means __getitem__ returns all columns (default).
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects).
- **format_kwargs (additional keyword arguments) — Keywords arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.

Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly. The format type (for example "numpy") is used to format batches when using __getitem__. It's also possible to use custom transforms for formatting using set_transform().

It is possible to call map() after calling set_format. Since map may add new columns, then the list of formatted columns gets updated. In this case, if you apply map on a dataset to add a new column, then this column will be formatted as:

new formatted columns = (all columns - previously unformatted columns)
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds.set_format(type='numpy', columns=['text', 'label'])
>>> ds.format
{'type': 'numpy',
'format_kwargs': {},
'columns': ['text', 'label'],
'output_all_columns': False}
( transform: typing.Optional[typing.Callable] columns: typing.Optional[typing.List] = None output_all_columns: bool = False )
Parameters

- transform (Callable, optional) — User-defined formatting transform, replaces the format defined by set_format(). A formatting function is a callable that takes a batch (as a dict) as input and returns a batch. This function is applied right before returning the objects in __getitem__.
- columns (List[str], optional) — Columns to format in the output. If specified, then the input batch of the transform only contains those columns.
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects). If set to True, then the other un-formatted columns are kept with the output of the transform.

Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called.

As set_format(), this can be reset using reset_format().
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
>>> def encode(batch):
... return tokenizer(batch['text'], padding=True, truncation=True, return_tensors='pt')
>>> ds.set_transform(encode)
>>> ds[0]
{'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1]),
'input_ids': tensor([ 101, 29353, 2135, 15102, 1996, 9428, 20868, 2890, 8663, 6895,
20470, 2571, 3663, 2090, 4603, 3017, 3008, 1998, 2037, 24211,
5637, 1998, 11690, 2336, 1012, 102]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0])}
Reset __getitem__ return format to python objects and all columns.

Same as self.set_format().
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds.set_format(type='numpy', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
>>> ds.format
{'columns': ['input_ids', 'token_type_ids', 'attention_mask', 'label'],
'format_kwargs': {},
'output_all_columns': False,
'type': 'numpy'}
>>> ds.reset_format()
>>> ds.format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': None}
( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters

- type (str, optional) — Either output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']. None means __getitem__ returns python objects (default).
- columns (List[str], optional) — Columns to format in the output. None means __getitem__ returns all columns (default).
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects).
- **format_kwargs (additional keyword arguments) — Keywords arguments passed to the convert function like np.array, torch.tensor or tensorflow.ragged.constant.

Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly. The format type (for example "numpy") is used to format batches when using __getitem__. It's also possible to use custom transforms for formatting using with_transform().

Contrary to set_format(), with_format returns a new Dataset object.
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds.format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': None}
>>> ds = ds.with_format("torch")
>>> ds.format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': 'torch'}
>>> ds[0]
{'text': 'compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .',
'label': tensor(1),
'input_ids': tensor([ 101, 18027, 16310, 16001, 1103, 9321, 178, 11604, 7235, 6617,
1742, 2165, 2820, 1206, 6588, 22572, 12937, 1811, 2153, 1105,
1147, 12890, 19587, 6463, 1105, 15026, 1482, 119, 102, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])}
( transform: typing.Optional[typing.Callable] columns: typing.Optional[typing.List] = None output_all_columns: bool = False )
Parameters

- transform (Callable, optional) — User-defined formatting transform, replaces the format defined by set_format(). A formatting function is a callable that takes a batch (as a dict) as input and returns a batch. This function is applied right before returning the objects in __getitem__.
- columns (List[str], optional) — Columns to format in the output. If specified, then the input batch of the transform only contains those columns.
- output_all_columns (bool, defaults to False) — Keep un-formatted columns as well in the output (as python objects). If set to True, then the other un-formatted columns are kept with the output of the transform.

Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called.

As set_format(), this can be reset using reset_format().

Contrary to set_transform(), with_transform returns a new Dataset object.
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> def encode(example):
... return tokenizer(example["text"], padding=True, truncation=True, return_tensors='pt')
>>> ds = ds.with_transform(encode)
>>> ds[0]
{'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1]),
'input_ids': tensor([ 101, 18027, 16310, 16001, 1103, 9321, 178, 11604, 7235, 6617,
1742, 2165, 2820, 1206, 6588, 22572, 12937, 1811, 2153, 1105,
1147, 12890, 19587, 6463, 1105, 15026, 1482, 119, 102]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0])}
Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).
Clean up all cache files in the dataset cache directory, except the currently used cache file if there is one.
Be careful when running this command that no other process is currently using other cache files.
( function: typing.Optional[typing.Callable] = None with_indices: bool = False with_rank: bool = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 drop_last_batch: bool = False remove_columns: typing.Union[str, typing.List[str], NoneType] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 features: typing.Optional[datasets.features.features.Features] = None disable_nullable: bool = False fn_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None suffix_template: str = '_{rank:05d}_of_{num_proc:05d}' new_fingerprint: typing.Optional[str] = None desc: typing.Optional[str] = None )
Parameters

- function (Callable) — Function with one of the following signatures:
  - function(example: Dict[str, Any]) -> Dict[str, Any] if batched=False and with_indices=False and with_rank=False
  - function(example: Dict[str, Any], *extra_args) -> Dict[str, Any] if batched=False and with_indices=True and/or with_rank=True (one extra arg for each)
  - function(batch: Dict[str, List]) -> Dict[str, List] if batched=True and with_indices=False and with_rank=False
  - function(batch: Dict[str, List], *extra_args) -> Dict[str, List] if batched=True and with_indices=True and/or with_rank=True (one extra arg for each)
  For advanced usage, the function can also return a pyarrow.Table. Moreover if your function returns nothing (None), then map will run your function and return the dataset unchanged. If no function is provided, defaults to the identity function: lambda x: x.
- with_indices (bool, defaults to False) — Provide example indices to function. Note that in this case the signature of function should be def function(example, idx[, rank]): ....
- with_rank (bool, defaults to False) — Provide process rank to function. Note that in this case the signature of function should be def function(example[, idx], rank): ....
- input_columns (Optional[Union[str, List[str]]], defaults to None) — The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
- batched (bool, defaults to False) — Provide batch of examples to function.
- batch_size (int, optional, defaults to 1000) — Number of examples per batch provided to function if batched=True. If batch_size <= 0 or batch_size == None, provide the full dataset as a single batch to function.
- drop_last_batch (bool, defaults to False) — Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function.
- remove_columns (Optional[Union[str, List[str]]], defaults to None) — Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output of function, i.e. if function is adding columns with names in remove_columns, these columns will be kept.
- keep_in_memory (bool, defaults to False) — Keep the dataset in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the current computation from function can be identified, use it instead of recomputing.
- cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- features (Optional[datasets.Features], defaults to None) — Use a specific Features to store the cache file instead of the automatically generated one.
- disable_nullable (bool, defaults to False) — Disallow null values in the table.
- fn_kwargs (Dict, optional, defaults to None) — Keyword arguments to be passed to function.
- num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached shards are loaded sequentially.
- suffix_template (str) — If cache_file_name is specified, then this suffix will be added at the end of the base name of each. Defaults to "_{rank:05d}_of_{num_proc:05d}". For example, if cache_file_name is "processed.arrow", then for rank=1 and num_proc=4, the resulting file would be "processed_00001_of_00004.arrow" for the default suffix.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
- desc (str, optional, defaults to None) — Meaningful description to be displayed alongside with the progress bar while mapping examples.

Apply a function to all the examples in the table (individually or in batches) and update the table. If your function returns a column that already exists, then it overwrites it.

You can specify whether the function should be batched or not with the batched parameter:

- If batched is False, then the function takes 1 example in and should return 1 example. An example is a dictionary, e.g. {"text": "Hello there !"}.
- If batched is True and batch_size is 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples. A batch is a dictionary, e.g. a batch of 1 example is {"text": ["Hello there !"]}.
- If batched is True and batch_size is n > 1, then the function takes a batch of n examples as input and can return a batch with n examples, or with an arbitrary number of examples. Note that the last batch may have less than n examples. A batch is a dictionary, e.g. a batch of n examples is {"text": ["Hello there !"] * n}.

Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> def add_prefix(example):
... example["text"] = "Review: " + example["text"]
... return example
>>> ds = ds.map(add_prefix)
>>> ds[0:3]["text"]
['Review: compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .',
'Review: the soundtrack alone is worth the price of admission .',
'Review: rodriguez does a splendid job of racial profiling hollywood style--casting excellent latin actors of all ages--a trend long overdue .']
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
# set number of processors
>>> ds = ds.map(add_prefix, num_proc=4)
( function: typing.Optional[typing.Callable] = None with_indices: bool = False with_rank: bool = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 fn_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None suffix_template: str = '_{rank:05d}_of_{num_proc:05d}' new_fingerprint: typing.Optional[str] = None desc: typing.Optional[str] = None )
Parameters

- function (Callable) — Callable with one of the following signatures:
  - function(example: Dict[str, Any]) -> bool if batched=False and with_indices=False and with_rank=False
  - function(example: Dict[str, Any], *extra_args) -> bool if batched=False and with_indices=True and/or with_rank=True (one extra arg for each)
  - function(batch: Dict[str, List]) -> List[bool] if batched=True and with_indices=False and with_rank=False
  - function(batch: Dict[str, List], *extra_args) -> List[bool] if batched=True and with_indices=True and/or with_rank=True (one extra arg for each)
  If no function is provided, defaults to an always True function: lambda x: True.
- with_indices (bool, defaults to False) — Provide example indices to function. Note that in this case the signature of function should be def function(example, idx[, rank]): ....
- with_rank (bool, defaults to False) — Provide process rank to function. Note that in this case the signature of function should be def function(example[, idx], rank): ....
- input_columns (str or List[str], optional) — The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
- batched (bool, defaults to False) — Provide batch of examples to function.
- batch_size (int, optional, defaults to 1000) — Number of examples per batch provided to function if batched=True. If batched=False, one example per batch is passed to function. If batch_size <= 0 or batch_size == None, provide the full dataset as a single batch to function.
- keep_in_memory (bool, defaults to False) — Keep the dataset in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the current computation from function can be identified, use it instead of recomputing.
- cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- fn_kwargs (dict, optional) — Keyword arguments to be passed to function.
- num_proc (int, optional) — Number of processes for multiprocessing. By default it doesn't use multiprocessing.
- suffix_template (str) — If cache_file_name is specified, then this suffix will be added at the end of the base name of each. For example, if cache_file_name is "processed.arrow", then for rank = 1 and num_proc = 4, the resulting file would be "processed_00001_of_00004.arrow" for the default suffix (default _{rank:05d}_of_{num_proc:05d}).
- new_fingerprint (str, optional) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
- desc (str, optional, defaults to None) — Meaningful description to be displayed alongside with the progress bar while filtering examples.

Apply a filter function to all the elements in the table in batches and update the table so that the dataset only includes examples according to the filter function.
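Example (a short usage sketch keeping only positive reviews; the row count shown assumes the balanced rotten_tomatoes validation split of 1,066 examples):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.filter(lambda x: x["label"] == 1)
Dataset({
    features: ['text', 'label'],
    num_rows: 533
})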
( indices: typing.Iterable keep_in_memory: bool = False indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 new_fingerprint: typing.Optional[str] = None )
Parameters

- indices (range, list, iterable, ndarray or Series) — Range, list or 1D-array of integer indices for indexing. If the indices correspond to a contiguous range, the Arrow table is simply sliced. However passing a list of indices that are not contiguous creates an indices mapping, which is much less efficient, but still faster than recreating an Arrow table made of the requested rows.
- keep_in_memory (bool, defaults to False) — Keep the indices mapping in memory instead of writing it to a cache file.
- indices_cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the indices mapping instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Create a new dataset with rows selected following the list/array of indices.
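Example (a short usage sketch selecting the first four rows):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.select(range(4))
Dataset({
    features: ['text', 'label'],
    num_rows: 4
})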
( column_names: typing.Union[str, typing.Sequence[str]] reverse: typing.Union[bool, typing.Sequence[bool]] = False null_placement: str = 'at_end' keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 new_fingerprint: typing.Optional[str] = None )
Parameters

- column_names (Union[str, Sequence[str]]) — Column name(s) to sort by.
- reverse (Union[bool, Sequence[bool]], defaults to False) — If True, sort by descending order rather than ascending. If a single bool is provided, the value is applied to the sorting of all column names. Otherwise a list of bools with the same length and order as column_names must be provided.
- null_placement (str, defaults to at_end) — Put None values at the beginning if at_start or first, or at the end if at_end or last. Added in 1.14.2
- keep_in_memory (bool, defaults to False) — Keep the sorted indices in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the sorted indices can be identified, use it instead of recomputing.
- indices_cache_file_name (str, optional, defaults to None) — Provide the name of a path for the cache file. It is used to store the sorted indices instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. Higher value gives smaller cache files, lower value consume less temporary memory.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Create a new dataset sorted according to a single or multiple columns.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset('rotten_tomatoes', split='validation')
>>> ds['label'][:10]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> sorted_ds = ds.sort('label')
>>> sorted_ds['label'][:10]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> another_sorted_ds = ds.sort(['label', 'text'], reverse=[True, False])
>>> another_sorted_ds['label'][:10]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
( seed: typing.Optional[int] = None generator: typing.Optional[numpy.random._generator.Generator] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 new_fingerprint: typing.Optional[str] = None )
Parameters

- seed (int, optional) — A seed to initialize the default BitGenerator if generator=None. If None, then fresh, unpredictable entropy will be pulled from the OS. If an int or array_like[ints] is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
- generator (numpy.random.Generator, optional) — Numpy random Generator to use to compute the permutation of the dataset rows. If generator=None (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
- keep_in_memory (bool, defaults to False) — Keep the shuffled indices in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the shuffled indices can be identified, use it instead of recomputing.
- indices_cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the shuffled indices instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- new_fingerprint (str, optional, defaults to None) — The new fingerprint of the dataset after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Create a new Dataset where the rows are shuffled.
Currently shuffling uses numpy random generators. You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy’s default random generator (PCG64).
Shuffling takes the list of indices [0:len(my_dataset)]
and shuffles it to create an indices mapping.
However as soon as your Dataset has an indices mapping, the speed can become 10x slower.
This is because there is an extra step to get the row index to read using the indices mapping, and most importantly, you aren’t reading contiguous chunks of data anymore.
To restore the speed, you’d need to rewrite the entire dataset on your disk again using Dataset.flatten_indices(), which removes the indices mapping.
This may take a lot of time depending on the size of your dataset though:
my_dataset[0] # fast
my_dataset = my_dataset.shuffle(seed=42)
my_dataset[0] # up to 10x slower
my_dataset = my_dataset.flatten_indices() # rewrite the shuffled dataset on disk as contiguous chunks of data
my_dataset[0] # fast again
In this case, we recommend switching to an IterableDataset and leveraging its fast approximate shuffling method IterableDataset.shuffle().
It only shuffles the shards order and adds a shuffle buffer to your dataset, which keeps the speed of your dataset optimal:
my_iterable_dataset = my_dataset.to_iterable_dataset(num_shards=128)
for example in my_iterable_dataset:  # fast
    pass

shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed=42, buffer_size=100)

for example in shuffled_iterable_dataset:  # as fast as before
    pass
Create a new Dataset that skips the first n elements.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> list(ds.take(3))
[{'label': 1,
'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},
{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'},
{'label': 1, 'text': 'effective but too-tepid biopic'}]
>>> ds = ds.skip(1)
>>> list(ds.take(3))
[{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'},
{'label': 1, 'text': 'effective but too-tepid biopic'},
{'label': 1,
'text': 'if you sometimes like to go to the movies to have fun , wasabi is a good place to start .'}]
Create a new Dataset with only the first n elements.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> small_ds = ds.take(2)
>>> list(small_ds)
[{'label': 1,
'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},
{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'}]
( test_size: typing.Union[float, int, NoneType] = None train_size: typing.Union[float, int, NoneType] = None shuffle: bool = True stratify_by_column: typing.Optional[str] = None seed: typing.Optional[int] = None generator: typing.Optional[numpy.random._generator.Generator] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None train_indices_cache_file_name: typing.Optional[str] = None test_indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 train_new_fingerprint: typing.Optional[str] = None test_new_fingerprint: typing.Optional[str] = None )
Parameters

- test_size (float or int, optional) — Size of the test split. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is set to the complement of the train size. If train_size is also None, it will be set to 0.25.
- train_size (float or int, optional) — Size of the train split. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.
- shuffle (bool, optional, defaults to True) — Whether or not to shuffle the data before splitting.
- stratify_by_column (str, optional, defaults to None) — The column name of labels to be used to perform stratified split of data.
- seed (int, optional) — A seed to initialize the default BitGenerator if generator=None. If None, then fresh, unpredictable entropy will be pulled from the OS. If an int or array_like[ints] is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
- generator (numpy.random.Generator, optional) — Numpy random Generator to use to compute the permutation of the dataset rows. If generator=None (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
- keep_in_memory (bool, defaults to False) — Keep the splits indices in memory instead of writing it to a cache file.
- load_from_cache_file (Optional[bool], defaults to True if caching is enabled) — If a cache file storing the splits indices can be identified, use it instead of recomputing.
- train_indices_cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the train split indices instead of the automatically generated cache file name.
- test_indices_cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the test split indices instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.
- train_new_fingerprint (str, optional, defaults to None) — The new fingerprint of the train set after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.
- test_new_fingerprint (str, optional, defaults to None) — The new fingerprint of the test set after transform. If None, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments.

Return a dictionary (datasets.DatasetDict) with two random train and test subsets (train and test Dataset splits).

Splits are created from the dataset according to test_size, train_size and shuffle.

This method is similar to scikit-learn train_test_split.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds = ds.train_test_split(test_size=0.2, shuffle=True)
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 852
})
test: Dataset({
features: ['text', 'label'],
num_rows: 214
})
})
# set a seed
>>> ds = ds.train_test_split(test_size=0.2, seed=42)
# stratified split
>>> ds = load_dataset("imdb",split="train")
Dataset({
features: ['text', 'label'],
num_rows: 25000
})
>>> ds = ds.train_test_split(test_size=0.2, stratify_by_column="label")
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 20000
})
test: Dataset({
features: ['text', 'label'],
num_rows: 5000
})
})
( num_shards: int index: int contiguous: bool = True keep_in_memory: bool = False indices_cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 )
Parameters

- num_shards (int) — How many shards to split the dataset into.
- index (int) — Which shard to select and return.
- contiguous (bool, defaults to True) — Whether to select contiguous blocks of indices for shards.
- keep_in_memory (bool, defaults to False) — Keep the dataset in memory instead of writing it to a cache file.
- indices_cache_file_name (str, optional) — Provide the name of a path for the cache file. It is used to store the indices of each shard instead of the automatically generated cache file name.
- writer_batch_size (int, defaults to 1000) — This only concerns the indices mapping. Number of indices per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running map.

Return the index-nth shard from dataset split into num_shards pieces.

This shards deterministically. dataset.shard(n, i) splits the dataset into contiguous chunks, so it can be easily concatenated back together after processing. If len(dataset) % n == l, then the first l shards each have length (len(dataset) // n) + 1, and the remaining shards have length (len(dataset) // n). datasets.concatenate_datasets([dset.shard(n, i) for i in range(n)]) returns a dataset with the same order as the original.

Note: n should be less or equal to the number of elements in the dataset len(dataset).

On the other hand, dataset.shard(n, i, contiguous=False) contains all elements of the dataset whose index mod n = i.

Be sure to shard before using any randomizing operator (such as shuffle). It is best if the shard operator is used early in the dataset pipeline.
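Example (a short usage sketch; the row count shown assumes the 1,066-row rotten_tomatoes validation split, so the first of two contiguous shards holds 533 rows):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds.shard(num_shards=2, index=0)
Dataset({
    features: ['text', 'label'],
    num_rows: 533
})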
( batch_size: typing.Optional[int] = None columns: typing.Union[str, typing.List[str], NoneType] = None shuffle: bool = False collate_fn: typing.Optional[typing.Callable] = None drop_remainder: bool = False collate_fn_args: typing.Optional[typing.Dict[str, typing.Any]] = None label_cols: typing.Union[str, typing.List[str], NoneType] = None prefetch: bool = True num_workers: int = 0 num_test_batches: int = 20 )
Parameters

- batch_size (int, optional) — Size of batches to load from the dataset. Defaults to None, which implies that the dataset won't be batched, but the returned dataset can be batched later with tf_dataset.batch(batch_size).
- columns (List[str] or str, optional) — Dataset column(s) to load in the tf.data.Dataset. Column names that are created by the collate_fn and that do not exist in the original dataset can be used.
- shuffle (bool, defaults to False) — Shuffle the dataset order when loading. Recommended True for training, False for validation/evaluation.
- collate_fn (Callable, optional) — A function or callable object (such as a DataCollator) that will collate lists of samples into a batch.
- drop_remainder (bool, defaults to False) — Drop the last incomplete batch when loading. Ensures that all batches yielded by the dataset will have the same length on the batch dimension.
- collate_fn_args (Dict, optional) — An optional dict of keyword arguments to be passed to the collate_fn.
- label_cols (List[str] or str, defaults to None) — Dataset column(s) to load as labels. Note that many models compute loss internally rather than letting Keras do it, in which case passing the labels here is optional, as long as they're in the input columns.
- prefetch (bool, defaults to True) — Whether to run the dataloader in a separate thread and maintain a small buffer of batches for training. Improves performance by allowing data to be loaded in the background while the model is training.
- num_workers (int, defaults to 0) — Number of workers to use for loading the dataset. Only supported on Python versions >= 3.8.
- num_test_batches (int, defaults to 20) — Number of batches to use to infer the output signature of the dataset. The higher this number, the more accurate the signature will be, but the longer it will take to create the dataset.

Create a tf.data.Dataset from the underlying Dataset. This tf.data.Dataset will load and collate batches from the Dataset, and is suitable for passing to methods like model.fit() or model.predict(). The dataset will yield dicts for both inputs and labels unless the dict would contain only a single key, in which case a raw tf.Tensor is yielded instead.
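Example (a minimal sketch; it assumes ds is a tokenized DatasetDict with the listed columns and that data_collator is a collator, e.g. from transformers, defined elsewhere — both are assumptions, not part of this reference):
>>> tf_ds = ds["train"].to_tf_dataset(
...     columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )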
( repo_id: str config_name: str = 'default' set_default: typing.Optional[bool] = None split: typing.Optional[str] = None data_dir: typing.Optional[str] = None commit_message: typing.Optional[str] = None commit_description: typing.Optional[str] = None private: typing.Optional[bool] = False token: typing.Optional[str] = None revision: typing.Optional[str] = None create_pr: typing.Optional[bool] = False max_shard_size: typing.Union[str, int, NoneType] = None num_shards: typing.Optional[int] = None embed_external_files: bool = True )
Parameters
str
) —
The ID of the repository to push to in the following format: <user>/<dataset_name>
or
<org>/<dataset_name>
. Also accepts <dataset_name>
, which will default to the namespace
of the logged-in user. str
, defaults to “default”) —
The configuration name (or subset) of a dataset. Defaults to “default”. bool
, optional) —
Whether to set this configuration as the default one. Otherwise, the default configuration is the one
named “default”. str
, optional) —
The name of the split that will be given to that dataset. Defaults to self.split
. str
, optional) —
Directory name that will contain the uploaded data files. Defaults to the config_name
if different
from “default”, else “data”.
Added in 2.17.0
str
, optional) —
Message to commit while pushing. Will default to "Upload dataset"
. str
, optional) —
Description of the commit that will be created.
Additionally, description of the PR if a PR is created (create_pr
is True).
Added in 2.16.0
bool
, optional, defaults to False
) —
Whether the dataset repository should be set to private or not. Only affects repository creation:
a repository that already exists will not be affected by that parameter. str
, optional) —
An optional authentication token for the Hugging Face Hub. If no token is passed, will default
to the token saved locally when logging in with huggingface-cli login
. Will raise an error
if no token is passed and the user is not logged-in. str
, optional) —
Branch to push the uploaded files to. Defaults to the "main"
branch.
Added in 2.15.0
bool
, optional, defaults to False
) —
Whether to create a PR with the uploaded files or directly commit.
Added in 2.15.0
int
or str
, optional, defaults to "500MB"
) —
The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by
a unit (like "5MB"
). int
, optional) —
Number of shards to write. By default, the number of shards depends on max_shard_size
.
Added in 2.8.0
bool
, defaults to True
) —
Whether to embed file bytes in the shards.
In particular, for Audio, Image and Video fields, local path information is removed and the file content is embedded in the Parquet files before the push.
Pushes the dataset to the Hub as a Parquet dataset. The dataset is pushed using HTTP requests and does not require git or git-lfs to be installed.
The resulting Parquet files are self-contained by default. If your dataset contains Image, Audio or Video
data, the Parquet files will store the bytes of your images or audio files.
You can disable this by setting embed_external_files
to False
.
Example:
>>> dataset.push_to_hub("<organization>/<dataset_id>")
>>> dataset_dict.push_to_hub("<organization>/<dataset_id>", private=True)
>>> dataset.push_to_hub("<organization>/<dataset_id>", max_shard_size="1GB")
>>> dataset.push_to_hub("<organization>/<dataset_id>", num_shards=1024)
If your dataset has multiple splits (e.g. train/validation/test):
>>> train_dataset.push_to_hub("<organization>/<dataset_id>", split="train")
>>> val_dataset.push_to_hub("<organization>/<dataset_id>", split="validation")
>>> # later
>>> dataset = load_dataset("<organization>/<dataset_id>")
>>> train_dataset = dataset["train"]
>>> val_dataset = dataset["validation"]
If you want to add a new configuration (or subset) to a dataset (e.g. if the dataset has multiple tasks/versions/languages):
>>> english_dataset.push_to_hub("<organization>/<dataset_id>", "en")
>>> french_dataset.push_to_hub("<organization>/<dataset_id>", "fr")
>>> # later
>>> english_dataset = load_dataset("<organization>/<dataset_id>", "en")
>>> french_dataset = load_dataset("<organization>/<dataset_id>", "fr")
( dataset_path: typing.Union[str, bytes, os.PathLike] max_shard_size: typing.Union[str, int, NoneType] = None num_shards: typing.Optional[int] = None num_proc: typing.Optional[int] = None storage_options: typing.Optional[dict] = None )
Parameters
path-like
) —
Path (e.g. dataset/train
) or remote URI (e.g. s3://my-bucket/dataset/train
)
of the dataset directory where the dataset will be saved to. int
or str
, optional, defaults to "500MB"
) —
The maximum size of the dataset shards to be written to the dataset directory. If expressed as a string, needs to be digits followed by a unit
(like "50MB"
). int
, optional) —
Number of shards to write. By default the number of shards depends on max_shard_size
and num_proc
.
Added in 2.8.0
int
, optional) —
Number of processes when downloading and generating the dataset locally.
Multiprocessing is disabled by default.
Added in 2.8.0
dict
, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.8.0
Saves a dataset to a dataset directory, or in a filesystem using any implementation of fsspec.spec.AbstractFileSystem
.
For Image, Audio and Video data:
All the Image(), Audio() and Video() data are stored in the arrow files. If you want to store paths or urls, please use the Value("string") type.
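Example (illustrative; the paths are placeholders, and storage_options is assumed to be a dict configured for your remote filesystem):
>>> ds.save_to_disk("path/to/dataset/directory")
>>> # or save to a remote filesystem via fsspec
>>> ds.save_to_disk("s3://my-bucket/dataset/train", storage_options=storage_options)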
( dataset_path: typing.Union[str, bytes, os.PathLike] keep_in_memory: typing.Optional[bool] = None storage_options: typing.Optional[dict] = None ) → Dataset or DatasetDict
Parameters
path-like
) —
Path (e.g. "dataset/train"
) or remote URI (e.g. "s3://my-bucket/dataset/train"
)
of the dataset directory where the dataset will be loaded from. bool
, defaults to None
) —
Whether to copy the dataset in-memory. If None
, the
dataset will not be copied in-memory unless explicitly enabled by setting
datasets.config.IN_MEMORY_MAX_SIZE
to nonzero. See more details in the
improve performance section. dict
, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.8.0
Returns
Dataset or DatasetDict
If dataset_path is the path of a dataset directory: the dataset requested.
If dataset_path is the path of a dataset dict directory: a datasets.DatasetDict with each split.
Loads a dataset that was previously saved using save_to_disk from a dataset directory, or from a filesystem using any implementation of fsspec.spec.AbstractFileSystem.
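Example (illustrative; the path is a placeholder):
>>> from datasets import load_from_disk
>>> ds = load_from_disk("path/to/dataset/directory")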
( keep_in_memory: bool = False cache_file_name: typing.Optional[str] = None writer_batch_size: typing.Optional[int] = 1000 features: typing.Optional[datasets.features.features.Features] = None disable_nullable: bool = False num_proc: typing.Optional[int] = None new_fingerprint: typing.Optional[str] = None )
Parameters
bool
, defaults to False
) —
Keep the dataset in memory instead of writing it to a cache file. str
, optional, default None
) —
Provide the name of a path for the cache file. It is used to store the
results of the computation instead of the automatically generated cache file name. int
, defaults to 1000
) —
Number of rows per write operation for the cache file writer.
This value is a good trade-off between memory usage during the processing, and processing speed.
A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map
. Optional[datasets.Features]
, defaults to None
) —
Use a specific Features to store the cache file
instead of the automatically generated one. bool
, defaults to False
) —
Allow null values in the table. int
, optional, default None
) —
Max number of processes when generating cache. Already cached shards are loaded sequentially str
, optional, defaults to None
) —
The new fingerprint of the dataset after transform.
If None
, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Create and cache a new Dataset by flattening the indices mapping.
( path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO] batch_size: typing.Optional[int] = None num_proc: typing.Optional[int] = None storage_options: typing.Optional[dict] = None **to_csv_kwargs ) → int
Parameters
PathLike
or FileOrBuffer
) —
Either a path to a file (e.g. file.csv
), a remote URI (e.g. hf://datasets/username/my_dataset_name/data.csv
),
or a BinaryIO, where the dataset will be saved to in the specified format. int
, optional) —
Size of the batch to load in memory and write at once.
Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE
. int
, optional) —
Number of processes for multiprocessing. By default it doesn’t
use multiprocessing. batch_size
in this case defaults to
datasets.config.DEFAULT_MAX_BATCH_SIZE
but feel free to make it 5x or 10x of the default
value if you have sufficient compute power. dict
, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.19.0
pandas.DataFrame.to_csv
.
Changed in 2.10.0
Now, index
defaults to False
if not specified.
If you would like to write the index, pass index=True
and also set a name for the index column by
passing index_label
.
Returns
int
The number of characters or bytes written.
Exports the dataset to CSV.
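Example (illustrative; the file name is a placeholder):
>>> ds.to_csv("path/to/dataset.csv")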
( batch_size: typing.Optional[int] = None batched: bool = False )
Parameters
bool
) —
Set to True
to return a generator that yields the dataset as batches
of batch_size
rows. Defaults to False
(returns the whole datasets once). int
, optional) —
The size (number of rows) of the batches if batched
is True
.
Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE
. Returns the dataset as a pandas.DataFrame
. Can also return a generator for large datasets.
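Example (illustrative):
>>> df = ds.to_pandas()
>>> # or iterate over DataFrame chunks for large datasets
>>> for df_chunk in ds.to_pandas(batch_size=1000, batched=True):
...     pass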
( batch_size: typing.Optional[int] = None )
Returns the dataset as a Python dict. Can also return a generator for large datasets.
( path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO] batch_size: typing.Optional[int] = None num_proc: typing.Optional[int] = None storage_options: typing.Optional[dict] = None **to_json_kwargs ) → int
Parameters
PathLike
or FileOrBuffer
) —
Either a path to a file (e.g. file.json
), a remote URI (e.g. hf://datasets/username/my_dataset_name/data.json
),
or a BinaryIO, where the dataset will be saved to in the specified format. int
, optional) —
Size of the batch to load in memory and write at once.
Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE
. int
, optional) —
Number of processes for multiprocessing. By default, it doesn’t
use multiprocessing. batch_size
in this case defaults to
datasets.config.DEFAULT_MAX_BATCH_SIZE
but feel free to make it 5x or 10x of the default
value if you have sufficient compute power. dict
, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.19.0
pandas.DataFrame.to_json
.
Default arguments are lines=True
and orient="records".
Changed in 2.11.0
The parameter index
defaults to False
if orient
is "split"
or "table"
.
If you would like to write the index, pass index=True
.
Returns
int
The number of characters or bytes written.
Export the dataset to JSON Lines or JSON.
The default output format is JSON Lines.
To export to JSON, pass the lines=False argument and the desired orient.
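Example (illustrative; the file names are placeholders):
>>> ds.to_json("path/to/dataset.jsonl")  # JSON Lines by default
>>> ds.to_json("path/to/dataset.json", lines=False, orient="records")  # plain JSON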
( path_or_buf: typing.Union[str, bytes, os.PathLike, typing.BinaryIO] batch_size: typing.Optional[int] = None storage_options: typing.Optional[dict] = None **parquet_writer_kwargs ) → int
Parameters
PathLike
or FileOrBuffer
) —
Either a path to a file (e.g. file.parquet
), a remote URI (e.g. hf://datasets/username/my_dataset_name/data.parquet
),
or a BinaryIO, where the dataset will be saved to in the specified format. int
, optional) —
Size of the batch to load in memory and write at once.
Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE
. dict
, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.19.0
pyarrow.parquet.ParquetWriter
. Returns
int
The number of characters or bytes written.
Exports the dataset to Parquet.
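Example (illustrative; the file name is a placeholder):
>>> ds.to_parquet("path/to/dataset.parquet")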
( name: str con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')] batch_size: typing.Optional[int] = None **sql_writer_kwargs ) → int
Parameters
str
) —
Name of SQL table. str
or sqlite3.Connection
or sqlalchemy.engine.Connection
or sqlalchemy.engine.Engine
) —
A URI string or a SQLite3/SQLAlchemy connection object used to write to a database. int
, optional) —
Size of the batch to load in memory and write at once.
Defaults to datasets.config.DEFAULT_MAX_BATCH_SIZE
. pandas.DataFrame.to_sql
.
Changed in 2.11.0
Now, index
defaults to False
if not specified.
If you would like to write the index, pass index=True
and also set a name for the index column by
passing index_label
.
Returns
int
The number of records written.
Exports the dataset to a SQL database.
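Example (illustrative; the table and database names are placeholders):
>>> import sqlite3
>>> con = sqlite3.connect("my_dataset.db")
>>> ds.to_sql("my_table", con)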
( num_shards: typing.Optional[int] = 1 )
Parameters
int
, defaults to 1
) —
Number of shards to define when instantiating the iterable dataset. This is especially useful for big datasets to be able to shuffle properly,
and also to enable fast parallel loading using a PyTorch DataLoader or in distributed setups for example.
Shards are defined using datasets.Dataset.shard(): it simply slices the data without writing anything on disk. Get a datasets.IterableDataset from a map-style datasets.Dataset. This is equivalent to loading a dataset in streaming mode with datasets.load_dataset(), but much faster since the data is streamed from local files.
Contrary to map-style datasets, iterable datasets are lazy and can only be iterated over (e.g. using a for loop). Since they are read sequentially in training loops, iterable datasets are much faster than map-style datasets. All the transformations applied to iterable datasets like filtering or processing are done on-the-fly when you start iterating over the dataset.
Still, it is possible to shuffle an iterable dataset using datasets.IterableDataset.shuffle(). This is a fast approximate shuffling that works best if you have multiple shards and if you specify a buffer size that is big enough.
To get the best speed performance, make sure your dataset doesn’t have an indices mapping.
If this is the case, the data are not read contiguously, which can be slow sometimes.
You can use ds = ds.flatten_indices()
to write your dataset in contiguous chunks of data and have optimal speed before switching to an iterable dataset.
Example:
With lazy filtering and processing:
>>> ids = ds.to_iterable_dataset()
>>> ids = ids.filter(filter_fn).map(process_fn) # will filter and process on-the-fly when you start iterating over the iterable dataset
>>> for example in ids:
... pass
With sharding to enable efficient shuffling:
>>> ids = ds.to_iterable_dataset(num_shards=64) # the dataset is split into 64 shards to be iterated over
>>> ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer for fast approximate shuffling when you start iterating
>>> for example in ids:
... pass
With a PyTorch DataLoader:
>>> import torch
>>> ids = ds.to_iterable_dataset(num_shards=64)
>>> ids = ids.filter(filter_fn).map(process_fn)
>>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards to each worker to load, filter and process when you start iterating
>>> for example in ids:
... pass
With a PyTorch DataLoader and shuffling:
>>> import torch
>>> ids = ds.to_iterable_dataset(num_shards=64)
>>> ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating
>>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from the shuffled list of shards to each worker when you start iterating
>>> for example in ids:
... pass
In a distributed setup like PyTorch DDP with a PyTorch DataLoader and shuffling
>>> from datasets.distributed import split_dataset_by_node
>>> ids = ds.to_iterable_dataset(num_shards=512)
>>> ids = ids.shuffle(buffer_size=10_000, seed=42) # will shuffle the shards order and use a shuffle buffer when you start iterating
>>> ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating
>>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating
>>> for example in ids:
... pass
With shuffling and multiple epochs:
>>> ids = ds.to_iterable_dataset(num_shards=64)
>>> ids = ids.shuffle(buffer_size=10_000, seed=42) # will shuffle the shards order and use a shuffle buffer when you start iterating
>>> for epoch in range(n_epochs):
... ids.set_epoch(epoch) # will use effective_seed = seed + epoch to shuffle the shards and for the shuffle buffer when you start iterating
... for example in ids:
... pass
( column: str index_name: typing.Optional[str] = None device: typing.Optional[int] = None string_factory: typing.Optional[str] = None metric_type: typing.Optional[int] = None custom_index: typing.Optional[ForwardRef('faiss.Index')] = None batch_size: int = 1000 train_size: typing.Optional[int] = None faiss_verbose: bool = False dtype = <class 'numpy.float32'> )
Parameters
str
) —
The column of the vectors to add to the index. str
, optional) —
The index_name
/identifier of the index.
This is the index_name
that is used to call get_nearest_examples() or search().
By default it corresponds to column
. Union[int, List[int]]
, optional) —
If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs.
If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. str
, optional) —
This is passed to the index factory of Faiss to create the index.
Default index class is IndexFlat
. int
, optional) —
Type of metric. Ex: faiss.METRIC_INNER_PRODUCT
or faiss.METRIC_L2
. faiss.Index
, optional) —
Custom Faiss index that you already have instantiated and configured for your needs. int
) —
Size of the batch to use while adding vectors to the FaissIndex
. Default value is 1000
.Added in 2.4.0
int
, optional) —
If the index needs a training step, specifies how many vectors will be used to train the index. bool
, defaults to False
) —
Enable the verbosity of the Faiss index. data-type
) —
The dtype of the numpy arrays that are indexed.
Default is np.float32
. Add a dense index using Faiss for fast retrieval.
By default the index is done over the vectors of the specified column.
You can specify device
if you want to run it on GPU (device
must be the GPU index).
You can find more information about Faiss in the Faiss documentation.
Example:
>>> ds = datasets.load_dataset('crime_and_punish', split='train')
>>> ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line'])})
>>> ds_with_embeddings.add_faiss_index(column='embeddings')
>>> # query
>>> scores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)
>>> # save index
>>> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')
>>> ds = datasets.load_dataset('crime_and_punish', split='train')
>>> # load index
>>> ds.load_faiss_index('embeddings', 'my_index.faiss')
>>> # query
>>> scores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)
( external_arrays: <built-in function array> index_name: str device: typing.Optional[int] = None string_factory: typing.Optional[str] = None metric_type: typing.Optional[int] = None custom_index: typing.Optional[ForwardRef('faiss.Index')] = None batch_size: int = 1000 train_size: typing.Optional[int] = None faiss_verbose: bool = False dtype = <class 'numpy.float32'> )
Parameters
np.array
) —
If you want to use arrays from outside the lib for the index, you can set external_arrays
.
It will use external_arrays
to create the Faiss index instead of the arrays in the given column
. str
) —
The index_name
/identifier of the index.
This is the index_name
that is used to call get_nearest_examples() or search(). Union[int, List[int]]
, optional) —
If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs.
If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. str
, optional) —
This is passed to the index factory of Faiss to create the index.
Default index class is IndexFlat
. int
, optional) —
Type of metric. Ex: faiss.METRIC_INNER_PRODUCT
or faiss.METRIC_L2
. faiss.Index
, optional) —
Custom Faiss index that you already have instantiated and configured for your needs. int
, optional) —
Size of the batch to use while adding vectors to the FaissIndex. Default value is 1000.Added in 2.4.0
int
, optional) —
If the index needs a training step, specifies how many vectors will be used to train the index. bool
, defaults to False) —
Enable the verbosity of the Faiss index. numpy.dtype
) —
The dtype of the numpy arrays that are indexed. Default is np.float32. Add a dense index using Faiss for fast retrieval.
The index is created using the vectors of external_arrays
.
You can specify device
if you want to run it on GPU (device
must be the GPU index).
You can find more information about Faiss in the Faiss documentation.
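Example (illustrative; the vectors are random and the index name is a placeholder):
>>> import numpy as np
>>> vectors = np.random.rand(len(ds), 128).astype(np.float32)
>>> ds.add_faiss_index_from_external_arrays(external_arrays=vectors, index_name="my_index")
>>> scores, retrieved_examples = ds.get_nearest_examples("my_index", vectors[0], k=5)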
( index_name: str file: typing.Union[str, pathlib.PurePath] storage_options: typing.Optional[typing.Dict] = None )
Parameters
str
) — The index_name/identifier of the index. This is the index_name that is used to call .get_nearest
or .search
. str
) — The path to the serialized faiss index on disk or remote URI (e.g. "s3://my-bucket/index.faiss"
). dict
, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.11.0
Save a FaissIndex on disk.
( index_name: str file: typing.Union[str, pathlib.PurePath] device: typing.Union[int, typing.List[int], NoneType] = None storage_options: typing.Optional[typing.Dict] = None )
Parameters
str
) — The index_name/identifier of the index. This is the index_name that is used to
call .get_nearest
or .search
. str
) — The path to the serialized faiss index on disk or remote URI (e.g. "s3://my-bucket/index.faiss"
). Union[int, List[int]]
) — If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs.
If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. dict
, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.11.0
Load a FaissIndex from disk.
If you want to do additional configurations, you can have access to the faiss index object by doing
.get_index(index_name).faiss_index
to make it fit your needs.
( column: str index_name: typing.Optional[str] = None host: typing.Optional[str] = None port: typing.Optional[int] = None es_client: typing.Optional[ForwardRef('elasticsearch.Elasticsearch')] = None es_index_name: typing.Optional[str] = None es_index_config: typing.Optional[dict] = None )
Parameters
str
) —
The column of the documents to add to the index. str
, optional) —
The index_name
/identifier of the index.
This is the index name that is used to call get_nearest_examples() or search().
By default it corresponds to column
. str
, optional, defaults to localhost
) —
Host of where ElasticSearch is running. str
, optional, defaults to 9200
) —
Port of where ElasticSearch is running. elasticsearch.Elasticsearch
, optional) —
The elasticsearch client used to create the index if host and port are None
. str
, optional) —
The elasticsearch index name used to create the index. dict
, optional) —
The configuration of the elasticsearch index.
If not provided, a default configuration is used. Add a text index using ElasticSearch for fast retrieval. This is done in-place.
Example:
>>> es_client = elasticsearch.Elasticsearch()
>>> ds = datasets.load_dataset('crime_and_punish', split='train')
>>> ds.add_elasticsearch_index(column='line', es_client=es_client, es_index_name="my_es_index")
>>> scores, retrieved_examples = ds.get_nearest_examples('line', 'my new query', k=10)
( index_name: str es_index_name: str host: typing.Optional[str] = None port: typing.Optional[int] = None es_client: typing.Optional[ForwardRef('Elasticsearch')] = None es_index_config: typing.Optional[dict] = None )
Parameters
str
) —
The index_name
/identifier of the index. This is the index name that is used to call get_nearest
or search
. str
) —
The name of elasticsearch index to load. str
, optional, defaults to localhost
) —
Host of where ElasticSearch is running. str
, optional, defaults to 9200
) —
Port of where ElasticSearch is running. elasticsearch.Elasticsearch
, optional) —
The elasticsearch client used to create the index if host and port are None
. dict
, optional) —
The configuration of the elasticsearch index.
If not provided, a default configuration is used. Load an existing text index using ElasticSearch for fast retrieval.
List the index_name
/identifiers of all the attached indexes.
( index_name: str )
Drop the index with the specified index_name.
( index_name: str query: typing.Union[str, <built-in function array>] k: int = 10 **kwargs ) → (scores, indices)
Parameters
str
) —
The name/identifier of the index. Union[str, np.ndarray]
) —
The query as a string if index_name
is a text index or as a numpy array if index_name
is a vector index. int
) —
The number of examples to retrieve. Returns
(scores, indices)
A tuple of (scores, indices)
where:
scores (List[List[float]]): the retrieval scores from either FAISS (IndexFlatL2 by default) or ElasticSearch of the retrieved examples
indices (List[List[int]]): the indices of the retrieved examples
Find the nearest examples indices in the dataset to the query.
( index_name: str queries: typing.Union[typing.List[str], <built-in function array>] k: int = 10 **kwargs ) → (total_scores, total_indices)
Parameters
str
) —
The index_name
/identifier of the index. Union[List[str], np.ndarray]
) —
The queries as a list of strings if index_name
is a text index or as a numpy array if index_name
is a vector index. int
) —
The number of examples to retrieve per query. Returns
(total_scores, total_indices)
A tuple of (total_scores, total_indices)
where:
total_scores (List[List[float]]): the retrieval scores from either FAISS (IndexFlatL2 by default) or ElasticSearch of the retrieved examples per query
total_indices (List[List[int]]): the indices of the retrieved examples per query
Find the nearest examples indices in the dataset to the queries.
( index_name: str query: typing.Union[str, <built-in function array>] k: int = 10 **kwargs ) → (scores, examples)
Parameters
str
) —
The index_name/identifier of the index. Union[str, np.ndarray]
) —
The query as a string if index_name
is a text index or as a numpy array if index_name
is a vector index. int
) —
The number of examples to retrieve. Returns
(scores, examples)
A tuple of (scores, examples)
where:
scores (List[float]): the retrieval scores from either FAISS (IndexFlatL2 by default) or ElasticSearch of the retrieved examples
examples (dict): the retrieved examples
Find the nearest examples in the dataset to the query.
( index_name: str queries: typing.Union[typing.List[str], <built-in function array>] k: int = 10 **kwargs ) → (total_scores, total_examples)
Parameters
str
) —
The index_name
/identifier of the index. Union[List[str], np.ndarray]
) —
The queries as a list of strings if index_name
is a text index or as a numpy array if index_name
is a vector index. int
) —
The number of examples to retrieve per query. Returns
(total_scores, total_examples)
A tuple of (total_scores, total_examples)
where:
total_scores (List[List[float]]): the retrieval scores from either FAISS (IndexFlatL2 by default) or ElasticSearch of the retrieved examples per query
total_examples (List[dict]): the retrieved examples per query
Find the nearest examples in the dataset to the queries.
DatasetInfo object containing all the metadata in the dataset.
NamedSplit object corresponding to a named dataset split.
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]] split: typing.Optional[datasets.splits.NamedSplit] = None features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False num_proc: typing.Optional[int] = None **kwargs )
Parameters
path-like
or list of path-like
) —
Path(s) of the CSV file(s). str
, optional, defaults to "~/.cache/huggingface/datasets"
) —
Directory to cache data. bool
, defaults to False
) —
Whether to copy the data in-memory. int
, optional, defaults to None
) —
Number of processes when downloading and generating the dataset locally.
This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.
Added in 2.8.0
pandas.read_csv
. Create Dataset from CSV file(s).
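Example (illustrative; the file path is a placeholder):
>>> from datasets import Dataset
>>> ds = Dataset.from_csv("path/to/dataset.csv")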
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]] split: typing.Optional[datasets.splits.NamedSplit] = None features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False field: typing.Optional[str] = None num_proc: typing.Optional[int] = None **kwargs )
Parameters
path-like
or list of path-like
) —
Path(s) of the JSON or JSON Lines file(s). str
, optional, defaults to "~/.cache/huggingface/datasets"
) —
Directory to cache data. bool
, defaults to False
) —
Whether to copy the data in-memory. str
, optional) —
Field name of the JSON file containing the dataset. int
, optional, defaults to None
) —
Number of processes when downloading and generating the dataset locally.
This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.
Added in 2.8.0
JsonConfig
. Create Dataset from JSON or JSON Lines file(s).
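Example (illustrative; the file path is a placeholder):
>>> from datasets import Dataset
>>> ds = Dataset.from_json("path/to/dataset.jsonl")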
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]] split: typing.Optional[datasets.splits.NamedSplit] = None features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False columns: typing.Optional[typing.List[str]] = None num_proc: typing.Optional[int] = None **kwargs )
Parameters
path-like
or list of path-like
) —
Path(s) of the Parquet file(s). NamedSplit
, optional) —
Split name to be assigned to the dataset. Features
, optional) —
Dataset features. str
, optional, defaults to "~/.cache/huggingface/datasets"
) —
Directory to cache data. bool
, defaults to False
) —
Whether to copy the data in-memory. List[str]
, optional) —
If not None
, only these columns will be read from the file.
A column name may be a prefix of a nested field, e.g. ‘a’ will select
‘a.b’, ‘a.c’, and ‘a.d.e’. int
, optional, defaults to None
) —
Number of processes when downloading and generating the dataset locally.
This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.
Added in 2.8.0
ParquetConfig
. Create Dataset from Parquet file(s).
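Example (illustrative; the file path is a placeholder):
>>> from datasets import Dataset
>>> ds = Dataset.from_parquet("path/to/dataset.parquet")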
( path_or_paths: typing.Union[str, bytes, os.PathLike, typing.List[typing.Union[str, bytes, os.PathLike]]] split: typing.Optional[datasets.splits.NamedSplit] = None features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False num_proc: typing.Optional[int] = None **kwargs )
Parameters
path-like
or list of path-like
) —
Path(s) of the text file(s). NamedSplit
, optional) —
Split name to be assigned to the dataset. Features
, optional) —
Dataset features. str
, optional, defaults to "~/.cache/huggingface/datasets"
) —
Directory to cache data. bool
, defaults to False
) —
Whether to copy the data in-memory. int
, optional, defaults to None
) —
Number of processes when downloading and generating the dataset locally.
This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default.
Added in 2.8.0
TextConfig
. Create Dataset from text file(s).
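Example (illustrative; the file path is a placeholder):
>>> from datasets import Dataset
>>> ds = Dataset.from_text("path/to/dataset.txt")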
( sql: typing.Union[str, ForwardRef('sqlalchemy.sql.Selectable')] con: typing.Union[str, ForwardRef('sqlalchemy.engine.Connection'), ForwardRef('sqlalchemy.engine.Engine'), ForwardRef('sqlite3.Connection')] features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False **kwargs )
Parameters
str
or sqlalchemy.sql.Selectable
) —
SQL query to be executed or a table name. str
or sqlite3.Connection
or sqlalchemy.engine.Connection
or sqlalchemy.engine.Engine
) —
A URI string used to instantiate a database connection or a SQLite3/SQLAlchemy connection object. str
, optional, defaults to "~/.cache/huggingface/datasets"
) —
Directory to cache data. bool
, defaults to False
) —
Whether to copy the data in-memory. SqlConfig
. Create Dataset from SQL query or database table.
Example:
>>> # Fetch a database table
>>> ds = Dataset.from_sql("test_data", "postgres:///db_name")
>>> # Execute a SQL query on the table
>>> ds = Dataset.from_sql("SELECT sentence FROM test_data", "postgres:///db_name")
>>> # Use a Selectable object to specify the query
>>> from sqlalchemy import select, text
>>> stmt = select(text("sentence")).select_from(text("test_data"))
>>> ds = Dataset.from_sql(stmt, "postgres:///db_name")
The returned dataset can only be cached if con
is specified as a URI string.
( label2id: typing.Dict label_column: str )
Align the dataset’s label ID and label name mapping to match an input label2id
mapping.
This is useful when you want to ensure that a model’s predicted labels are aligned with the dataset.
The alignment is done using the lowercase label names.
Example:
>>> # dataset with mapping {'entailment': 0, 'neutral': 1, 'contradiction': 2}
>>> ds = load_dataset("glue", "mnli", split="train")
>>> # mapping to align with
>>> label2id = {'CONTRADICTION': 0, 'NEUTRAL': 1, 'ENTAILMENT': 2}
>>> ds_aligned = ds.align_labels_with_mapping(label2id, "label")
( dsets: typing.List[~DatasetType] info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None axis: int = 0 )
Parameters
List[datasets.Dataset]
) —
List of Datasets to concatenate. DatasetInfo
, optional) —
Dataset information, like description, citation, etc. NamedSplit
, optional) —
Name of the dataset split. {0, 1}
, defaults to 0
) —
Axis to concatenate over, where 0
means over rows (vertically) and 1
means over columns
(horizontally).
Added in 1.6.0
Converts a list of Dataset with the same schema into a single Dataset.
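Example (illustrative):
>>> from datasets import Dataset, concatenate_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [3, 4, 5]})
>>> concatenate_datasets([d1, d2])["a"]
[0, 1, 2, 3, 4, 5]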
( datasets: typing.List[~DatasetType] probabilities: typing.Optional[typing.List[float]] = None seed: typing.Optional[int] = None info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None stopping_strategy: typing.Literal['first_exhausted', 'all_exhausted'] = 'first_exhausted' ) → Dataset or IterableDataset
Parameters
List[Dataset]
or List[IterableDataset]
) —
List of datasets to interleave. List[float]
, optional, defaults to None
) —
If specified, the new dataset is constructed by sampling
examples from one source at a time according to these probabilities. int
, optional, defaults to None
) —
The random seed used to choose a source for each example. Added in 2.4.0
Added in 2.4.0
str
, defaults to first_exhausted
) —
Two strategies are proposed right now, first_exhausted
and all_exhausted
.
By default, first_exhausted
is an undersampling strategy, i.e. the dataset construction is stopped as soon as one dataset runs out of samples.
If the strategy is all_exhausted
, we use an oversampling strategy, i.e. the dataset construction is stopped as soon as every sample of every dataset has been added at least once.
Note that if the strategy is all_exhausted
, the interleaved dataset size can get enormous: max_length_datasets * nb_dataset samples.
Returns
Return type depends on the input datasets
parameter. Dataset
if the input is a list of Dataset
, IterableDataset
if the input is a list of
IterableDataset
.
Interleave several datasets (sources) into a single dataset. The new dataset is constructed by alternating between the sources to get the examples.
You can use this function on a list of Dataset objects, or on a list of IterableDataset objects.
If probabilities is None (default), the new dataset is constructed by cycling between each source to get the examples.
If probabilities is not None, the new dataset is constructed by getting examples from a random source at a time according to the provided probabilities.
The resulting dataset ends when one of the source datasets runs out of examples, except when oversampling is True, in which case the resulting dataset ends when all datasets have run out of examples at least one time.
Note for iterable datasets:
In a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process. Therefore the "first_exhausted" strategy on a sharded iterable dataset can generate fewer samples in total (up to 1 missing sample per subdataset per worker).
Example:
For regular datasets (map-style):
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]
For datasets in streaming mode (iterable):
>>> from datasets import load_dataset, interleave_datasets
>>> d1 = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
>>> d2 = load_dataset("oscar", "unshuffled_deduplicated_fr", split="train", streaming=True)
>>> dataset = interleave_datasets([d1, d2])
>>> iterator = iter(dataset)
>>> next(iterator)
{'text': 'Mtendere Village was inspired by the vision...}
>>> next(iterator)
{'text': "Média de débat d'idées, de culture...}
( dataset: ~DatasetType rank: int world_size: int ) → Dataset or IterableDataset
Parameters
int
) —
Rank of the current node. int
) —
Total number of nodes. Returns
The dataset to be used on the node at rank rank
.
Split a dataset for the node at rank rank
in a pool of nodes of size world_size
.
For map-style datasets:
Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset. To maximize data loading throughput, chunks are made of contiguous data on disk if possible.
For iterable datasets:
If the dataset has a number of shards that is a factor of world_size
(i.e. if dataset.num_shards % world_size == 0
),
then the shards are evenly assigned across the nodes, which is the most optimized.
Otherwise, each node keeps 1 example out of world_size
, skipping the other examples.
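Example (illustrative; in practice, rank and world_size come from your distributed environment):
>>> from datasets import load_dataset
>>> from datasets.distributed import split_dataset_by_node
>>> ds = load_dataset("rotten_tomatoes", split="train")
>>> ds_rank0 = split_dataset_by_node(ds, rank=0, world_size=2)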
When applying transforms on a dataset, the data are stored in cache files. The caching mechanism makes it possible to reload an existing cache file if it has already been computed.
Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.
If caching is disabled, the library will no longer reload cached dataset files when applying transforms to the datasets; to regenerate a dataset from scratch in that case, use the download_mode parameter in load_dataset().
Dictionary with split names as keys ('train', 'test' for example), and Dataset
objects as values.
It also has dataset transform methods like map or filter, to process all the splits at once.
A dictionary (dict of str: datasets.Dataset) with dataset transform methods (map, filter, etc.)
The Apache Arrow tables backing each split.
The cache files containing the Apache Arrow table backing each split.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds.cache_files
{'test': [{'filename': '/root/.cache/huggingface/datasets/rotten_tomatoes_movie_review/default/1.0.0/40d411e45a6ce3484deed7cc15b82a53dad9a72aafd9f86f8f227134bec5ca46/rotten_tomatoes_movie_review-test.arrow'}],
'train': [{'filename': '/root/.cache/huggingface/datasets/rotten_tomatoes_movie_review/default/1.0.0/40d411e45a6ce3484deed7cc15b82a53dad9a72aafd9f86f8f227134bec5ca46/rotten_tomatoes_movie_review-train.arrow'}],
'validation': [{'filename': '/root/.cache/huggingface/datasets/rotten_tomatoes_movie_review/default/1.0.0/40d411e45a6ce3484deed7cc15b82a53dad9a72aafd9f86f8f227134bec5ca46/rotten_tomatoes_movie_review-validation.arrow'}]}
Number of columns in each split of the dataset.
Number of rows in each split of the dataset.
Names of the columns in each split of the dataset.
Shape of each split of the dataset (number of rows, number of columns).
( column: str ) → Dict[str
, list
]
Parameters
str
) —
column name (list all the column names with column_names) Returns
Dict[str
, list
]
Dictionary of unique elements in the given column.
Return a list of the unique elements in a column for each split.
This is implemented in the low-level backend and as such, very fast.
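Example (illustrative):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds.unique("label")  # dict mapping each split name to the unique values in that column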
Clean up all cache files in the dataset cache directory, except the currently used cache file if there is one. Be careful when running this command that no other process is currently using other cache files.
( function: typing.Optional[typing.Callable] = None with_indices: bool = False with_rank: bool = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 drop_last_batch: bool = False remove_columns: typing.Union[str, typing.List[str], NoneType] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_names: typing.Optional[typing.Dict[str, typing.Optional[str]]] = None writer_batch_size: typing.Optional[int] = 1000 features: typing.Optional[datasets.features.features.Features] = None disable_nullable: bool = False fn_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None desc: typing.Optional[str] = None )
Parameters
callable
) — with one of the following signature:
function(example: Dict[str, Any]) -> Dict[str, Any]
if batched=False
and with_indices=False
function(example: Dict[str, Any], indices: int) -> Dict[str, Any]
if batched=False
and with_indices=True
function(batch: Dict[str, List]) -> Dict[str, List]
if batched=True
and with_indices=False
function(batch: Dict[str, List], indices: List[int]) -> Dict[str, List]
if batched=True
and with_indices=True
For advanced usage, the function can also return a pyarrow.Table
.
Moreover if your function returns nothing (None
), then map
will run your function and return the dataset unchanged.
bool
, defaults to False
) —
Provide example indices to function
. Note that in this case the signature of function
should be def function(example, idx): ...
. bool
, defaults to False
) —
Provide process rank to function
. Note that in this case the
signature of function
should be def function(example[, idx], rank): ...
. [Union[str, List[str]]]
, optional, defaults to None
) —
The columns to be passed into function
as
positional arguments. If None
, a dict mapping to all formatted columns is passed as one argument. bool
, defaults to False
) —
Provide batch of examples to function
. int
, optional, defaults to 1000
) —
Number of examples per batch provided to function
if batched=True
,
batch_size <= 0
or batch_size == None
then provide the full dataset as a single batch to function
. bool
, defaults to False
) —
Whether a last batch smaller than the batch_size should be
dropped instead of being processed by the function. [Union[str, List[str]]]
, optional, defaults to None
) —
Remove a selection of columns while doing the mapping.
Columns will be removed before updating the examples with the output of function
, i.e. if function
is adding
columns with names in remove_columns
, these columns will be kept. bool
, defaults to False
) —
Keep the dataset in memory instead of writing it to a cache file. Optional[bool]
, defaults to True
if caching is enabled) —
If a cache file storing the current computation from function
can be identified, use it instead of recomputing. [Dict[str, str]]
, optional, defaults to None
) —
Provide the name of a path for the cache file. It is used to store the
results of the computation instead of the automatically generated cache file name.
You have to provide one cache_file_name
per dataset in the dataset dictionary. int
, default 1000
) —
Number of rows per write operation for the cache file writer.
This value is a good trade-off between memory usage during the processing, and processing speed.
A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map
. [datasets.Features]
, optional, defaults to None
) —
Use a specific Features to store the cache file
instead of the automatically generated one. bool
, defaults to False
) —
Disallow null values in the table. Dict
, optional, defaults to None
) —
Keyword arguments to be passed to function
int
, optional, defaults to None
) —
Number of processes for multiprocessing. By default it doesn’t
use multiprocessing. str
, optional, defaults to None
) —
Meaningful description to be displayed alongside the progress bar while mapping examples. Apply a function to all the elements in the table (individually or in batches) and update the table (if function does update examples). The transformation is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> def add_prefix(example):
... example["text"] = "Review: " + example["text"]
... return example
>>> ds = ds.map(add_prefix)
>>> ds["train"][0:3]["text"]
['Review: the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .',
'Review: the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .',
'Review: effective but too-tepid biopic']
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
# set number of processors
>>> ds = ds.map(add_prefix, num_proc=4)
( function: typing.Optional[typing.Callable] = None with_indices: bool = False with_rank: bool = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None cache_file_names: typing.Optional[typing.Dict[str, typing.Optional[str]]] = None writer_batch_size: typing.Optional[int] = 1000 fn_kwargs: typing.Optional[dict] = None num_proc: typing.Optional[int] = None desc: typing.Optional[str] = None )
Parameters
Callable
) — Callable with one of the following signatures:
function(example: Dict[str, Any]) -> bool
if batched=False
and with_indices=False
and with_rank=False
function(example: Dict[str, Any], *extra_args) -> bool
if batched=False
and with_indices=True
and/or with_rank=True
(one extra arg for each)function(batch: Dict[str, List]) -> List[bool]
if batched=True
and with_indices=False
and with_rank=False
function(batch: Dict[str, List], *extra_args) -> List[bool]
if batched=True
and with_indices=True
and/or with_rank=True
(one extra arg for each)If no function is provided, defaults to an always True
function: lambda x: True
.
bool
, defaults to False
) —
Provide example indices to function
. Note that in this case the
signature of function
should be def function(example, idx[, rank]): ...
. bool
, defaults to False
) —
Provide process rank to function
. Note that in this case the
signature of function
should be def function(example[, idx], rank): ...
. [Union[str, List[str]]]
, optional, defaults to None
) —
The columns to be passed into function
as
positional arguments. If None
, a dict mapping to all formatted columns is passed as one argument. bool
, defaults to False
) —
Provide batch of examples to function
. int
, optional, defaults to 1000
) —
Number of examples per batch provided to function
if batched=True
batch_size <= 0
or batch_size == None
then provide the full dataset as a single batch to function
. bool
, defaults to False
) —
Keep the dataset in memory instead of writing it to a cache file. Optional[bool]
, defaults to True
if caching is enabled) —
If a cache file storing the current computation from function
can be identified, use it instead of recomputing. [Dict[str, str]]
, optional, defaults to None
) —
Provide the name of a path for the cache file. It is used to store the
results of the computation instead of the automatically generated cache file name.
You have to provide one cache_file_name
per dataset in the dataset dictionary. int
, defaults to 1000
) —
Number of rows per write operation for the cache file writer.
This value is a good trade-off between memory usage during the processing, and processing speed.
A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map
. Dict
, optional, defaults to None
) —
Keyword arguments to be passed to function
int
, optional, defaults to None
) —
Number of processes for multiprocessing. By default it doesn’t
use multiprocessing. str
, optional, defaults to None
) —
Meaningful description to be displayed alongside the progress bar while filtering examples. Apply a filter function to all the elements in the table in batches and update the table so that the dataset only includes examples according to the filter function. The transformation is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds.filter(lambda x: x["label"] == 1)
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 4265
})
validation: Dataset({
features: ['text', 'label'],
num_rows: 533
})
test: Dataset({
features: ['text', 'label'],
num_rows: 533
})
})
( column_names: typing.Union[str, typing.Sequence[str]] reverse: typing.Union[bool, typing.Sequence[bool]] = False null_placement: str = 'at_end' keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None indices_cache_file_names: typing.Optional[typing.Dict[str, typing.Optional[str]]] = None writer_batch_size: typing.Optional[int] = 1000 )
Parameters
Union[str, Sequence[str]]
) —
Column name(s) to sort by. Union[bool, Sequence[bool]]
, defaults to False
) —
If True
, sort by descending order rather than ascending. If a single bool is provided,
the value is applied to the sorting of all column names. Otherwise a list of bools with the
same length and order as column_names must be provided. str
, defaults to at_end
) —
Put None
values at the beginning if at_start
or first
or at the end if at_end
or last
bool
, defaults to False
) —
Keep the sorted indices in memory instead of writing it to a cache file. Optional[bool]
, defaults to True
if caching is enabled) —
If a cache file storing the sorted indices
can be identified, use it instead of recomputing. [Dict[str, str]]
, optional, defaults to None
) —
Provide the name of a path for the cache file. It is used to store the
indices mapping instead of the automatically generated cache file name.
You have to provide one cache_file_name
per dataset in the dataset dictionary. int
, defaults to 1000
) —
Number of rows per write operation for the cache file writer.
A higher value gives smaller cache files, a lower value consumes less temporary memory. Create a new dataset sorted according to a single or multiple columns.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset('rotten_tomatoes')
>>> ds['train']['label'][:10]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>> sorted_ds = ds.sort('label')
>>> sorted_ds['train']['label'][:10]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> another_sorted_ds = ds.sort(['label', 'text'], reverse=[True, False])
>>> another_sorted_ds['train']['label'][:10]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
( seeds: typing.Union[int, typing.Dict[str, typing.Optional[int]], NoneType] = None seed: typing.Optional[int] = None generators: typing.Optional[typing.Dict[str, numpy.random._generator.Generator]] = None keep_in_memory: bool = False load_from_cache_file: typing.Optional[bool] = None indices_cache_file_names: typing.Optional[typing.Dict[str, typing.Optional[str]]] = None writer_batch_size: typing.Optional[int] = 1000 )
Parameters
Dict[str, int]
or int
, optional) —
A seed to initialize the default BitGenerator if generator=None
.
If None
, then fresh, unpredictable entropy will be pulled from the OS.
If an int
or array_like[ints]
is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state.
You can provide one seed
per dataset in the dataset dictionary. int
, optional) —
A seed to initialize the default BitGenerator if generator=None
. Alias for seeds (a ValueError
is raised if both are provided). Dict[str, *optional*, np.random.Generator]
) —
Numpy random Generator to use to compute the permutation of the dataset rows.
If generator=None
(default), uses np.random.default_rng
(the default BitGenerator (PCG64) of NumPy).
You have to provide one generator
per dataset in the dataset dictionary. bool
, defaults to False
) —
Keep the dataset in memory instead of writing it to a cache file. Optional[bool]
, defaults to True
if caching is enabled) —
If a cache file storing the current computation from function
can be identified, use it instead of recomputing. Dict[str, str]
, optional) —
Provide the name of a path for the cache file. It is used to store the
indices mappings instead of the automatically generated cache file name.
You have to provide one cache_file_name
per dataset in the dataset dictionary. int
, defaults to 1000
) —
Number of rows per write operation for the cache file writer.
This value is a good trade-off between memory usage during the processing, and processing speed.
A higher value makes the processing do fewer lookups, a lower value consumes less temporary memory while running map
. Create a new Dataset where the rows are shuffled.
The transformation is applied to all the datasets of the dataset dictionary.
Currently shuffling uses numpy random generators. You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy’s default random generator (PCG64).
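Example (illustrative):
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> shuffled_ds = ds.shuffle(seed=42)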
( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters
str
, optional) —
Output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax']
.
None
means __getitem__
returns python objects (default). List[str]
, optional) —
Columns to format in the output.
None
means __getitem__
returns all columns (default). bool
, defaults to False) —
Keep un-formatted columns as well in the output (as python objects), np.array
, torch.tensor
or tensorflow.ragged.constant
. Set __getitem__
return format (type and columns).
The format is set for every dataset in the dataset dictionary.
It is possible to call map after calling set_format. Since map may add new columns, the list of formatted columns gets updated. In this case, if you apply map on a dataset to add a new column, then this column will be formatted:
new formatted columns = (all columns - previously unformatted columns)
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, padding=True), batched=True)
>>> ds.set_format(type="numpy", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
>>> ds["train"].format
{'columns': ['input_ids', 'token_type_ids', 'attention_mask', 'label'],
'format_kwargs': {},
'output_all_columns': False,
'type': 'numpy'}
Reset __getitem__
return format to python objects and all columns.
The transformation is applied to all the datasets of the dataset dictionary.
Same as self.set_format()
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, padding=True), batched=True)
>>> ds.set_format(type="numpy", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
>>> ds["train"].format
{'columns': ['input_ids', 'token_type_ids', 'attention_mask', 'label'],
'format_kwargs': {},
'output_all_columns': False,
'type': 'numpy'}
>>> ds.reset_format()
>>> ds["train"].format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': None}
( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters
type (str, optional) —
Output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax'].
None means __getitem__ returns python objects (default).
columns (List[str], optional) —
Columns to format in the output.
None means __getitem__ returns all columns (default).
output_all_columns (bool, defaults to False) —
Keep un-formatted columns as well in the output (as python objects).
**format_kwargs (additional keyword arguments) —
Keyword arguments passed to the convert function, like np.array, torch.tensor or tensorflow.ragged.constant.
To be used in a with statement. Set __getitem__ return format (type and columns).
The transformation is applied to all the datasets of the dataset dictionary.
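A minimal sketch of the context-manager usage, assuming this documents DatasetDict.formatted_as and a DatasetDict ds whose splits already contain the tokenized columns from the examples above:
>>> with ds.formatted_as(type="numpy", columns=["input_ids", "label"]):
...     batch = ds["train"][:4]  # numpy arrays inside the block
>>> # the previous format is restored once the block exits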
( type: typing.Optional[str] = None columns: typing.Optional[typing.List] = None output_all_columns: bool = False **format_kwargs )
Parameters
type (str, optional) —
Output type selected in [None, 'numpy', 'torch', 'tensorflow', 'pandas', 'arrow', 'jax'].
None means __getitem__ returns python objects (default).
columns (List[str], optional) —
Columns to format in the output.
None means __getitem__ returns all columns (default).
output_all_columns (bool, defaults to False) —
Keep un-formatted columns as well in the output (as python objects).
**format_kwargs (additional keyword arguments) —
Keyword arguments passed to the convert function, like np.array, torch.tensor or tensorflow.ragged.constant.
Set __getitem__ return format (type and columns). The data formatting is applied on-the-fly.
The format type
(for example “numpy”) is used to format batches when using __getitem__
.
The format is set for every dataset in the dataset dictionary.
It’s also possible to use custom transforms for formatting using with_transform().
Contrary to set_format(), with_format
returns a new DatasetDict object with new Dataset objects.
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds["train"].format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': None}
>>> ds = ds.with_format("torch")
>>> ds["train"].format
{'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'],
'format_kwargs': {},
'output_all_columns': False,
'type': 'torch'}
>>> ds["train"][0]
{'text': 'compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .',
'label': tensor(1),
'input_ids': tensor([ 101, 18027, 16310, 16001, 1103, 9321, 178, 11604, 7235, 6617,
1742, 2165, 2820, 1206, 6588, 22572, 12937, 1811, 2153, 1105,
1147, 12890, 19587, 6463, 1105, 15026, 1482, 119, 102, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])}
( transform: typing.Optional[typing.Callable] columns: typing.Optional[typing.List] = None output_all_columns: bool = False )
Parameters
transform (Callable, optional) —
User-defined formatting transform, replaces the format defined by set_format().
A formatting function is a callable that takes a batch (as a dict) as input and returns a batch.
This function is applied right before returning the objects in __getitem__.
columns (List[str], optional) —
Columns to format in the output.
If specified, then the input batch of the transform only contains those columns.
output_all_columns (bool, defaults to False) —
Keep un-formatted columns as well in the output (as python objects).
If set to True, then the other un-formatted columns are kept with the output of the transform.
Set __getitem__ return format using this transform. The transform is applied on-the-fly on batches when __getitem__ is called.
The transform is set for every dataset in the dataset dictionary.
Like set_format(), this can be reset using reset_format().
Contrary to set_transform()
, with_transform
returns a new DatasetDict object with new Dataset objects.
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes")
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> def encode(example):
... return tokenizer(example['text'], truncation=True, padding=True, return_tensors="pt")
>>> ds = ds.with_transform(encode)
>>> ds["train"][0]
{'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1]),
'input_ids': tensor([ 101, 1103, 2067, 1110, 17348, 1106, 1129, 1103, 6880, 1432,
112, 188, 1207, 107, 14255, 1389, 107, 1105, 1115, 1119,
112, 188, 1280, 1106, 1294, 170, 24194, 1256, 3407, 1190,
170, 11791, 5253, 188, 1732, 7200, 10947, 12606, 2895, 117,
179, 7766, 118, 172, 15554, 1181, 3498, 6961, 3263, 1137,
188, 1566, 7912, 14516, 6997, 119, 102]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0])}
Flatten the Apache Arrow Table of each split (nested features are flattened). Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("squad")
>>> ds["train"].features
{'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None),
'context': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}
>>> ds.flatten()
DatasetDict({
train: Dataset({
features: ['id', 'title', 'context', 'question', 'answers.text', 'answers.answer_start'],
num_rows: 87599
})
validation: Dataset({
features: ['id', 'title', 'context', 'question', 'answers.text', 'answers.answer_start'],
num_rows: 10570
})
})
( features: Features )
Parameters
features (Features) —
New features to cast the dataset to.
The name of the fields in the features must match the current column names.
The type of the data must also be convertible from one type to the other.
For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the dataset.
Cast the dataset to a new set of features. The transformation is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> new_features = ds["train"].features.copy()
>>> new_features['label'] = ClassLabel(names=['bad', 'good'])
>>> new_features['text'] = Value('large_string')
>>> ds = ds.cast(new_features)
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='large_string', id=None)}
( column: str feature )
Cast column to feature for decoding.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> ds = ds.cast_column('label', ClassLabel(names=['bad', 'good']))
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='string', id=None)}
( column_names: typing.Union[str, typing.List[str]] ) → DatasetDict
Parameters
column_names (Union[str, List[str]]) —
Name of the column(s) to remove.
Returns
DatasetDict
A copy of the dataset object without the columns to remove.
Remove one or several column(s) from each split in the dataset and the features associated to the column(s).
The transformation is applied to all the splits of the dataset dictionary.
You can also remove a column using map() with remove_columns
but the present method
doesn’t copy the data of the remaining columns and is thus faster.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds = ds.remove_columns("label")
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 8530
})
validation: Dataset({
features: ['text'],
num_rows: 1066
})
test: Dataset({
features: ['text'],
num_rows: 1066
})
})
( original_column_name: str new_column_name: str )
Rename a column in the dataset and move the features associated to the original column under the new column name. The transformation is applied to all the datasets of the dataset dictionary.
You can also rename a column using map() with remove_columns, but the present method:
takes care of moving the original features under the new column name
doesn't copy the data to a new dataset and is thus much faster.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds = ds.rename_column("label", "label_new")
DatasetDict({
train: Dataset({
features: ['text', 'label_new'],
num_rows: 8530
})
validation: Dataset({
features: ['text', 'label_new'],
num_rows: 1066
})
test: Dataset({
features: ['text', 'label_new'],
num_rows: 1066
})
})
( column_mapping: typing.Dict[str, str] ) → DatasetDict
Parameters
column_mapping (Dict[str, str]) —
A mapping of columns to rename to their new names.
Returns
DatasetDict
A copy of the dataset with renamed columns.
Rename several columns in the dataset, and move the features associated to the original columns under the new column names. The transformation is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds.rename_columns({'text': 'text_new', 'label': 'label_new'})
DatasetDict({
train: Dataset({
features: ['text_new', 'label_new'],
num_rows: 8530
})
validation: Dataset({
features: ['text_new', 'label_new'],
num_rows: 1066
})
test: Dataset({
features: ['text_new', 'label_new'],
num_rows: 1066
})
})
( column_names: typing.Union[str, typing.List[str]] )
Select one or several column(s) from each split in the dataset and the features associated to the column(s).
The transformation is applied to all the splits of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes")
>>> ds.select_columns("text")
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 8530
})
validation: Dataset({
features: ['text'],
num_rows: 1066
})
test: Dataset({
features: ['text'],
num_rows: 1066
})
})
( column: str include_nulls: bool = False )
Casts the given column as ClassLabel and updates the tables.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("boolq")
>>> ds["train"].features
{'answer': Value(dtype='bool', id=None),
'passage': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None)}
>>> ds = ds.class_encode_column("answer")
>>> ds["train"].features
{'answer': ClassLabel(num_classes=2, names=['False', 'True'], id=None),
'passage': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None)}
( repo_id config_name: str = 'default' set_default: typing.Optional[bool] = None data_dir: typing.Optional[str] = None commit_message: typing.Optional[str] = None commit_description: typing.Optional[str] = None private: typing.Optional[bool] = False token: typing.Optional[str] = None revision: typing.Optional[str] = None create_pr: typing.Optional[bool] = False max_shard_size: typing.Union[str, int, NoneType] = None num_shards: typing.Optional[typing.Dict[str, int]] = None embed_external_files: bool = True )
Parameters
repo_id (str) —
The ID of the repository to push to in the following format: <user>/<dataset_name> or <org>/<dataset_name>.
Also accepts <dataset_name>, which will default to the namespace of the logged-in user.
config_name (str, defaults to "default") —
Configuration name of a dataset.
set_default (bool, optional) —
Whether to set this configuration as the default one. Otherwise, the default configuration is the one named "default".
data_dir (str, optional) —
Directory name that will contain the uploaded data files. Defaults to the config_name if different from "default", else "data".
Added in 2.17.0
commit_message (str, optional) —
Message to commit while pushing. Will default to "Upload dataset".
commit_description (str, optional) —
Description of the commit that will be created.
Additionally, description of the PR if a PR is created (create_pr is True).
Added in 2.16.0
private (bool, optional, defaults to False) —
Whether the dataset repository should be set to private or not. Only affects repository creation:
a repository that already exists will not be affected by that parameter.
token (str, optional) —
An optional authentication token for the Hugging Face Hub. If no token is passed, will default
to the token saved locally when logging in with huggingface-cli login. Will raise an error
if no token is passed and the user is not logged-in.
revision (str, optional) —
Branch to push the uploaded files to. Defaults to the "main" branch.
Added in 2.15.0
create_pr (bool, optional, defaults to False) —
Whether to create a PR with the uploaded files or directly commit.
Added in 2.15.0
max_shard_size (int or str, optional, defaults to "500MB") —
The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like "500MB" or "1GB").
num_shards (Dict[str, int], optional) —
Number of shards to write. By default, the number of shards depends on max_shard_size.
Use a dictionary to define a different num_shards for each split.
Added in 2.8.0
embed_external_files (bool, defaults to True) —
Whether to embed file bytes in the shards.
In particular, this will do the following before the push for the fields of type:
Audio and Image: remove local path information and embed file content in the Parquet files.
Pushes the DatasetDict to the hub as a Parquet dataset. The DatasetDict is pushed using HTTP requests and does not require git or git-lfs to be installed.
Each dataset split will be pushed independently. The pushed dataset will keep the original split names.
The resulting Parquet files are self-contained by default: if your dataset contains Image or Audio
data, the Parquet files will store the bytes of your images or audio files.
You can disable this by setting embed_external_files
to False.
Example:
>>> dataset_dict.push_to_hub("<organization>/<dataset_id>")
>>> dataset_dict.push_to_hub("<organization>/<dataset_id>", private=True)
>>> dataset_dict.push_to_hub("<organization>/<dataset_id>", max_shard_size="1GB")
>>> dataset_dict.push_to_hub("<organization>/<dataset_id>", num_shards={"train": 1024, "test": 8})
If you want to add a new configuration (or subset) to a dataset (e.g. if the dataset has multiple tasks/versions/languages):
>>> english_dataset.push_to_hub("<organization>/<dataset_id>", "en")
>>> french_dataset.push_to_hub("<organization>/<dataset_id>", "fr")
>>> # later
>>> english_dataset = load_dataset("<organization>/<dataset_id>", "en")
>>> french_dataset = load_dataset("<organization>/<dataset_id>", "fr")
( dataset_dict_path: typing.Union[str, bytes, os.PathLike] max_shard_size: typing.Union[str, int, NoneType] = None num_shards: typing.Optional[typing.Dict[str, int]] = None num_proc: typing.Optional[int] = None storage_options: typing.Optional[dict] = None )
Parameters
dataset_dict_path (path-like) —
Path (e.g. dataset/train) or remote URI (e.g. s3://my-bucket/dataset/train)
of the dataset dict directory where the dataset dict will be saved to.
max_shard_size (int or str, optional, defaults to "500MB") —
The maximum size of the dataset shards to be written. If expressed as a string, needs to be digits followed by a unit (like "50MB").
num_shards (Dict[str, int], optional) —
Number of shards to write. By default the number of shards depends on max_shard_size and num_proc.
You need to provide the number of shards for each dataset in the dataset dictionary.
Use a dictionary to define a different num_shards for each split.
Added in 2.8.0
num_proc (int, optional, defaults to None) —
Number of processes when downloading and generating the dataset locally.
Multiprocessing is disabled by default.
Added in 2.8.0
storage_options (dict, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.8.0
Saves a dataset dict to a filesystem using fsspec.spec.AbstractFileSystem.
For Image, Audio and Video data:
All the Image(), Audio() and Video() data are stored in the arrow files. If you want to store paths or urls, please use the Value("string") type.
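A minimal usage sketch (the local path, bucket name and storage_options values are illustrative):
>>> dataset_dict.save_to_disk("path/to/dataset/directory")
>>> # control the number of shards per split
>>> dataset_dict.save_to_disk("path/to/dataset/directory", num_shards={"train": 8, "test": 1})
>>> # save to a remote filesystem through fsspec
>>> dataset_dict.save_to_disk("s3://my-bucket/dataset", storage_options={"anon": False})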
( dataset_dict_path: typing.Union[str, bytes, os.PathLike] keep_in_memory: typing.Optional[bool] = None storage_options: typing.Optional[dict] = None )
Parameters
dataset_dict_path (path-like) —
Path (e.g. "dataset/train") or remote URI (e.g. "s3://my-bucket/dataset/train")
of the dataset dict directory where the dataset dict will be loaded from.
keep_in_memory (bool, defaults to None) —
Whether to copy the dataset in-memory. If None, the
dataset will not be copied in-memory unless explicitly enabled by setting
datasets.config.IN_MEMORY_MAX_SIZE to nonzero. See more details in the
improve performance section.
storage_options (dict, optional) —
Key/value pairs to be passed on to the file-system backend, if any.
Added in 2.8.0
Load a dataset that was previously saved using save_to_disk from a filesystem using fsspec.spec.AbstractFileSystem.
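A minimal usage sketch (the paths are illustrative):
>>> from datasets import DatasetDict
>>> dataset_dict = DatasetDict.load_from_disk("path/to/dataset/directory")
>>> # or from a remote filesystem through fsspec
>>> dataset_dict = DatasetDict.load_from_disk("s3://my-bucket/dataset", storage_options={"anon": False})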
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]] features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False **kwargs )
Parameters
path_or_paths (dict of path-like) —
Path(s) of the CSV file(s).
features (Features, optional) —
Dataset features.
cache_dir (str, optional, defaults to "~/.cache/huggingface/datasets") —
Directory to cache data.
keep_in_memory (bool, defaults to False) —
Whether to copy the data in-memory.
**kwargs (additional keyword arguments) —
Keyword arguments to be passed to pandas.read_csv.
Create DatasetDict from CSV file(s).
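A minimal usage sketch (the file names are illustrative; the keys of the dict become split names):
>>> from datasets import DatasetDict
>>> ds = DatasetDict.from_csv({"train": "path/to/train.csv", "test": "path/to/test.csv"})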
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]] features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False **kwargs )
Parameters
path_or_paths (path-like or list of path-like) —
Path(s) of the JSON Lines file(s).
features (Features, optional) —
Dataset features.
cache_dir (str, optional, defaults to "~/.cache/huggingface/datasets") —
Directory to cache data.
keep_in_memory (bool, defaults to False) —
Whether to copy the data in-memory.
**kwargs (additional keyword arguments) —
Keyword arguments to be passed to JsonConfig.
Create DatasetDict from JSON Lines file(s).
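A minimal usage sketch (the file names are illustrative):
>>> from datasets import DatasetDict
>>> ds = DatasetDict.from_json({"train": "path/to/train.jsonl", "test": "path/to/test.jsonl"})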
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]] features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False columns: typing.Optional[typing.List[str]] = None **kwargs )
Parameters
path_or_paths (dict of path-like) —
Path(s) of the Parquet file(s).
features (Features, optional) —
Dataset features.
cache_dir (str, optional, defaults to "~/.cache/huggingface/datasets") —
Directory to cache data.
keep_in_memory (bool, defaults to False) —
Whether to copy the data in-memory.
columns (List[str], optional) —
If not None, only these columns will be read from the file.
A column name may be a prefix of a nested field, e.g. 'a' will select
'a.b', 'a.c', and 'a.d.e'.
**kwargs (additional keyword arguments) —
Keyword arguments to be passed to ParquetConfig.
Create DatasetDict from Parquet file(s).
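A minimal usage sketch (the file names are illustrative):
>>> from datasets import DatasetDict
>>> ds = DatasetDict.from_parquet({"train": "path/to/train.parquet", "test": "path/to/test.parquet"})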
( path_or_paths: typing.Dict[str, typing.Union[str, bytes, os.PathLike]] features: typing.Optional[datasets.features.features.Features] = None cache_dir: str = None keep_in_memory: bool = False **kwargs )
Parameters
path_or_paths (dict of path-like) —
Path(s) of the text file(s).
features (Features, optional) —
Dataset features.
cache_dir (str, optional, defaults to "~/.cache/huggingface/datasets") —
Directory to cache data.
keep_in_memory (bool, defaults to False) —
Whether to copy the data in-memory.
**kwargs (additional keyword arguments) —
Keyword arguments to be passed to TextConfig.
Create DatasetDict from text file(s).
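A minimal usage sketch (the file names are illustrative; each line of a text file becomes one example):
>>> from datasets import DatasetDict
>>> ds = DatasetDict.from_text({"train": "path/to/train.txt", "test": "path/to/test.txt"})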
The base class IterableDataset implements an iterable Dataset backed by python generators.
( ex_iterable: _BaseExamplesIterable info: typing.Optional[datasets.info.DatasetInfo] = None split: typing.Optional[datasets.splits.NamedSplit] = None formatting: typing.Optional[datasets.iterable_dataset.FormattingConfig] = None shuffling: typing.Optional[datasets.iterable_dataset.ShufflingConfig] = None distributed: typing.Optional[datasets.iterable_dataset.DistributedConfig] = None token_per_repo_id: typing.Optional[typing.Dict[str, typing.Union[str, bool, NoneType]]] = None )
A Dataset backed by an iterable.
( generator: typing.Callable features: typing.Optional[datasets.features.features.Features] = None gen_kwargs: typing.Optional[dict] = None split: NamedSplit = NamedSplit('train') ) → IterableDataset
Parameters
generator (Callable) —
A generator function that yields examples.
features (Features, optional) —
Dataset features.
gen_kwargs (dict, optional) —
Keyword arguments to be passed to the generator callable.
You can define a sharded iterable dataset by passing the list of shards in gen_kwargs.
This can be used to improve shuffling and when iterating over the dataset with multiple workers.
split (NamedSplit, defaults to Split.TRAIN) —
Split name to be assigned to the dataset.
Added in 2.21.0
Returns
IterableDataset
Create an Iterable Dataset from a generator.
Example:
>>> def gen():
... yield {"text": "Good", "label": 0}
... yield {"text": "Bad", "label": 1}
...
>>> ds = IterableDataset.from_generator(gen)
>>> def gen(shards):
... for shard in shards:
... with open(shard) as f:
... for line in f:
... yield {"line": line}
...
>>> shards = [f"data{i}.txt" for i in range(32)]
>>> ds = IterableDataset.from_generator(gen, gen_kwargs={"shards": shards})
>>> ds = ds.shuffle(seed=42, buffer_size=10_000) # shuffles the shards order + uses a shuffle buffer
>>> from torch.utils.data import DataLoader
>>> dataloader = DataLoader(ds.with_format("torch"), num_workers=4) # give each worker a subset of 32/4=8 shards
( column_names: typing.Union[str, typing.List[str]] ) → IterableDataset
Remove one or several column(s) in the dataset and the features associated to them. The removal is done on-the-fly on the examples when iterating over the dataset.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> next(iter(ds))
{'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .', 'label': 1}
>>> ds = ds.remove_columns("label")
>>> next(iter(ds))
{'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
( column_names: typing.Union[str, typing.List[str]] ) → IterableDataset
Select one or several column(s) in the dataset and the features associated to them. The selection is done on-the-fly on the examples when iterating over the dataset.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> next(iter(ds))
{'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .', 'label': 1}
>>> ds = ds.select_columns("text")
>>> next(iter(ds))
{'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
( column: str feature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.LargeList, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image, datasets.features.video.Video] ) → IterableDataset
Cast column to feature for decoding.
Example:
>>> from datasets import load_dataset, Audio
>>> ds = load_dataset("PolyAI/minds14", name="en-US", split="train", streaming=True)
>>> ds.features
{'audio': Audio(sampling_rate=8000, mono=True, decode=True, id=None),
'english_transcription': Value(dtype='string', id=None),
'intent_class': ClassLabel(num_classes=14, names=['abroad', 'address', 'app_error', 'atm_limit', 'balance', 'business_loan', 'card_issues', 'cash_deposit', 'direct_debit', 'freeze', 'high_value_payment', 'joint_account', 'latest_transactions', 'pay_bill'], id=None),
'lang_id': ClassLabel(num_classes=14, names=['cs-CZ', 'de-DE', 'en-AU', 'en-GB', 'en-US', 'es-ES', 'fr-FR', 'it-IT', 'ko-KR', 'nl-NL', 'pl-PL', 'pt-PT', 'ru-RU', 'zh-CN'], id=None),
'path': Value(dtype='string', id=None),
'transcription': Value(dtype='string', id=None)}
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16000))
>>> ds.features
{'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None),
'english_transcription': Value(dtype='string', id=None),
'intent_class': ClassLabel(num_classes=14, names=['abroad', 'address', 'app_error', 'atm_limit', 'balance', 'business_loan', 'card_issues', 'cash_deposit', 'direct_debit', 'freeze', 'high_value_payment', 'joint_account', 'latest_transactions', 'pay_bill'], id=None),
'lang_id': ClassLabel(num_classes=14, names=['cs-CZ', 'de-DE', 'en-AU', 'en-GB', 'en-US', 'es-ES', 'fr-FR', 'it-IT', 'ko-KR', 'nl-NL', 'pl-PL', 'pt-PT', 'ru-RU', 'zh-CN'], id=None),
'path': Value(dtype='string', id=None),
'transcription': Value(dtype='string', id=None)}
( features: Features ) → IterableDataset
Parameters
features (Features) —
New features to cast the dataset to.
The name of the fields in the features must match the current column names.
The type of the data must also be convertible from one type to the other.
For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.
Returns
IterableDataset
A copy of the dataset with casted features.
Cast the dataset to a new set of features.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> new_features = ds.features.copy()
>>> new_features["label"] = ClassLabel(names=["bad", "good"])
>>> new_features["text"] = Value("large_string")
>>> ds = ds.cast(new_features)
>>> ds.features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='large_string', id=None)}
( batch_size: int drop_last_batch: bool = False )
Iterate through the batches of size batch_size.
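A minimal sketch, assuming this documents IterableDataset.iter and using the streaming dataset from the other examples:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> for batch in ds.iter(batch_size=4):
...     print(len(batch["text"]))  # each batch is a dict of 4-element lists
...     break
4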
( function: typing.Optional[typing.Callable] = None with_indices: bool = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 drop_last_batch: bool = False remove_columns: typing.Union[str, typing.List[str], NoneType] = None features: typing.Optional[datasets.features.features.Features] = None fn_kwargs: typing.Optional[dict] = None )
Parameters
function (Callable, optional, defaults to None) —
Function applied on-the-fly on the examples when you iterate on the dataset.
It must have one of the following signatures:
function(example: Dict[str, Any]) -> Dict[str, Any] if batched=False and with_indices=False
function(example: Dict[str, Any], idx: int) -> Dict[str, Any] if batched=False and with_indices=True
function(batch: Dict[str, List]) -> Dict[str, List] if batched=True and with_indices=False
function(batch: Dict[str, List], indices: List[int]) -> Dict[str, List] if batched=True and with_indices=True
For advanced usage, the function can also return a pyarrow.Table.
Moreover if your function returns nothing (None), then map will run your function and return the dataset unchanged.
If no function is provided, default to identity function: lambda x: x.
with_indices (bool, defaults to False) —
Provide example indices to function. Note that in this case the signature of function should be def function(example, idx[, rank]): ....
input_columns (Optional[Union[str, List[str]]], defaults to None) —
The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
batched (bool, defaults to False) —
Provide batch of examples to function.
batch_size (int, optional, defaults to 1000) —
Number of examples per batch provided to function if batched=True.
If batch_size <= 0 or batch_size == None, then provide the full dataset as a single batch to function.
drop_last_batch (bool, defaults to False) —
Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function.
remove_columns ([List[str]], optional, defaults to None) —
Remove a selection of columns while doing the mapping.
Columns will be removed before updating the examples with the output of function, i.e. if function is adding
columns with names in remove_columns, these columns will be kept.
features ([Features], optional, defaults to None) —
Feature types of the resulting dataset.
fn_kwargs (Dict, optional, defaults to None) —
Keyword arguments to be passed to function.
Apply a function to all the examples in the iterable dataset (individually or in batches) and update them. If your function returns a column that already exists, then it overwrites it. The function is applied on-the-fly on the examples when iterating over the dataset.
You can specify whether the function should be batched or not with the batched parameter:
If batched is False, then the function takes 1 example in and should return 1 example.
An example is a dictionary, e.g. {"text": "Hello there !"}.
If batched is True and batch_size is 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples.
A batch is a dictionary, e.g. a batch of 1 example is {"text": ["Hello there !"]}.
If batched is True and batch_size is n > 1, then the function takes a batch of n examples as input and can return a batch with n examples, or with an arbitrary number of examples.
Note that the last batch may have less than n examples.
A batch is a dictionary, e.g. a batch of n examples is {"text": ["Hello there !"] * n}.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> def add_prefix(example):
... example["text"] = "Review: " + example["text"]
... return example
>>> ds = ds.map(add_prefix)
>>> list(ds.take(3))
[{'label': 1,
'text': 'Review: the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},
{'label': 1,
'text': 'Review: the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'},
{'label': 1, 'text': 'Review: effective but too-tepid biopic'}]
( original_column_name: str new_column_name: str ) → IterableDataset
Rename a column in the dataset, and move the features associated to the original column under the new column name.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> next(iter(ds))
{'label': 1,
'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
>>> ds = ds.rename_column("text", "movie_review")
>>> next(iter(ds))
{'label': 1,
'movie_review': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
( function: typing.Optional[typing.Callable] = None with_indices = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 fn_kwargs: typing.Optional[dict] = None )
Parameters
function (Callable) —
Callable with one of the following signatures:
function(example: Dict[str, Any]) -> bool if with_indices=False, batched=False
function(example: Dict[str, Any], indices: int) -> bool if with_indices=True, batched=False
function(example: Dict[str, List]) -> List[bool] if with_indices=False, batched=True
function(example: Dict[str, List], indices: List[int]) -> List[bool] if with_indices=True, batched=True
If no function is provided, defaults to an always True function: lambda x: True.
with_indices (bool, defaults to False) —
Provide example indices to function. Note that in this case the signature of function should be def function(example, idx): ....
input_columns (str or List[str], optional) —
The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
batched (bool, defaults to False) —
Provide batch of examples to function.
batch_size (int, optional, defaults to 1000) —
Number of examples per batch provided to function if batched=True.
fn_kwargs (Dict, optional, defaults to None) —
Keyword arguments to be passed to function.
Apply a filter function to all the elements so that the dataset only includes examples according to the filter function. The filtering is done on-the-fly when iterating over the dataset.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> ds = ds.filter(lambda x: x["label"] == 0)
>>> list(ds.take(3))
[{'label': 0, 'text': 'simplistic , silly and tedious .'},
{'label': 0,
'text': "it's so laddish and juvenile , only teenage boys could possibly find it funny ."},
{'label': 0,
'text': 'exploitative and largely devoid of the depth or sophistication that would make watching such a graphic treatment of the crimes bearable .'}]
( seed = None generator: typing.Optional[numpy.random._generator.Generator] = None buffer_size: int = 1000 )
Parameters
seed (int, optional, defaults to None) —
Random seed that will be used to shuffle the dataset.
It is used to sample from the shuffle buffer and also to shuffle the data shards.
generator (numpy.random.Generator, optional) —
Numpy random Generator to use to compute the permutation of the dataset rows.
If generator=None (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
buffer_size (int, defaults to 1000) —
Size of the buffer.
Randomly shuffles the elements of this dataset.
This dataset fills a buffer with buffer_size
elements, then randomly samples elements from this buffer,
replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or
equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but buffer_size
is set to 1000, then shuffle
will
initially select a random element from only the first 1000 elements in the buffer. Once an element is
selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element,
maintaining the 1000 element buffer.
If the dataset is made of several shards, it also shuffles the order of the shards. However, if the order has been fixed by using skip() or take(), then the order of the shards is kept unchanged.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> list(ds.take(3))
[{'label': 1,
'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},
{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'},
{'label': 1, 'text': 'effective but too-tepid biopic'}]
>>> shuffled_ds = ds.shuffle(seed=42)
>>> list(shuffled_ds.take(3))
[{'label': 1,
'text': "a sports movie with action that's exciting on the field and a story you care about off it ."},
{'label': 1,
'text': 'at its best , the good girl is a refreshingly adult take on adultery . . .'},
{'label': 1,
'text': "sam jones became a very lucky filmmaker the day wilco got dropped from their record label , proving that one man's ruin may be another's fortune ."}]
( batch_size: int drop_last_batch: bool = False )
Group samples from the dataset into batches.
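A minimal sketch, assuming this documents IterableDataset.batch and using the streaming dataset from the other examples:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> batched_ds = ds.batch(batch_size=4, drop_last_batch=True)
>>> batch = next(iter(batched_ds))  # one example of batched_ds is a batch of 4 rows
>>> len(batch["text"])
4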
Create a new IterableDataset that skips the first n
elements.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> list(ds.take(3))
[{'label': 1,
'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},
{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'},
{'label': 1, 'text': 'effective but too-tepid biopic'}]
>>> ds = ds.skip(1)
>>> list(ds.take(3))
[{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'},
{'label': 1, 'text': 'effective but too-tepid biopic'},
{'label': 1,
'text': 'if you sometimes like to go to the movies to have fun , wasabi is a good place to start .'}]
Create a new IterableDataset with only the first n
elements.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> small_ds = ds.take(2)
>>> list(small_ds)
[{'label': 1,
'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},
{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'}]
( num_shards: int index: int contiguous: bool = True )
Return the index
-nth shard from dataset split into num_shards
pieces.
This shards deterministically. dataset.shard(n, i)
splits the dataset into contiguous chunks,
so it can be easily concatenated back together after processing. If dataset.num_shards % n == l
, then the
first l
datasets each have (dataset.num_shards // n) + 1
shards, and the remaining datasets have (dataset.num_shards // n)
shards.
datasets.concatenate_datasets([dset.shard(n, i) for i in range(n)])
returns a dataset with the same order as the original.
In particular, dataset.shard(dataset.num_shards, i)
returns a dataset with 1 shard.
Note: n should be less than or equal to the number of shards in the dataset dataset.num_shards.
On the other hand, dataset.shard(n, i, contiguous=False)
contains all the shards of the dataset whose index mod n = i
.
Be sure to shard before using any randomizing operator (such as shuffle
).
It is best if the shard operator is used early in the dataset pipeline.
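A minimal sketch of contiguous sharding (the toy dataset and shard counts are illustrative):
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"a": range(12)}).to_iterable_dataset(num_shards=4)
>>> ds.num_shards
4
>>> first_half = ds.shard(num_shards=2, index=0)  # keeps a contiguous chunk of the underlying shards
>>> first_half.num_shards
2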
Load the state_dict of the dataset. The iteration will restart at the next example from when the state was saved.
Resuming returns exactly where the checkpoint was saved except in two cases:
examples from shuffle buffers are lost when resuming, and the buffers are refilled with new data
combinations of .with_format(arrow) and batched .map() may skip one batch.
Example:
>>> from datasets import Dataset, concatenate_datasets
>>> ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)
>>> for idx, example in enumerate(ds):
... print(example)
... if idx == 2:
... state_dict = ds.state_dict()
... print("checkpoint")
... break
>>> ds.load_state_dict(state_dict)
>>> print(f"restart from checkpoint")
>>> for example in ds:
... print(example)
which returns:
{'a': 0}
{'a': 1}
{'a': 2}
checkpoint
restart from checkpoint
{'a': 3}
{'a': 4}
{'a': 5}
>>> from torchdata.stateful_dataloader import StatefulDataLoader
>>> from datasets import load_dataset
>>> ds = load_dataset("deepmind/code_contests", streaming=True, split="train")
>>> dataloader = StatefulDataLoader(ds, batch_size=32, num_workers=4)
>>> # checkpoint
>>> state_dict = dataloader.state_dict() # uses ds.state_dict() under the hood
>>> # resume from checkpoint
>>> dataloader.load_state_dict(state_dict) # uses ds.load_state_dict() under the hood
Get the current state_dict of the dataset. It corresponds to the state at the latest example it yielded.
Resuming returns exactly where the checkpoint was saved except in two cases:
examples from shuffle buffers are lost when resuming, and the buffers are refilled with new data
combinations of .with_format(arrow) and batched .map() may skip one batch.
Example:
>>> from datasets import Dataset, concatenate_datasets
>>> ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)
>>> for idx, example in enumerate(ds):
... print(example)
... if idx == 2:
... state_dict = ds.state_dict()
... print("checkpoint")
... break
>>> ds.load_state_dict(state_dict)
>>> print(f"restart from checkpoint")
>>> for example in ds:
... print(example)
which returns:
{'a': 0}
{'a': 1}
{'a': 2}
checkpoint
restart from checkpoint
{'a': 3}
{'a': 4}
{'a': 5}
>>> from torchdata.stateful_dataloader import StatefulDataLoader
>>> from datasets import load_dataset
>>> ds = load_dataset("deepmind/code_contests", streaming=True, split="train")
>>> dataloader = StatefulDataLoader(ds, batch_size=32, num_workers=4)
>>> # checkpoint
>>> state_dict = dataloader.state_dict() # uses ds.state_dict() under the hood
>>> # resume from checkpoint
>>> dataloader.load_state_dict(state_dict) # uses ds.load_state_dict() under the hood
DatasetInfo object containing all the metadata in the dataset.
NamedSplit object corresponding to a named dataset split.
Dictionary with split names as keys (‘train’, ‘test’ for example), and IterableDataset
objects as values.
( function: typing.Optional[typing.Callable] = None with_indices: bool = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: int = 1000 drop_last_batch: bool = False remove_columns: typing.Union[str, typing.List[str], NoneType] = None fn_kwargs: typing.Optional[dict] = None )
Parameters
function (Callable, optional, defaults to None) —
Function applied on-the-fly on the examples when you iterate on the dataset.
It must have one of the following signatures:
function(example: Dict[str, Any]) -> Dict[str, Any] if batched=False and with_indices=False
function(example: Dict[str, Any], idx: int) -> Dict[str, Any] if batched=False and with_indices=True
function(batch: Dict[str, List]) -> Dict[str, List] if batched=True and with_indices=False
function(batch: Dict[str, List], indices: List[int]) -> Dict[str, List] if batched=True and with_indices=True
For advanced usage, the function can also return a pyarrow.Table.
Moreover if your function returns nothing (None), then map will run your function and return the dataset unchanged.
If no function is provided, default to identity function: lambda x: x.
with_indices (bool, defaults to False) —
Provide example indices to function. Note that in this case the signature of function should be def function(example, idx[, rank]): ....
input_columns ([Union[str, List[str]]], optional, defaults to None) —
The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
batched (bool, defaults to False) —
Provide batch of examples to function.
batch_size (int, optional, defaults to 1000) —
Number of examples per batch provided to function if batched=True.
drop_last_batch (bool, defaults to False) —
Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function.
remove_columns ([List[str]], optional, defaults to None) —
Remove a selection of columns while doing the mapping.
Columns will be removed before updating the examples with the output of function, i.e. if function is adding
columns with names in remove_columns, these columns will be kept.
fn_kwargs (Dict, optional, defaults to None) —
Keyword arguments to be passed to function.
Apply a function to all the examples in the iterable dataset (individually or in batches) and update them. If your function returns a column that already exists, then it overwrites it. The function is applied on-the-fly on the examples when iterating over the dataset. The transformation is applied to all the datasets of the dataset dictionary.
You can specify whether the function should be batched or not with the batched parameter:
If batched is False, then the function takes 1 example in and should return 1 example.
An example is a dictionary, e.g. {"text": "Hello there !"}.
If batched is True and batch_size is 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples.
A batch is a dictionary, e.g. a batch of 1 example is {"text": ["Hello there !"]}.
If batched is True and batch_size is n > 1, then the function takes a batch of n examples as input and can return a batch with n examples, or with an arbitrary number of examples.
Note that the last batch may have less than n examples.
A batch is a dictionary, e.g. a batch of n examples is {"text": ["Hello there !"] * n}.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> def add_prefix(example):
... example["text"] = "Review: " + example["text"]
... return example
>>> ds = ds.map(add_prefix)
>>> next(iter(ds["train"]))
{'label': 1,
'text': 'Review: the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
( function: typing.Optional[typing.Callable] = None with_indices = False input_columns: typing.Union[str, typing.List[str], NoneType] = None batched: bool = False batch_size: typing.Optional[int] = 1000 fn_kwargs: typing.Optional[dict] = None )
Parameters
function (Callable) —
Callable with one of the following signatures:
function(example: Dict[str, Any]) -> bool if with_indices=False, batched=False
function(example: Dict[str, Any], indices: int) -> bool if with_indices=True, batched=False
function(example: Dict[str, List]) -> List[bool] if with_indices=False, batched=True
function(example: Dict[str, List], indices: List[int]) -> List[bool] if with_indices=True, batched=True
If no function is provided, defaults to an always True function: lambda x: True.
with_indices (bool, defaults to False) —
Provide example indices to function. Note that in this case the signature of function should be def function(example, idx): ....
input_columns (str or List[str], optional) —
The columns to be passed into function as positional arguments. If None, a dict mapping to all formatted columns is passed as one argument.
batched (bool, defaults to False) —
Provide batch of examples to function.
batch_size (int, optional, defaults to 1000) —
Number of examples per batch provided to function if batched=True.
fn_kwargs (Dict, optional, defaults to None) —
Keyword arguments to be passed to function.
Apply a filter function to all the elements so that the dataset only includes examples according to the filter function. The filtering is done on-the-fly when iterating over the dataset. The filtering is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> ds = ds.filter(lambda x: x["label"] == 0)
>>> list(ds["train"].take(3))
[{'label': 0, 'text': 'simplistic , silly and tedious .'},
{'label': 0,
'text': "it's so laddish and juvenile , only teenage boys could possibly find it funny ."},
{'label': 0,
'text': 'exploitative and largely devoid of the depth or sophistication that would make watching such a graphic treatment of the crimes bearable .'}]
( seed = None generator: typing.Optional[numpy.random._generator.Generator] = None buffer_size: int = 1000 )
Parameters
seed (int, optional, defaults to None) —
Random seed that will be used to shuffle the dataset.
It is used to sample from the shuffle buffer and also to shuffle the data shards.
generator (numpy.random.Generator, optional) —
Numpy random Generator to use to compute the permutation of the dataset rows.
If generator=None (default), uses np.random.default_rng (the default BitGenerator (PCG64) of NumPy).
buffer_size (int, defaults to 1000) —
Size of the buffer.
Randomly shuffles the elements of this dataset. The shuffling is applied to all the datasets of the dataset dictionary.
This dataset fills a buffer with buffer_size elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but buffer_size
is set to 1000, then shuffle
will
initially select a random element from only the first 1000 elements in the buffer. Once an element is
selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element,
maintaining the 1000 element buffer.
If the dataset is made of several shards, it also shuffles the order of the shards.
However, if the order has been fixed by using skip() or take(), then the order of the shards is kept unchanged.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> list(ds["train"].take(3))
[{'label': 1,
'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'},
{'label': 1,
'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'},
{'label': 1, 'text': 'effective but too-tepid biopic'}]
>>> ds = ds.shuffle(seed=42)
>>> list(ds["train"].take(3))
[{'label': 1,
'text': "a sports movie with action that's exciting on the field and a story you care about off it ."},
{'label': 1,
'text': 'at its best , the good girl is a refreshingly adult take on adultery . . .'},
{'label': 1,
'text': "sam jones became a very lucky filmmaker the day wilco got dropped from their record label , proving that one man's ruin may be another's fortune ."}]
( type: typing.Optional[str] = None )
Return a dataset with the specified format. The ‘pandas’ format is currently not implemented.
Example:
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> ds = load_dataset("rotten_tomatoes", split="validation", streaming=True)
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)
>>> ds = ds.with_format("torch")
>>> next(iter(ds))
{'text': 'compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .',
'label': tensor(1),
'input_ids': tensor([ 101, 18027, 16310, 16001, 1103, 9321, 178, 11604, 7235, 6617,
1742, 2165, 2820, 1206, 6588, 22572, 12937, 1811, 2153, 1105,
1147, 12890, 19587, 6463, 1105, 15026, 1482, 119, 102, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0]),
'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])}
( features: Features ) → IterableDatasetDict
Parameters
features (Features) —
New features to cast the dataset to.
The name of the fields in the features must match the current column names.
The type of the data must also be convertible from one type to the other.
For non-trivial conversion, e.g. string <-> ClassLabel you should use map to update the Dataset.
Returns
IterableDatasetDict
A copy of the dataset with casted features.
Cast the dataset to a new set of features. The type casting is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> new_features = ds["train"].features.copy()
>>> new_features['label'] = ClassLabel(names=['bad', 'good'])
>>> new_features['text'] = Value('large_string')
>>> ds = ds.cast(new_features)
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='large_string', id=None)}
( column: str feature: typing.Union[dict, list, tuple, datasets.features.features.Value, datasets.features.features.ClassLabel, datasets.features.translation.Translation, datasets.features.translation.TranslationVariableLanguages, datasets.features.features.LargeList, datasets.features.features.Sequence, datasets.features.features.Array2D, datasets.features.features.Array3D, datasets.features.features.Array4D, datasets.features.features.Array5D, datasets.features.audio.Audio, datasets.features.image.Image, datasets.features.video.Video] )
Cast column to feature for decoding. The type casting is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None),
'text': Value(dtype='string', id=None)}
>>> ds = ds.cast_column('label', ClassLabel(names=['bad', 'good']))
>>> ds["train"].features
{'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None),
'text': Value(dtype='string', id=None)}
( column_names: typing.Union[str, typing.List[str]] ) → IterableDatasetDict
Parameters
column_names (Union[str, List[str]]) —
Name of the column(s) to remove.
Returns
IterableDatasetDict
A copy of the dataset object without the columns to remove.
Remove one or several column(s) in the dataset and the features associated to them. The removal is done on-the-fly on the examples when iterating over the dataset. The removal is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> ds = ds.remove_columns("label")
>>> next(iter(ds["train"]))
{'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
( original_column_name: str new_column_name: str ) → IterableDatasetDict
Rename a column in the dataset, and move the features associated to the original column under the new column name. The renaming is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> ds = ds.rename_column("text", "movie_review")
>>> next(iter(ds["train"]))
{'label': 1,
'movie_review': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
( column_mapping: typing.Dict[str, str] ) → IterableDatasetDict
Parameters
column_mapping (Dict[str, str]) —
A mapping of columns to rename to their new names.
Returns
IterableDatasetDict
A copy of the dataset with renamed columns.
Rename several columns in the dataset, and move the features associated to the original columns under the new column names. The renaming is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> ds = ds.rename_columns({"text": "movie_review", "label": "rating"})
>>> next(iter(ds["train"]))
{'movie_review': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .',
'rating': 1}
( column_names: typing.Union[str, typing.List[str]] ) → IterableDatasetDict
Parameters
column_names (str or List[str]) — Name of the column(s) to select.
Returns
A copy of the dataset object with only selected columns.
Select one or several column(s) in the dataset and the features associated to them. The selection is done on-the-fly on the examples when iterating over the dataset. The selection is applied to all the datasets of the dataset dictionary.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", streaming=True)
>>> ds = ds.select_columns("text")
>>> next(iter(ds["train"]))
{'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}
A special dictionary that defines the internal structure of a dataset.
Instantiated with a dictionary of type dict[str, FieldType], where keys are the desired column names, and values are the type of that column.
FieldType can be one of the following:
Value feature specifies a single data type value, e.g. int64 or string.
ClassLabel feature specifies a predefined set of classes which can have labels associated to them and will be stored as integers in the dataset.
Python dict specifies a composite feature containing a mapping of sub-fields to sub-features. It’s possible to have nested fields of nested fields in an arbitrary manner.
Python list, LargeList or Sequence specifies a composite feature containing a sequence of sub-features, all of the same feature type.
A Sequence with an internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatibility layer with the TensorFlow Datasets library, but it may be unwanted in some cases. If you don’t want this behavior, you can use a Python list or a LargeList instead of the Sequence.
Array2D, Array3D, Array4D or Array5D feature for multidimensional arrays.
Audio feature to store the absolute path to an audio file or a dictionary with the relative path to an audio file (“path” key) and its bytes content (“bytes” key). This feature extracts the audio data.
Image feature to store the absolute path to an image file, an np.ndarray object, a PIL.Image.Image object or a dictionary with the relative path to an image file (“path” key) and its bytes content (“bytes” key). This feature extracts the image data.
Translation or TranslationVariableLanguages feature specific to Machine Translation.
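As an illustrative sketch (not drawn from the reference itself), a Features object combining several of these field types can be declared as follows; the column names and label names are arbitrary:
>>> from datasets import Features, Value, ClassLabel, Sequence
>>> features = Features({
...     "text": Value("string"),
...     "label": ClassLabel(names=["neg", "pos"]),
...     "tokens": Sequence(Value("string")),
...     "meta": {"source": Value("string"), "score": Value("float32")},
... })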
Make a deep copy of Features.
( batch: dict token_per_repo_id: typing.Optional[typing.Dict[str, typing.Union[str, bool, NoneType]]] = None )
Decode batch with custom feature decoding.
( column: list column_name: str )
Decode column with custom feature decoding.
( example: dict token_per_repo_id: typing.Optional[typing.Dict[str, typing.Union[str, bool, NoneType]]] = None )
Decode example with custom feature decoding.
( batch )
Encode batch into a format for Arrow.
( column column_name: str )
Encode column into a format for Arrow.
Encode example into a format for Arrow.
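A minimal sketch of what encoding does (the example values are made up): string labels of a ClassLabel column are encoded to their integer ids.
>>> from datasets import Features, Value, ClassLabel
>>> features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
>>> features.encode_example({"text": "great movie", "label": "pos"})
{'text': 'great movie', 'label': 1}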
Flatten the features. Every dictionary column is removed and is replaced by all the subfields it contains. The new fields are named by concatenating the name of the original column and the subfield name like this: <original>.<subfield>.
If a column contains nested dictionaries, then all the lower-level subfield names are also concatenated to form new columns: <original>.<subfield>.<subsubfield>, etc.
Example:
>>> from datasets import load_dataset
>>> ds = load_dataset("squad", split="train")
>>> ds.features.flatten()
{'answers.answer_start': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
'answers.text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'context': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}
( pa_schema: Schema )
Construct Features from an Arrow Schema. It also checks the schema metadata for Hugging Face Datasets features. Non-nullable fields are not supported and are set to nullable.
Also, pa.dictionary is not supported and its underlying type is used instead; therefore, datasets converts DictionaryArray objects to their actual values.
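For illustration, a minimal sketch of building Features from a plain pyarrow schema (the column names are arbitrary):
>>> import pyarrow as pa
>>> from datasets import Features
>>> schema = pa.schema([("id", pa.int64()), ("text", pa.string())])
>>> Features.from_arrow_schema(schema)
{'id': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}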
( dic ) → Features
Construct Features from dict.
Regenerate the nested feature object from a deserialized dict. We use the _type key to infer the dataclass name of the feature FieldType.
It allows for a convenient constructor syntax to define features from deserialized JSON dictionaries. This function is used in particular when deserializing a DatasetInfo that was dumped to a JSON object. This acts as an analogue to Features.from_arrow_schema and handles the recursive field-by-field instantiation, but doesn’t require any mapping to/from pyarrow, except for the fact that it takes advantage of the mapping of pyarrow primitive dtypes that Value automatically performs.
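A minimal sketch of the constructor syntax this enables (the column name is arbitrary):
>>> from datasets import Features
>>> Features.from_dict({"text": {"dtype": "string", "id": None, "_type": "Value"}})
{'text': Value(dtype='string', id=None)}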
( other: Features )
Reorder Features fields to match the field order of other Features.
The order of the fields is important since it matters for the underlying Arrow data. Re-ordering the fields makes the underlying Arrow data types match.
Example:
>>> from datasets import Features, Sequence, Value
>>> # let's say we have two features with a different order of nested fields (for a and b for example)
>>> f1 = Features({"root": Sequence({"a": Value("string"), "b": Value("string")})})
>>> f2 = Features({"root": {"b": Sequence(Value("string")), "a": Sequence(Value("string"))}})
>>> assert f1.type != f2.type
>>> # re-ordering keeps the base structure (here Sequence is defined at the root level), but makes the fields order match
>>> f1.reorder_fields_as(f2)
{'root': Sequence(feature={'b': Value(dtype='string', id=None), 'a': Value(dtype='string', id=None)}, length=-1, id=None)}
>>> assert f1.reorder_fields_as(f2).type == f2.type
( dtype: str id: typing.Optional[str] = None )
Scalar feature value of a particular data type.
The possible dtypes of Value are as follows:
null
bool
int8
int16
int32
int64
uint8
uint16
uint32
uint64
float16
float32 (alias float)
float64 (alias double)
time32[(s|ms)]
time64[(us|ns)]
timestamp[(s|ms|us|ns)]
timestamp[(s|ms|us|ns), tz=(tzstring)]
date32
date64
duration[(s|ms|us|ns)]
decimal128(precision, scale)
decimal256(precision, scale)
binary
large_binary
string
large_string
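For example (a minimal sketch with an arbitrary column name):
>>> from datasets import Features, Value
>>> features = Features({'stars': Value(dtype='int32')})
>>> features
{'stars': Value(dtype='int32', id=None)}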
( num_classes: dataclasses.InitVar[typing.Optional[int]] = None names: typing.List[str] = None names_file: dataclasses.InitVar[typing.Optional[str]] = None id: typing.Optional[str] = None )
Parameters
num_classes (int, optional) — Number of classes; creates labels 0 to (num_classes - 1).
names (list of str, optional) — List of label strings.
names_file (str, optional) — Path to a file containing the list of labels.
Feature type for integer class labels.
There are 3 ways to define a ClassLabel, which correspond to the 3 arguments:
num_classes: Create 0 to (num_classes-1) labels.
names: List of label strings.
names_file: File containing the list of labels.
Under the hood the labels are stored as integers. You can use negative integers to represent unknown/missing labels.
Example:
>>> from datasets import Features, ClassLabel
>>> features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
>>> features
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
( storage: typing.Union[pyarrow.lib.StringArray, pyarrow.lib.IntegerArray] ) → pa.Int64Array
Cast an Arrow array to the ClassLabel arrow storage type. The Arrow types that can be converted to the ClassLabel pyarrow storage type are:
pa.string()
pa.int()
Conversion integer => class name string. Regarding unknown/missing labels: passing negative integers raises ValueError.
Conversion class name string => integer.
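These conversions are exposed as the ClassLabel.int2str and ClassLabel.str2int methods; a minimal sketch:
>>> from datasets import ClassLabel
>>> label = ClassLabel(names=['bad', 'good'])
>>> label.str2int('good')
1
>>> label.int2str(1)
'good'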
( feature: typing.Any id: typing.Optional[str] = None )
Feature type for large list data composed of a child feature data type.
It is backed by pyarrow.LargeListType, which is like pyarrow.ListType but with 64-bit rather than 32-bit offsets.
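A minimal sketch, assuming LargeList is importable from the top-level datasets namespace like the other feature types:
>>> from datasets import Features, LargeList, Value
>>> features = Features({"tokens": LargeList(Value("string"))})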
( feature: typing.Any length: int = -1 id: typing.Optional[str] = None )
Construct a list of features from a single type or a dict of types. Mostly here for compatibility with tfds.
Example:
>>> from datasets import Features, Sequence, Value, ClassLabel
>>> features = Features({'post': Sequence(feature={'text': Value(dtype='string'), 'upvotes': Value(dtype='int32'), 'label': ClassLabel(num_classes=2, names=['hot', 'cold'])})})
>>> features
{'post': Sequence(feature={'text': Value(dtype='string', id=None), 'upvotes': Value(dtype='int32', id=None), 'label': ClassLabel(num_classes=2, names=['hot', 'cold'], id=None)}, length=-1, id=None)}
( languages: typing.List[str] id: typing.Optional[str] = None )
Feature for translations with fixed languages per example. Here for compatibility with tfds.
Example:
>>> # At construction time:
>>> datasets.features.Translation(languages=['en', 'fr', 'de'])
>>> # During data generation:
>>> yield {
... 'en': 'the cat',
... 'fr': 'le chat',
... 'de': 'die katze'
... }
Flatten the Translation feature into a dictionary.
( languages: typing.Optional[typing.List] = None num_languages: typing.Optional[int] = None id: typing.Optional[str] = None ) →
language or translation (variable-length 1D tf.Tensor of tf.string)
Parameters
languages (dict) — A dictionary for each example mapping string language codes to one or more string translations. The languages present may vary from example to example.
Returns
language or translation (variable-length 1D tf.Tensor of tf.string): Language codes sorted in ascending order or plain text translations, sorted to align with language codes.
Feature for translations with variable languages per example. Here for compatibility with tfds.
Example:
>>> # At construction time:
>>> datasets.features.TranslationVariableLanguages(languages=['en', 'fr', 'de'])
>>> # During data generation:
>>> yield {
... 'en': 'the cat',
...     'fr': ['le chat', 'la chatte,'],
... 'de': 'die katze'
... }
>>> # Tensor returned :
>>> {
... 'language': ['en', 'de', 'fr', 'fr'],
... 'translation': ['the cat', 'die katze', 'la chatte', 'le chat'],
... }
Flatten the TranslationVariableLanguages feature into a dictionary.
( shape: tuple dtype: str id: typing.Optional[str] = None )
Create a two-dimensional array.
( shape: tuple dtype: str id: typing.Optional[str] = None )
Create a three-dimensional array.
( shape: tuple dtype: str id: typing.Optional[str] = None )
Create a four-dimensional array.
( shape: tuple dtype: str id: typing.Optional[str] = None )
Create a five-dimensional array.
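For example, declaring a fixed-shape two-dimensional column (a minimal sketch; the shape and dtype are arbitrary):
>>> from datasets import Features, Array2D
>>> features = Features({'x': Array2D(shape=(1, 3), dtype='int32')})
>>> features
{'x': Array2D(shape=(1, 3), dtype='int32', id=None)}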
( sampling_rate: typing.Optional[int] = None mono: bool = True decode: bool = True id: typing.Optional[str] = None )
Parameters
sampling_rate (int, optional) — Target sampling rate. If None, the native sampling rate is used.
mono (bool, defaults to True) — Whether to convert the audio signal to mono by averaging samples across channels.
decode (bool, defaults to True) — Whether to decode the audio data. If False, returns the underlying dictionary in the format {"path": audio_path, "bytes": audio_bytes}.
Audio Feature to extract audio data from an audio file.
Input: The Audio feature accepts as input:
A str: Absolute path to the audio file (i.e. random access is allowed).
A dict with the keys:
path: String with relative path of the audio file to the archive file.
bytes: Bytes content of the audio file.
This is useful for archived files with sequential access.
A dict with the keys:
path: String with relative path of the audio file to the archive file.
array: Array containing the audio sample.
sampling_rate: Integer corresponding to the sampling rate of the audio sample.
This is useful for archived files with sequential access.
Example:
>>> from datasets import load_dataset, Audio
>>> ds = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16000))
>>> ds[0]["audio"]
{'array': array([ 2.3443763e-05, 2.1729663e-04, 2.2145823e-04, ...,
3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 16000}
( storage: typing.Union[pyarrow.lib.StringArray, pyarrow.lib.StructArray] ) → pa.StructArray
Cast an Arrow array to the Audio arrow storage type. The Arrow types that can be converted to the Audio pyarrow storage type are:
pa.string() - it must contain the “path” data
pa.binary() - it must contain the audio bytes
pa.struct({"bytes": pa.binary()})
pa.struct({"path": pa.string()})
pa.struct({"bytes": pa.binary(), "path": pa.string()}) - order doesn’t matter
( value: dict token_per_repo_id: typing.Optional[typing.Dict[str, typing.Union[str, bool, NoneType]]] = None ) → dict
Parameters
value (dict) — A dictionary with keys:
path: String with relative audio file path.
bytes: Bytes of the audio file.
token_per_repo_id (dict, optional) — To access and decode audio files from private repositories on the Hub, you can pass a dictionary repo_id (str) -> token (bool or str).
Returns
dict
Decode example audio file into audio data.
( storage: StructArray ) → pa.StructArray
Embed audio files into the Arrow array.
( value: typing.Union[str, bytes, dict] ) → dict
Encode example into a format for Arrow.
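A minimal sketch of the encoded format (the path is hypothetical, and the audio extra, e.g. soundfile, must be installed): passing a plain path stores it without reading the file.
>>> from datasets import Audio
>>> Audio().encode_example("/path/to/audio.wav")  # hypothetical path
{'bytes': None, 'path': '/path/to/audio.wav'}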
If in the decodable state, raise an error, otherwise flatten the feature into a dictionary.
( mode: typing.Optional[str] = None decode: bool = True id: typing.Optional[str] = None )
Image Feature to read image data from an image file.
Input: The Image feature accepts as input:
A str: Absolute path to the image file (i.e. random access is allowed).
A dict with the keys:
path: String with relative path of the image file to the archive file.
bytes: Bytes of the image file.
This is useful for archived files with sequential access.
An np.ndarray: NumPy array representing an image.
A PIL.Image.Image: PIL image object.
Examples:
>>> from datasets import load_dataset, Image
>>> ds = load_dataset("beans", split="train")
>>> ds.features["image"]
Image(decode=True, id=None)
>>> ds[0]["image"]
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x15E52E7F0>
>>> ds = ds.cast_column('image', Image(decode=False))
>>> ds[0]["image"]
{'bytes': None,
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/healthy/healthy_train.85.jpg'}
( storage: typing.Union[pyarrow.lib.StringArray, pyarrow.lib.StructArray, pyarrow.lib.ListArray] ) → pa.StructArray
Cast an Arrow array to the Image arrow storage type. The Arrow types that can be converted to the Image pyarrow storage type are:
pa.string() - it must contain the “path” data
pa.binary() - it must contain the image bytes
pa.struct({"bytes": pa.binary()})
pa.struct({"path": pa.string()})
pa.struct({"bytes": pa.binary(), "path": pa.string()}) - order doesn’t matter
pa.list(*) - it must contain the image array data
( value: dict token_per_repo_id = None )
Parameters
value (str or dict) — A string with the absolute image file path, or a dictionary with keys:
path: String with absolute or relative image file path.
bytes: The bytes of the image file.
token_per_repo_id (dict, optional) — To access and decode image files from private repositories on the Hub, you can pass a dictionary repo_id (str) -> token (bool or str).
Decode example image file into image data.
( storage: StructArray ) → pa.StructArray
Embed image files into the Arrow array.
( value: typing.Union[str, bytes, dict, numpy.ndarray, ForwardRef('PIL.Image.Image')] )
Encode example into a format for Arrow.
If in the decodable state, return the feature itself, otherwise flatten the feature into a dictionary.
( decode: bool = True id: typing.Optional[str] = None )
Experimental. Video Feature to read video data from a video file.
Input: The Video feature accepts as input:
A str: Absolute path to the video file (i.e. random access is allowed).
A dict with the keys:
path: String with relative path of the video file in a dataset repository.
bytes: Bytes of the video file.
This is useful for archived files with sequential access.
A decord.VideoReader: decord video reader object.
Examples:
>>> from datasets import Dataset, Video
>>> ds = Dataset.from_dict({"video":["path/to/Screen Recording.mov"]}).cast_column("video", Video())
>>> ds.features["video"]
Video(decode=True, id=None)
>>> ds[0]["video"]
<decord.video_reader.VideoReader at 0x105525c70>
>>> ds = ds.cast_column('video', Video(decode=False))
>>> ds[0]["video"]
{'bytes': None,
 'path': 'path/to/Screen Recording.mov'}
( storage: typing.Union[pyarrow.lib.StringArray, pyarrow.lib.StructArray, pyarrow.lib.ListArray] ) → pa.StructArray
Cast an Arrow array to the Video arrow storage type. The Arrow types that can be converted to the Video pyarrow storage type are:
pa.string() - it must contain the “path” data
pa.binary() - it must contain the video bytes
pa.struct({"bytes": pa.binary()})
pa.struct({"path": pa.string()})
pa.struct({"bytes": pa.binary(), "path": pa.string()}) - order doesn’t matter
pa.list(*) - it must contain the video array data
( value: dict token_per_repo_id = None )
Parameters
value (str or dict) — A string with the absolute video file path, or a dictionary with keys:
path: String with absolute or relative video file path.
bytes: The bytes of the video file.
token_per_repo_id (dict, optional) — To access and decode video files from private repositories on the Hub, you can pass a dictionary repo_id (str) -> token (bool or str).
Decode example video file into video data.
( value: typing.Union[str, bytes, dict, numpy.ndarray, ForwardRef('VideoReader')] )
Encode example into a format for Arrow.
If in the decodable state, return the feature itself, otherwise flatten the feature into a dictionary.
( fs: AbstractFileSystem )
Checks if fs is a remote filesystem.
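A minimal sketch, assuming the helper is exposed as datasets.filesystems.is_remote_filesystem; a local fsspec filesystem is not considered remote:
>>> import fsspec
>>> from datasets.filesystems import is_remote_filesystem
>>> is_remote_filesystem(fsspec.filesystem("file"))
False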
Hasher that accepts Python objects as inputs.
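A minimal sketch, assuming the class is exposed as datasets.fingerprint.Hasher; Hasher.hash returns a deterministic hexadecimal digest string for a picklable Python object (the exact digest is omitted here):
>>> from datasets.fingerprint import Hasher
>>> digest = Hasher.hash({"a": 1, "b": [1, 2, 3]})
>>> isinstance(digest, str)
True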