( model )
Parameters
model (Model) —
The core algorithm that this Tokenizer should be using.
A Tokenizer works as a pipeline. It processes some raw text as input
and outputs an Encoding.
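As an illustrative sketch of this pipeline (assuming the Hugging Face tokenizers package; the component choices here are one possible configuration, not a requirement):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase
from tokenizers.pre_tokenizers import Whitespace

# Instantiate a Tokenizer from a Model (here, an untrained BPE)
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

# Optionally attach the other pipeline components
tokenizer.normalizer = Lowercase()
tokenizer.pre_tokenizer = Whitespace()
```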
The Model in use by the Tokenizer
The optional Normalizer in use by the Tokenizer
Returns
dict, optional
A dict with the current padding parameters if padding is enabled
Get the current padding parameters.
Cannot be set; use enable_padding() instead.
The optional PreTokenizer in use by the Tokenizer
Returns
dict, optional
A dict with the current truncation parameters if truncation is enabled
Get the currently set truncation parameters.
Cannot be set; use enable_truncation() instead.
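A small sketch of reading these two properties back (assuming the tokenizers package; values are set through the enable_* methods, never assigned directly):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

# Nothing enabled yet: both properties are None
assert tokenizer.padding is None and tokenizer.truncation is None

# Parameters are set through the enable_* methods...
tokenizer.enable_padding(pad_id=0, pad_token="[PAD]")
tokenizer.enable_truncation(max_length=128)

# ...and read back as plain dicts
print(tokenizer.padding["pad_token"])      # "[PAD]"
print(tokenizer.truncation["max_length"])  # 128
```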
( tokens ) → int
Parameters
tokens (List of AddedToken or str) —
The list of special tokens we want to add to the vocabulary. Each token can either
be a string or an instance of AddedToken for more customization.
Returns
int
The number of tokens that were created in the vocabulary
Add the given special tokens to the Tokenizer.
If these tokens are already part of the vocabulary, it just lets the Tokenizer know about them. If they don’t exist, the Tokenizer creates them, giving them a new id.
These special tokens will never be processed by the model (i.e. they won’t be split into multiple tokens), and they can be removed from the output when decoding.
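A minimal sketch (assuming the tokenizers package; the tiny in-memory training corpus is purely for the demo):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)
tokenizer.train_from_iterator(["hello world"], trainer=trainer)

# Register two new special tokens; returns how many were actually created
created = tokenizer.add_special_tokens(["[CLS]", "[SEP]"])
print(created)  # 2

# Special tokens are never split by the model
enc = tokenizer.encode("[CLS] hello")
print(enc.tokens)  # "[CLS]" survives as a single token
```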
( tokens ) → int
Parameters
tokens (List of AddedToken or str) —
The list of tokens we want to add to the vocabulary. Each token can be either a
string or an instance of AddedToken for more customization.
Returns
int
The number of tokens that were created in the vocabulary
Add the given tokens to the vocabulary.
The given tokens are added only if they don’t already exist in the vocabulary. Each such token is then assigned a new id.
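To sketch the "only if new" behaviour (same assumptions as above: the tokenizers package and a toy corpus):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)
tokenizer.train_from_iterator(["hello world"], trainer=trainer)

# "blueberry" is not in the tiny vocabulary yet, so one token is created
created = tokenizer.add_tokens(["blueberry"])
print(created)  # 1

# Adding it again creates nothing new
print(tokenizer.add_tokens(["blueberry"]))  # 0
```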
Decode the given list of ids back to a string.
This is used to decode anything coming back from a Language Model.
Decode a batch of ids back to their corresponding strings.
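A round-trip sketch of decode and its batch variant (assuming the tokenizers package; corpus is illustrative):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)
tokenizer.train_from_iterator(["hello world", "hello there"], trainer=trainer)

# encode, then decode the ids back (special tokens are skipped by default)
enc = tokenizer.encode("hello world")
text = tokenizer.decode(enc.ids)
print(text)

# decode_batch handles several id lists at once
encs = tokenizer.encode_batch(["hello", "world"])
texts = tokenizer.decode_batch([e.ids for e in encs])
print(texts)
```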
( direction = 'right' pad_id = 0 pad_type_id = 0 pad_token = '[PAD]' length = None pad_to_multiple_of = None )
Parameters
direction (str, optional, defaults to right) —
The direction in which to pad. Can be either right or left
pad_id (int, defaults to 0) —
The id to be used when padding
pad_type_id (int, defaults to 0) —
The type id to be used when padding
pad_token (str, defaults to [PAD]) —
The pad token to be used when padding
length (int, optional) —
If specified, the length at which to pad. If not specified, we pad using the size of
the longest sequence in a batch.
pad_to_multiple_of (int, optional) —
If specified, the padding length should always snap to the next multiple of the
given value. For example, if we were going to pad with a length of 250 but
pad_to_multiple_of=8, then we will pad to 256.
Enable the padding
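A sketch of padding a batch to its longest member (assuming the tokenizers package; the trained-on-the-fly tokenizer is only for the demo):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]", "[PAD]"], show_progress=False)
tokenizer.train_from_iterator(["hello world", "hello there"], trainer=trainer)

# Use the id the vocabulary actually assigned to "[PAD]"
pad_id = tokenizer.token_to_id("[PAD]")
tokenizer.enable_padding(pad_id=pad_id, pad_token="[PAD]")

# No length given, so the batch is padded to its longest sequence
encs = tokenizer.encode_batch(["hello", "hello world hello world"])
print([len(e.ids) for e in encs])  # equal lengths
```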
( max_length stride = 0 strategy = 'longest_first' direction = 'right' )
Parameters
max_length (int) —
The max length at which to truncate
stride (int, optional) —
The length of the previous first sequence to be included in the overflowing
sequence
strategy (str, optional, defaults to longest_first) —
The strategy used for truncation. Can be one of longest_first, only_first or
only_second.
direction (str, defaults to right) —
Truncate direction
Enable truncation
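A minimal truncation sketch (assuming the tokenizers package; corpus and max_length are illustrative):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)
tokenizer.train_from_iterator(["hello world", "hello there"], trainer=trainer)

# Truncate every encoding to at most 2 tokens
tokenizer.enable_truncation(max_length=2)
enc = tokenizer.encode("hello world hello world")
print(len(enc.ids))  # at most 2
```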
( sequence pair = None is_pretokenized = False add_special_tokens = True ) → Encoding
Parameters
sequence (~tokenizers.InputSequence) —
The main input sequence we want to encode. This sequence can be either raw
text or pre-tokenized, according to the is_pretokenized argument:
If is_pretokenized=False: TextInputSequence
If is_pretokenized=True: PreTokenizedInputSequence
pair (~tokenizers.InputSequence, optional) —
An optional input sequence. The expected format is the same as for sequence.
is_pretokenized (bool, defaults to False) —
Whether the input is already pre-tokenized
add_special_tokens (bool, defaults to True) —
Whether to add the special tokens
Returns
Encoding
The encoded result
Encode the given sequence and pair. This method can process raw text sequences as well as already pre-tokenized sequences.
Example:
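A sketch covering the three input forms (assuming the tokenizers package; the tiny trained tokenizer is only for illustration):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)
tokenizer.train_from_iterator(["hello world", "hello there"], trainer=trainer)

# Raw text
enc = tokenizer.encode("hello world")
print(enc.tokens, enc.ids)

# A pair of sequences
pair_enc = tokenizer.encode("hello", "world")

# Already pre-tokenized input
pretok_enc = tokenizer.encode(["hello", "world"], is_pretokenized=True)
```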
( input is_pretokenized = False add_special_tokens = True ) → A List of ~tokenizers.Encoding
Parameters
input (List/Tuple of ~tokenizers.EncodeInput) —
A list of single sequences or pair sequences to encode. Each sequence
can be either raw text or pre-tokenized, according to the is_pretokenized argument:
If is_pretokenized=False: TextEncodeInput
If is_pretokenized=True: PreTokenizedEncodeInput
is_pretokenized (bool, defaults to False) —
Whether the input is already pre-tokenized
add_special_tokens (bool, defaults to True) —
Whether to add the special tokens
Returns
A List of ~tokenizers.Encoding
The encoded batch
Encode the given batch of inputs. This method accepts both raw text sequences and already pre-tokenized sequences.
Example:
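A batch sketch mixing single sequences and pairs (assuming the tokenizers package; setup is illustrative):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)
tokenizer.train_from_iterator(["hello world", "hello there"], trainer=trainer)

batch = tokenizer.encode_batch([
    "hello world",       # a single sequence
    ("hello", "world"),  # a pair of sequences
])
print(len(batch))  # 2
```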
( input is_pretokenized = False add_special_tokens = True ) → A List of ~tokenizers.Encoding
Parameters
input (List/Tuple of ~tokenizers.EncodeInput) —
A list of single sequences or pair sequences to encode. Each sequence
can be either raw text or pre-tokenized, according to the is_pretokenized argument:
If is_pretokenized=False: TextEncodeInput
If is_pretokenized=True: PreTokenizedEncodeInput
is_pretokenized (bool, defaults to False) —
Whether the input is already pre-tokenized
add_special_tokens (bool, defaults to True) —
Whether to add the special tokens
Returns
A List of ~tokenizers.Encoding
The encoded batch
Encode the given batch of inputs. This method is faster than encode_batch because it doesn’t keep track of offsets; they will all be zero.
Example:
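A sketch of the fast batch path. Note that encode_batch_fast only exists in recent releases of the tokenizers package, so this falls back to encode_batch when it is absent:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)
tokenizer.train_from_iterator(["hello world", "hello there"], trainer=trainer)

# Skips offset tracking; fall back to encode_batch on older versions
encode_fast = getattr(tokenizer, "encode_batch_fast", tokenizer.encode_batch)
batch = encode_fast(["hello world", "hello there"])
print([e.ids for e in batch])
```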
( identifier revision = 'main' auth_token = None ) → Tokenizer
Parameters
identifier (str) —
The identifier of a Model on the Hugging Face Hub that contains
a tokenizer.json file
revision (str, defaults to main) —
A branch or commit id
auth_token (str, optional, defaults to None) —
An optional auth token used to access private repositories on the
Hugging Face Hub
Returns
Tokenizer
The new tokenizer
Instantiate a new Tokenizer from an existing file on the Hugging Face Hub.
Get the underlying vocabulary
Return the number of special tokens that would be added for single/pair sentences.
is_pair — Boolean indicating whether the input would be a single sentence or a pair.
( encoding pair = None add_special_tokens = True ) → Encoding
Apply all the post-processing steps to the given encodings.
The various steps are:
1. Truncate according to the set truncation params (provided with enable_truncation())
2. Apply the PostProcessor
3. Pad according to the set padding params (provided with enable_padding())
Save the Tokenizer to the file at the given path.
Gets a serialized string representing this Tokenizer.
Train the Tokenizer using the given files.
Reads the files line by line, while keeping all the whitespace, even new lines.
If you want to train from data stored in-memory, you can check
train_from_iterator()
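A sketch of training from files on disk (assuming the tokenizers package; the temporary file stands in for a real corpus):

```python
import os
import tempfile

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Write a tiny stand-in corpus to disk
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello world\nhello there\n")
    path = f.name

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)

# train() takes a list of file paths
tokenizer.train([path], trainer=trainer)
os.unlink(path)

print(tokenizer.get_vocab_size())
```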
( iterator trainer = None length = None )
Parameters
iterator (Iterator) —
Any iterator over strings or list of strings
trainer (~tokenizers.trainers.Trainer, optional) —
An optional trainer that should be used to train our Model
length (int, optional) —
The total number of sequences in the iterator. This is used to
provide meaningful progress tracking.
Train the Tokenizer using the provided iterator.
You can provide anything that is a Python Iterator, e.g. a list of sequences
(List[str]), or a generator that yields str or List[str].
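A sketch of training from an in-memory generator (assuming the tokenizers package; the two-line corpus is illustrative, and length is optional, used only for progress tracking):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

def corpus():
    # Any generator yielding str (or List[str]) works
    yield "hello world"
    yield "hello there"

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], show_progress=False)

tokenizer.train_from_iterator(corpus(), trainer=trainer, length=2)
print(tokenizer.get_vocab_size())
```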