models

Definitions of all models available in Transformers.js.

Example: Load and run an AutoModel.

import { AutoModel, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
let model = await AutoModel.from_pretrained('Xenova/bert-base-uncased');

let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs);
// Tensor {
//     data: Float32Array(183132) [-7.117443084716797, -7.107812881469727, -7.092104911804199, ...]
//     dims: (3) [1, 6, 30522],
//     type: "float32",
//     size: 183132,
// }

We also provide other AutoModels (listed below), which you can use in the same way as the Python library. For example:

Example: Load and run an AutoModelForSeq2SeqLM.

import { AutoModelForSeq2SeqLM, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/t5-small');
let model = await AutoModelForSeq2SeqLM.from_pretrained('Xenova/t5-small');

let { input_ids } = await tokenizer('translate English to German: I love transformers!');
let outputs = await model.generate(input_ids);
let decoded = tokenizer.decode(outputs[0], { skip_special_tokens: true });
// 'Ich liebe Transformatoren!'

models.PreTrainedModel ⇐ Callable

A base class for pre-trained models that provides the model configuration and an ONNX session.

Kind: static class of models
Extends: Callable


new PreTrainedModel(config, session)

Creates a new instance of the PreTrainedModel class.

  • config (Object): The model configuration.
  • session (any): The ONNX session for the model.


preTrainedModel.dispose() ⇒ Promise.<Array<unknown>>

Disposes of all the ONNX sessions that were created during inference.

Kind: instance method of PreTrainedModel
Returns: Promise.<Array<unknown>> - An array of promises, one for each ONNX session that is being disposed.


preTrainedModel._call(model_inputs) ⇒ Promise.<Object>

Runs the model with the provided inputs.

Kind: instance method of PreTrainedModel
Returns: Promise.<Object> - Object containing output tensors.

  • model_inputs (Object): Object containing input tensors.


preTrainedModel.forward(model_inputs) ⇒ Promise.<Object>

Forward method for a pretrained model. If not overridden by a subclass, the correct forward method will be chosen based on the model type.

Kind: instance method of PreTrainedModel
Returns: Promise.<Object> - The output data from the model in the format specified in the ONNX model.
Throws:

  • model_inputs (Object): The input data to the model in the format specified in the ONNX model.


preTrainedModel._get_logits_processor(generation_config, input_ids_seq_length) ⇒ LogitsProcessorList

Kind: instance method of PreTrainedModel

  • generation_config (GenerationConfig)
  • input_ids_seq_length (number): The starting sequence length for the input ids.


preTrainedModel._get_generation_config(generation_config) ⇒ GenerationConfig

This function merges multiple generation configs together to form a final generation config to be used by the model for text generation. It first creates an empty GenerationConfig object, then it applies the model’s own generation_config property to it. Finally, if a generation_config object was passed in the arguments, it overwrites the corresponding properties in the final config with those of the passed config object.

Kind: instance method of PreTrainedModel
Returns: GenerationConfig - The final generation config object to be used by the model for text generation.

  • generation_config (GenerationConfig): A GenerationConfig object containing generation parameters.
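
In practice, this merging is what lets you override individual generation parameters per call. A minimal sketch, reusing the t5-small checkpoint from the example above; the values passed to generate() take precedence over the model’s own generation_config:

import { AutoModelForSeq2SeqLM, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/t5-small');
let model = await AutoModelForSeq2SeqLM.from_pretrained('Xenova/t5-small');

let { input_ids } = await tokenizer('translate English to German: I love transformers!');

// Properties passed here override those from the model's generation_config for this call only.
let outputs = await model.generate(input_ids, { max_new_tokens: 40, num_beams: 2 });
let decoded = tokenizer.decode(outputs[0], { skip_special_tokens: true });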


preTrainedModel.groupBeams(beams) ⇒ Array

Groups an array of beam objects by their ids.

Kind: instance method of PreTrainedModel
Returns: Array - An array of arrays, where each inner array contains beam objects with the same id.

  • beams (Array): The array of beam objects to group.


preTrainedModel.addPastKeyValues(decoderFeeds, pastKeyValues, [hasDecoder])

Adds past key values to the decoder feeds object. If pastKeyValues is null, creates new tensors for past key values.

Kind: instance method of PreTrainedModel

  • decoderFeeds (Object): The decoder feeds object to add past key values to.
  • pastKeyValues (Object): An object containing past key values.
  • [hasDecoder] (boolean, default: false): Whether the model has a decoder.


PreTrainedModel.from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedModel>

Instantiate one of the model classes of the library from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path, if possible).

Kind: static method of PreTrainedModel
Returns: Promise.<PreTrainedModel> - A new instance of the PreTrainedModel class.

  • pretrained_model_name_or_path (string): The name or path of the pretrained model. Can be either:
    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    • A path to a directory containing model weights, e.g., ./my_model_directory/.
  • options (PretrainedOptions): Additional options for loading the model.
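
For example, a hedged sketch of loading a model with a couple of common options (quantized and progress_callback are among the options described by PretrainedOptions):

import { AutoModel } from '@xenova/transformers';

// Load the unquantized weights and log download progress.
let model = await AutoModel.from_pretrained('Xenova/bert-base-uncased', {
    quantized: false,
    progress_callback: (progress) => console.log(progress),
});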


models.BaseModelOutput

Base class for model’s outputs, with potential hidden states and attentions.

Kind: static class of models


new BaseModelOutput(output)

  • output (Object): The output of the model.
  • output.last_hidden_state (Tensor): Sequence of hidden-states at the output of the last layer of the model.
  • [output.hidden_states] (Tensor): Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
  • [output.attentions] (Tensor): Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.


models.BertForMaskedLM ⇐ BertPreTrainedModel

BertForMaskedLM is a class representing a BERT model for masked language modeling.

Kind: static class of models
Extends: BertPreTrainedModel


bertForMaskedLM._call(model_inputs) ⇒ Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of BertForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

  • model_inputs (Object): The inputs to the model.
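
Like the other *ForMaskedLM classes, it is typically loaded through AutoModelForMaskedLM. A minimal sketch:

import { AutoModelForMaskedLM, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
let model = await AutoModelForMaskedLM.from_pretrained('Xenova/bert-base-uncased');

let inputs = await tokenizer('The goal of life is [MASK].');
let { logits } = await model(inputs);
// `logits` has dims [batch_size, sequence_length, vocab_size]; the scores at the
// position of the [MASK] token rank the candidate fill-ins.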


models.BertForSequenceClassification ⇐ BertPreTrainedModel

BertForSequenceClassification is a class representing a BERT model for sequence classification.

Kind: static class of models
Extends: BertPreTrainedModel


bertForSequenceClassification._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of BertForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

  • model_inputs (Object): The inputs to the model.


models.BertForTokenClassification ⇐ BertPreTrainedModel

BertForTokenClassification is a class representing a BERT model for token classification.

Kind: static class of models
Extends: BertPreTrainedModel


bertForTokenClassification._call(model_inputs) ⇒ Promise.<TokenClassifierOutput>

Calls the model on new inputs.

Kind: instance method of BertForTokenClassification
Returns: Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification.

  • model_inputs (Object): The inputs to the model.


models.BertForQuestionAnswering ⇐ BertPreTrainedModel

BertForQuestionAnswering is a class representing a BERT model for question answering.

Kind: static class of models
Extends: BertPreTrainedModel


bertForQuestionAnswering._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of BertForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

  • model_inputs (Object): The inputs to the model.


models.DebertaModel

The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.

Kind: static class of models


models.DebertaForMaskedLM

DeBERTa Model with a language modeling head on top.

Kind: static class of models


debertaForMaskedLM._call(model_inputs)Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of DebertaForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DebertaForSequenceClassification

DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output)

Kind: static class of models


debertaForSequenceClassification._call(model_inputs)Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of DebertaForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DebertaForTokenClassification

DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

Kind: static class of models


debertaForTokenClassification._call(model_inputs)Promise.<TokenClassifierOutput>

Calls the model on new inputs.

Kind: instance method of DebertaForTokenClassification
Returns: Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DebertaForQuestionAnswering

DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

Kind: static class of models


debertaForQuestionAnswering._call(model_inputs)Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of DebertaForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DebertaV2Model

The bare DeBERTa-V2 Model transformer outputting raw hidden-states without any specific head on top.

Kind: static class of models


models.DebertaV2ForMaskedLM

DeBERTa-V2 Model with a language modeling head on top.

Kind: static class of models


debertaV2ForMaskedLM._call(model_inputs)Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of DebertaV2ForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DebertaV2ForSequenceClassification

DeBERTa-V2 Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output)

Kind: static class of models


debertaV2ForSequenceClassification._call(model_inputs)Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of DebertaV2ForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DebertaV2ForTokenClassification

DeBERTa-V2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

Kind: static class of models


debertaV2ForTokenClassification._call(model_inputs)Promise.<TokenClassifierOutput>

Calls the model on new inputs.

Kind: instance method of DebertaV2ForTokenClassification
Returns: Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DebertaV2ForQuestionAnswering

DeBERTa-V2 Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

Kind: static class of models


debertaV2ForQuestionAnswering._call(model_inputs)Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of DebertaV2ForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DistilBertForSequenceClassification ⇐ DistilBertPreTrainedModel

DistilBertForSequenceClassification is a class representing a DistilBERT model for sequence classification.

Kind: static class of models
Extends: DistilBertPreTrainedModel


distilBertForSequenceClassification._call(model_inputs) ⇒ Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of DistilBertForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

  • model_inputs (Object): The inputs to the model.
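
A hedged sketch of running a fine-tuned sentiment checkpoint through AutoModelForSequenceClassification (the checkpoint name is an assumption; any compatible classification model works the same way):

import { AutoModelForSequenceClassification, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/distilbert-base-uncased-finetuned-sst-2-english');
let model = await AutoModelForSequenceClassification.from_pretrained('Xenova/distilbert-base-uncased-finetuned-sst-2-english');

let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs);
// `logits` has dims [batch_size, num_labels]; applying softmax gives class probabilities.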


models.DistilBertForTokenClassification ⇐ DistilBertPreTrainedModel

DistilBertForTokenClassification is a class representing a DistilBERT model for token classification.

Kind: static class of models
Extends: DistilBertPreTrainedModel


distilBertForTokenClassification._call(model_inputs)Promise.<TokenClassifierOutput>

Calls the model on new inputs.

Kind: instance method of DistilBertForTokenClassification
Returns: Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.DistilBertForQuestionAnswering ⇐ DistilBertPreTrainedModel

DistilBertForQuestionAnswering is a class representing a DistilBERT model for question answering.

Kind: static class of models
Extends: DistilBertPreTrainedModel


distilBertForQuestionAnswering._call(model_inputs) ⇒ Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of DistilBertForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

  • model_inputs (Object): The inputs to the model.
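
A hedged sketch of extractive question answering with AutoModelForQuestionAnswering (the checkpoint name and the text_pair option reflect typical usage and are assumptions here):

import { AutoModelForQuestionAnswering, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/distilbert-base-cased-distilled-squad');
let model = await AutoModelForQuestionAnswering.from_pretrained('Xenova/distilbert-base-cased-distilled-squad');

// Encode the question together with its context.
let inputs = await tokenizer('Who was Jim Henson?', { text_pair: 'Jim Henson was a nice puppet.' });
let { start_logits, end_logits } = await model(inputs);
// The highest-scoring start and end positions delimit the answer span within the context.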


models.DistilBertForMaskedLM ⇐ DistilBertPreTrainedModel

DistilBertForMaskedLM is a class representing a DistilBERT model for masked language modeling.

Kind: static class of models
Extends: DistilBertPreTrainedModel


distilBertForMaskedLM._call(model_inputs) ⇒ Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of DistilBertForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

  • model_inputs (Object): The inputs to the model.


models.MobileBertForMaskedLM ⇐ MobileBertPreTrainedModel

MobileBertForMaskedLM is a class representing a MobileBERT model for masked language modeling.

Kind: static class of models
Extends: MobileBertPreTrainedModel


mobileBertForMaskedLM._call(model_inputs)Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of MobileBertForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.MobileBertForSequenceClassification ⇐ MobileBertPreTrainedModel

Kind: static class of models
Extends: MobileBertPreTrainedModel


mobileBertForSequenceClassification._call(model_inputs)Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of MobileBertForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.MobileBertForQuestionAnswering ⇐ MobileBertPreTrainedModel

Kind: static class of models
Extends: MobileBertPreTrainedModel


mobileBertForQuestionAnswering._call(model_inputs)Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of MobileBertForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.MPNetModel ⇐ MPNetPreTrainedModel

The bare MPNet Model transformer outputting raw hidden-states without any specific head on top.

Kind: static class of models
Extends: MPNetPreTrainedModel
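
The bare model is commonly used to compute sentence embeddings, for example through the feature-extraction pipeline. A hedged sketch (the checkpoint name is an assumption; any MPNet-based embedding model works the same way):

import { pipeline } from '@xenova/transformers';

let extractor = await pipeline('feature-extraction', 'Xenova/all-mpnet-base-v2');
let embeddings = await extractor(['Hello world.', 'How are you?'], { pooling: 'mean', normalize: true });
// Tensor of dims [2, 768]: one L2-normalized embedding per input sentence.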


models.MPNetForMaskedLM ⇐ MPNetPreTrainedModel

MPNetForMaskedLM is a class representing an MPNet model for masked language modeling.

Kind: static class of models
Extends: MPNetPreTrainedModel


mpNetForMaskedLM._call(model_inputs)Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of MPNetForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.MPNetForSequenceClassification ⇐ MPNetPreTrainedModel

MPNetForSequenceClassification is a class representing an MPNet model for sequence classification.

Kind: static class of models
Extends: MPNetPreTrainedModel


mpNetForSequenceClassification._call(model_inputs)Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of MPNetForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.MPNetForTokenClassification ⇐ MPNetPreTrainedModel

MPNetForTokenClassification is a class representing an MPNet model for token classification.

Kind: static class of models
Extends: MPNetPreTrainedModel


mpNetForTokenClassification._call(model_inputs)Promise.<TokenClassifierOutput>

Calls the model on new inputs.

Kind: instance method of MPNetForTokenClassification
Returns: Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.MPNetForQuestionAnswering ⇐ MPNetPreTrainedModel

MPNetForQuestionAnswering is a class representing an MPNet model for question answering.

Kind: static class of models
Extends: MPNetPreTrainedModel


mpNetForQuestionAnswering._call(model_inputs)Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of MPNetForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.T5ForConditionalGeneration ⇐ T5PreTrainedModel

T5ForConditionalGeneration is a class representing a T5 model for conditional generation.

Kind: static class of models
Extends: T5PreTrainedModel


new T5ForConditionalGeneration(config, session, decoder_merged_session, generation_config)

Creates a new instance of the T5ForConditionalGeneration class.

  • config (Object): The model configuration.
  • session (any): The ONNX session for the encoder.
  • decoder_merged_session (any): The ONNX session for the decoder.
  • generation_config (GenerationConfig): The generation configuration.


t5ForConditionalGeneration.getStartBeams(inputs, numOutputTokens)Array

Generates the start beams for a given set of inputs and output length.

Kind: instance method of T5ForConditionalGeneration
Returns: Array - The start beams.

ParamTypeDescription
inputsArray.<Array<number>>

The input token IDs.

numOutputTokensnumber

The desired output length.


t5ForConditionalGeneration.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of T5ForConditionalGeneration
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


t5ForConditionalGeneration.updateBeam(beam, newTokenId)

Updates the given beam with a new token ID.

Kind: instance method of T5ForConditionalGeneration

ParamTypeDescription
beamany

The current beam.

newTokenIdnumber

The new token ID to add to the output sequence.


t5ForConditionalGeneration.forward(model_inputs)Promise.<Object>

Runs the forward pass of the model for a given set of inputs.

Kind: instance method of T5ForConditionalGeneration
Returns: Promise.<Object> - The model output.

ParamTypeDescription
model_inputsObject

The model inputs.


models.MT5ForConditionalGeneration ⇐ MT5PreTrainedModel

A class representing a conditional sequence-to-sequence model based on the MT5 architecture.

Kind: static class of models
Extends: MT5PreTrainedModel


new MT5ForConditionalGeneration(config, session, decoder_merged_session, generation_config)

Creates a new instance of the MT5ForConditionalGeneration class.

  • config (any): The model configuration.
  • session (any): The ONNX session containing the encoder weights.
  • decoder_merged_session (any): The ONNX session containing the merged decoder weights.
  • generation_config (GenerationConfig): The generation configuration.


mT5ForConditionalGeneration.getStartBeams(inputs, numOutputTokens, ...args)Array.<any>

Generates the start beams for the given input tokens and output sequence length.

Kind: instance method of MT5ForConditionalGeneration
Returns: Array.<any> - An array of Beam objects representing the start beams.

ParamTypeDescription
inputsArray.<any>

The input sequence.

numOutputTokensnumber

The desired length of the output sequence.

...args*

Additional arguments to pass to the seq2seqStartBeams function.


mT5ForConditionalGeneration.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of MT5ForConditionalGeneration
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


mT5ForConditionalGeneration.updateBeam(beam, newTokenId)

Updates the given beam with the new predicted token.

Kind: instance method of MT5ForConditionalGeneration

ParamTypeDescription
beamany

The beam to update.

newTokenIdnumber

The index of the predicted token.


mT5ForConditionalGeneration.forward(model_inputs)Promise.<any>

Runs the forward pass of the model on the given inputs.

Kind: instance method of MT5ForConditionalGeneration
Returns: Promise.<any> - A Promise that resolves to the model outputs.

ParamTypeDescription
model_inputsany

The model inputs.


models.BartModel ⇐ BartPretrainedModel

BART encoder and decoder model.

Kind: static class of models
Extends: BartPretrainedModel


bartModel.generate() ⇒ Promise.<any>

Throws an error because the current model class (BartModel) is not compatible with .generate().

Kind: instance method of BartModel
Throws:


models.BartForConditionalGeneration ⇐ BartPretrainedModel

BART model with a language model head for conditional generation.

Kind: static class of models
Extends: BartPretrainedModel


new BartForConditionalGeneration(config, session, decoder_merged_session, generation_config)

Creates a new instance of the BartForConditionalGeneration class.

  • config (Object): The configuration object for the Bart model.
  • session (Object): The ONNX session used to execute the model.
  • decoder_merged_session (Object): The ONNX session used to execute the decoder.
  • generation_config (Object): The generation configuration object.


bartForConditionalGeneration.getStartBeams(inputs, numOutputTokens, ...args)any

Returns the initial beam for generating output text.

Kind: instance method of BartForConditionalGeneration
Returns: any - The initial beam for generating output text.

ParamTypeDescription
inputsObject

The input object containing the encoded input text.

numOutputTokensnumber

The maximum number of output tokens to generate.

...argsany

Additional arguments to pass to the sequence-to-sequence generation function.


bartForConditionalGeneration.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of BartForConditionalGeneration
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


bartForConditionalGeneration.updateBeam(beam, newTokenId)

Updates the beam by appending the newly generated token ID to the list of output token IDs.

Kind: instance method of BartForConditionalGeneration

ParamTypeDescription
beamany

The current beam being generated.

newTokenIdnumber

The ID of the newly generated token to append to the list of output token IDs.


bartForConditionalGeneration.forward(model_inputs)Promise.<Object>

Runs the forward pass of the model for a given set of inputs.

Kind: instance method of BartForConditionalGeneration
Returns: Promise.<Object> - The model output.

ParamTypeDescription
model_inputsObject

The model inputs.


models.RobertaForMaskedLM ⇐ RobertaPreTrainedModel

RobertaForMaskedLM class for performing masked language modeling on Roberta models.

Kind: static class of models
Extends: RobertaPreTrainedModel


robertaForMaskedLM._call(model_inputs)Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of RobertaForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.RobertaForSequenceClassification ⇐ RobertaPreTrainedModel

RobertaForSequenceClassification class for performing sequence classification on Roberta models.

Kind: static class of models
Extends: RobertaPreTrainedModel


robertaForSequenceClassification._call(model_inputs)Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of RobertaForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.RobertaForTokenClassification ⇐ RobertaPreTrainedModel

RobertaForTokenClassification class for performing token classification on Roberta models.

Kind: static class of models
Extends: RobertaPreTrainedModel


robertaForTokenClassification._call(model_inputs)Promise.<TokenClassifierOutput>

Calls the model on new inputs.

Kind: instance method of RobertaForTokenClassification
Returns: Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.RobertaForQuestionAnswering ⇐ RobertaPreTrainedModel

RobertaForQuestionAnswering class for performing question answering on Roberta models.

Kind: static class of models
Extends: RobertaPreTrainedModel


robertaForQuestionAnswering._call(model_inputs)Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of RobertaForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.XLMRobertaForMaskedLM ⇐ XLMRobertaPreTrainedModel

XLMRobertaForMaskedLM class for performing masked language modeling on XLMRoberta models.

Kind: static class of models
Extends: XLMRobertaPreTrainedModel


xlmRobertaForMaskedLM._call(model_inputs)Promise.<MaskedLMOutput>

Calls the model on new inputs.

Kind: instance method of XLMRobertaForMaskedLM
Returns: Promise.<MaskedLMOutput> - An object containing the model’s output logits for masked language modeling.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.XLMRobertaForSequenceClassification ⇐ XLMRobertaPreTrainedModel

XLMRobertaForSequenceClassification class for performing sequence classification on XLMRoberta models.

Kind: static class of models
Extends: XLMRobertaPreTrainedModel


xlmRobertaForSequenceClassification._call(model_inputs)Promise.<SequenceClassifierOutput>

Calls the model on new inputs.

Kind: instance method of XLMRobertaForSequenceClassification
Returns: Promise.<SequenceClassifierOutput> - An object containing the model’s output logits for sequence classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.XLMRobertaForTokenClassification ⇐ XLMRobertaPreTrainedModel

XLMRobertaForTokenClassification class for performing token classification on XLMRoberta models.

Kind: static class of models
Extends: XLMRobertaPreTrainedModel


xlmRobertaForTokenClassification._call(model_inputs)Promise.<TokenClassifierOutput>

Calls the model on new inputs.

Kind: instance method of XLMRobertaForTokenClassification
Returns: Promise.<TokenClassifierOutput> - An object containing the model’s output logits for token classification.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.XLMRobertaForQuestionAnswering ⇐ XLMRobertaPreTrainedModel

XLMRobertaForQuestionAnswering class for performing question answering on XLMRoberta models.

Kind: static class of models
Extends: XLMRobertaPreTrainedModel


xlmRobertaForQuestionAnswering._call(model_inputs)Promise.<QuestionAnsweringModelOutput>

Calls the model on new inputs.

Kind: instance method of XLMRobertaForQuestionAnswering
Returns: Promise.<QuestionAnsweringModelOutput> - An object containing the model’s output logits for question answering.

ParamTypeDescription
model_inputsObject

The inputs to the model.


models.WhisperModel ⇐ WhisperPreTrainedModel

WhisperModel class for training Whisper models without a language model head.

Kind: static class of models
Extends: WhisperPreTrainedModel


whisperModel.generate(...args) ⇒ Promise.<any>

Throws an error when attempting to generate output since this model doesn’t have a language model head.

Kind: instance method of WhisperModel
Throws:

ParamType
...argsArray.<any>

models.WhisperForConditionalGeneration ⇐ WhisperPreTrainedModel

WhisperForConditionalGeneration class for generating conditional outputs from Whisper models.

Kind: static class of models
Extends: WhisperPreTrainedModel


new WhisperForConditionalGeneration(config, session, decoder_merged_session, generation_config)

Creates a new instance of the WhisperForConditionalGeneration class.

  • config (Object): Configuration object for the model.
  • session (Object): ONNX Session object for the model.
  • decoder_merged_session (Object): ONNX Session object for the decoder.
  • generation_config (Object): Configuration object for the generation process.


whisperForConditionalGeneration.generate(inputs, generation_config, logits_processor, options) ⇒ Promise.<Object>

Generates outputs based on input and generation configuration.

Kind: instance method of WhisperForConditionalGeneration
Returns: Promise.<Object> - A Promise that resolves to the generated outputs.

  • inputs (Object): Input data for the model.
  • generation_config (Object): Configuration object for the generation process.
  • logits_processor (Object): Optional logits processor object.
  • options (Object): options
  • [options.return_timestamps] (Object): Whether to return the timestamps with the text. This enables the WhisperTimestampsLogitsProcessor.
  • [options.return_token_timestamps] (Object): Whether to return token-level timestamps with the text. This can be used with or without the return_timestamps option. To get word-level timestamps, use the tokenizer to group the tokens into words.
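
These options are most often exercised through the automatic-speech-recognition pipeline, which wraps this class and forwards them to generate(). A hedged sketch (the audio URL is a placeholder):

import { pipeline } from '@xenova/transformers';

let transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');
let output = await transcriber('https://example.com/audio.wav', { return_timestamps: true });
// e.g. { text: '...', chunks: [ { timestamp: [...], text: '...' }, ... ] }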


whisperForConditionalGeneration.getStartBeams(inputTokenIds, numOutputTokens)Array

Gets the start beams for generating outputs.

Kind: instance method of WhisperForConditionalGeneration
Returns: Array - Array of start beams.

ParamTypeDescription
inputTokenIdsArray

Array of input token IDs.

numOutputTokensnumber

Number of output tokens to generate.


whisperForConditionalGeneration.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of WhisperForConditionalGeneration
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


whisperForConditionalGeneration.updateBeam(beam, newTokenId)

Updates the beam by appending the newly generated token ID to the list of output token IDs.

Kind: instance method of WhisperForConditionalGeneration

ParamTypeDescription
beamany

The current beam being generated.

newTokenIdnumber

The ID of the newly generated token to append to the list of output token IDs.


whisperForConditionalGeneration.forward(model_inputs)Promise.<Object>

Runs the forward pass of the model for a given set of inputs.

Kind: instance method of WhisperForConditionalGeneration
Returns: Promise.<Object> - The model output.

ParamTypeDescription
model_inputsObject

The model inputs.


whisperForConditionalGeneration._extract_token_timestamps(generate_outputs, alignment_heads, time_precision) ⇒ Tensor

Calculates token-level timestamps using the encoder-decoder cross-attentions and dynamic time-warping (DTW) to map each output token to a position in the input audio.

Kind: instance method of WhisperForConditionalGeneration
Returns: Tensor - tensor containing the timestamps in seconds for each predicted token

  • generate_outputs (Object): Outputs generated by the model.
  • generate_outputs.cross_attentions (Array.<Array<Array<Tensor>>>): The cross attentions output by the model.
  • generate_outputs.decoder_attentions (Array.<Array<Array<Tensor>>>): The decoder attentions output by the model.
  • generate_outputs.sequences (Array.<Array<number>>): The sequences output by the model.
  • alignment_heads (Array.<Array<number>>): Alignment heads of the model.
  • time_precision (number): Precision of the timestamps in seconds.


models.VisionEncoderDecoderModel ⇐ PreTrainedModel

Vision Encoder-Decoder model based on OpenAI’s GPT architecture for image captioning and other vision tasks

Kind: static class of models
Extends: PreTrainedModel


new VisionEncoderDecoderModel(config, session, decoder_merged_session)

Creates a new instance of the VisionEncoderDecoderModel class.

  • config (Object): The configuration object specifying the hyperparameters and other model settings.
  • session (Object): The ONNX session containing the encoder model.
  • decoder_merged_session (any): The ONNX session containing the merged decoder model.
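
For image captioning, this class is most conveniently used through the image-to-text pipeline. A minimal sketch (the checkpoint name is an assumption):

import { pipeline } from '@xenova/transformers';

let captioner = await pipeline('image-to-text', 'Xenova/vit-gpt2-image-captioning');
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
let output = await captioner(url);
// [ { generated_text: '...' } ]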


visionEncoderDecoderModel.getStartBeams(inputs, numOutputTokens, ...args)any

Generate beam search outputs for the given input pixels and number of output tokens.

Kind: instance method of VisionEncoderDecoderModel
Returns: any - An array of Beam objects representing the top-K output sequences.

ParamTypeDescription
inputsarray

The input pixels as a Tensor.

numOutputTokensnumber

The number of output tokens to generate.

...args*

Optional additional arguments to pass to seq2seqStartBeams.


visionEncoderDecoderModel.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of VisionEncoderDecoderModel
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


visionEncoderDecoderModel.updateBeam(beam, newTokenId)

Update the given beam with the additional predicted token ID.

Kind: instance method of VisionEncoderDecoderModel

ParamTypeDescription
beamany

The current beam.

newTokenIdnumber

The new predicted token ID to add to the beam's output sequence.


visionEncoderDecoderModel.forward(model_inputs)Promise.<any>

Compute the forward pass of the model on the given input tensors.

Kind: instance method of VisionEncoderDecoderModel
Returns: Promise.<any> - The output tensor of the model.

ParamTypeDescription
model_inputsObject

The input tensors as an object with keys 'pixel_values' and 'decoder_input_ids'.


models.CLIPModel

CLIP Text and Vision Model with projection layers on top

Example: Perform zero-shot image classification with a CLIPModel.

import { AutoTokenizer, AutoProcessor, CLIPModel, RawImage } from '@xenova/transformers';

// Load tokenizer, processor, and model
let tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
let processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
let model = await CLIPModel.from_pretrained('Xenova/clip-vit-base-patch16');

// Run tokenization
let texts = ['a photo of a car', 'a photo of a football match']
let text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Read image and run processor
let image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg');
let image_inputs = await processor(image);

// Run model with both text and pixel inputs
let output = await model({ ...text_inputs, ...image_inputs });
// {
//   logits_per_image: Tensor {
//     dims: [ 1, 2 ],
//     data: Float32Array(2) [ 18.579734802246094, 24.31830596923828 ],
//   },
//   logits_per_text: Tensor {
//     dims: [ 2, 1 ],
//     data: Float32Array(2) [ 18.579734802246094, 24.31830596923828 ],
//   },
//   text_embeds: Tensor {
//     dims: [ 2, 512 ],
//     data: Float32Array(1024) [ ... ],
//   },
//   image_embeds: Tensor {
//     dims: [ 1, 512 ],
//     data: Float32Array(512) [ ... ],
//   }
// }

Kind: static class of models


models.CLIPTextModelWithProjection

CLIP Text Model with a projection layer on top (a linear layer on top of the pooled output)

Example: Compute text embeddings with CLIPTextModelWithProjection.

import { AutoTokenizer, CLIPTextModelWithProjection } from '@xenova/transformers';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const text_model = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Run tokenization
let texts = ['a photo of a car', 'a photo of a football match'];
let text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Compute embeddings
const { text_embeds } = await text_model(text_inputs);
// Tensor {
//   dims: [ 2, 512 ],
//   type: 'float32',
//   data: Float32Array(1024) [ ... ],
//   size: 1024
// }

Kind: static class of models


CLIPTextModelWithProjection.from_pretrained() : PreTrainedModel.from_pretrained

Kind: static method of CLIPTextModelWithProjection


models.CLIPVisionModelWithProjection

CLIP Vision Model with a projection layer on top (a linear layer on top of the pooled output)

Example: Compute vision embeddings with CLIPVisionModelWithProjection.

import { AutoProcessor, CLIPVisionModelWithProjection, RawImage} from '@xenova/transformers';

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Read image and run processor
let image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg');
let image_inputs = await processor(image);

// Compute embeddings
const { image_embeds } = await vision_model(image_inputs);
// Tensor {
//   dims: [ 1, 512 ],
//   type: 'float32',
//   data: Float32Array(512) [ ... ],
//   size: 512
// }

Kind: static class of models


CLIPVisionModelWithProjection.from_pretrained() : PreTrainedModel.from_pretrained

Kind: static method of CLIPVisionModelWithProjection


models.GPT2PreTrainedModel

Kind: static class of models


new GPT2PreTrainedModel(config, session)

Creates a new instance of the GPT2PreTrainedModel class.

ParamTypeDescription
configObject

The configuration of the model.

sessionany

The ONNX session containing the model weights.


models.GPT2LMHeadModel ⇐ GPT2PreTrainedModel

GPT-2 language model head on top of the GPT-2 base model. This model is suitable for text generation tasks.

Kind: static class of models
Extends: GPT2PreTrainedModel
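
A minimal text-generation sketch using the corresponding Auto class (the checkpoint name is an assumption; any causal language model checkpoint works the same way):

import { AutoModelForCausalLM, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt2');
let model = await AutoModelForCausalLM.from_pretrained('Xenova/gpt2');

let { input_ids } = await tokenizer('Once upon a time,');
let outputs = await model.generate(input_ids, { max_new_tokens: 20 });
let decoded = tokenizer.decode(outputs[0], { skip_special_tokens: true });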


gpT2LMHeadModel.getStartBeams(inputTokenIds, numOutputTokens, inputs_attention_mask)any

Initializes and returns the beam for text generation task

Kind: instance method of GPT2LMHeadModel
Returns: any - A Beam object representing the initialized beam.

ParamTypeDescription
inputTokenIdsTensor

The input token ids.

numOutputTokensnumber

The number of tokens to be generated.

inputs_attention_maskTensor

Optional input attention mask.


gpT2LMHeadModel.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of GPT2LMHeadModel
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


gpT2LMHeadModel.updateBeam(beam, newTokenId)

Updates the given beam with the new generated token id.

Kind: instance method of GPT2LMHeadModel

ParamTypeDescription
beamany

The Beam object representing the beam.

newTokenIdnumber

The new generated token id to be added to the beam.


gpT2LMHeadModel.forward(model_inputs)Promise.<any>

Forward pass for the model.

Kind: instance method of GPT2LMHeadModel
Returns: Promise.<any> - The output tensor of the model.

ParamTypeDescription
model_inputsObject

The inputs for the model.


models.GPTNeoPreTrainedModel

Kind: static class of models


new GPTNeoPreTrainedModel(config, session)

Creates a new instance of the GPTNeoPreTrainedModel class.

ParamTypeDescription
configObject

The configuration of the model.

sessionany

The ONNX session containing the model weights.


models.GPTBigCodePreTrainedModel

Kind: static class of models


new GPTBigCodePreTrainedModel(config, session)

Creates a new instance of the GPTBigCodePreTrainedModel class.

ParamTypeDescription
configObject

The configuration of the model.

sessionany

The ONNX session containing the model weights.


models.CodeGenPreTrainedModel

Kind: static class of models


new CodeGenPreTrainedModel(config, session)

Creates a new instance of the CodeGenPreTrainedModel class.

ParamTypeDescription
configObject

The model configuration object.

sessionObject

The ONNX session object.


models.CodeGenModel ⇐ CodeGenPreTrainedModel

CodeGenModel is a class representing a code generation model without a language model head.

Kind: static class of models
Extends: CodeGenPreTrainedModel


codeGenModel.generate(...args) ⇒ Promise.<any>

Throws an error indicating that the current model class is not compatible with .generate(), as it doesn’t have a language model head.

Kind: instance method of CodeGenModel
Throws:

ParamTypeDescription
...argsany

Arguments passed to the generate function


models.CodeGenForCausalLM ⇐ CodeGenPreTrainedModel

CodeGenForCausalLM is a class that represents a code generation model based on the GPT-2 architecture. It extends the CodeGenPreTrainedModel class.

Kind: static class of models
Extends: CodeGenPreTrainedModel
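
A hedged sketch of code completion through AutoModelForCausalLM (the checkpoint name is an assumption):

import { AutoModelForCausalLM, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/codegen-350M-mono');
let model = await AutoModelForCausalLM.from_pretrained('Xenova/codegen-350M-mono');

let { input_ids } = await tokenizer('def fib(n):');
let outputs = await model.generate(input_ids, { max_new_tokens: 40 });
let decoded = tokenizer.decode(outputs[0], { skip_special_tokens: true });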


codeGenForCausalLM.getStartBeams(inputTokenIds, numOutputTokens, inputs_attention_mask)any

Initializes and returns the beam for text generation task

Kind: instance method of CodeGenForCausalLM
Returns: any - A Beam object representing the initialized beam.

ParamTypeDescription
inputTokenIdsTensor

The input token ids.

numOutputTokensnumber

The number of tokens to be generated.

inputs_attention_maskTensor

Optional input attention mask.


codeGenForCausalLM.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of CodeGenForCausalLM
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


codeGenForCausalLM.updateBeam(beam, newTokenId)

Updates the given beam with the new generated token id.

Kind: instance method of CodeGenForCausalLM

ParamTypeDescription
beamany

The Beam object representing the beam.

newTokenIdnumber

The new generated token id to be added to the beam.


codeGenForCausalLM.forward(model_inputs)Promise.<any>

Forward pass for the model.

Kind: instance method of CodeGenForCausalLM
Returns: Promise.<any> - The output tensor of the model.

ParamTypeDescription
model_inputsObject

The inputs for the model.


models.LlamaPreTrainedModel

The bare LLaMA Model outputting raw hidden-states without any specific head on top.

Kind: static class of models


new LlamaPreTrainedModel(config, session)

Creates a new instance of the LlamaPreTrainedModel class.

ParamTypeDescription
configObject

The model configuration object.

sessionObject

The ONNX session object.


models.LlamaModel

The bare LLaMA Model outputting raw hidden-states without any specific head on top.

Kind: static class of models


llamaModel.generate(...args) ⇒ Promise.<any>

Throws an error indicating that the current model class is not compatible with .generate(), as it doesn’t have a language model head.

Kind: instance method of LlamaModel
Throws:

ParamTypeDescription
...argsany

Arguments passed to the generate function


models.DetrObjectDetectionOutput

Kind: static class of models


new DetrObjectDetectionOutput(output)

  • output (Object): The output of the model.
  • output.logits (Tensor): Classification logits (including no-object) for all queries.
  • output.pred_boxes (Tensor): Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding).
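
This output is produced by DETR-style detectors, typically driven through the object-detection pipeline. A hedged sketch (the checkpoint name and threshold value are assumptions):

import { pipeline } from '@xenova/transformers';

let detector = await pipeline('object-detection', 'Xenova/detr-resnet-50');
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg';
let output = await detector(url, { threshold: 0.9 });
// [ { label: '...', score: ..., box: { xmin, ymin, xmax, ymax } }, ... ]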


models.DetrSegmentationOutput

Kind: static class of models


new DetrSegmentationOutput(output)

ParamTypeDescription
outputObject

The output of the model.

output.logitsTensor

The output logits of the model.

output.pred_boxesTensor

Predicted boxes.

output.pred_masksTensor

Predicted masks.


models.SamImageSegmentationOutput

Base class for Segment-Anything model’s output.

Kind: static class of models


new SamImageSegmentationOutput(output)

  • output (Object): The output of the model.
  • output.iou_scores (Tensor): The predicted IoU scores for each mask.
  • output.pred_masks (Tensor): The predicted masks.


models.MarianMTModel

Kind: static class of models


new MarianMTModel(config, session, decoder_merged_session, generation_config)

Creates a new instance of the MarianMTModel class.

  • config (Object): The model configuration object.
  • session (Object): The ONNX session object.
  • decoder_merged_session (any)
  • generation_config (any)
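
MarianMT models are loaded like any other sequence-to-sequence model. A hedged sketch of English-to-French translation (the checkpoint name is an assumption; pick the Marian checkpoint for your language pair):

import { AutoModelForSeq2SeqLM, AutoTokenizer } from '@xenova/transformers';

let tokenizer = await AutoTokenizer.from_pretrained('Xenova/opus-mt-en-fr');
let model = await AutoModelForSeq2SeqLM.from_pretrained('Xenova/opus-mt-en-fr');

let { input_ids } = await tokenizer('Hello, how are you?');
let outputs = await model.generate(input_ids);
let decoded = tokenizer.decode(outputs[0], { skip_special_tokens: true });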

marianMTModel.getStartBeams(inputs, numOutputTokens, ...args)any

Initializes and returns the beam for text generation task

Kind: instance method of MarianMTModel
Returns: any - A Beam object representing the initialized beam.

ParamTypeDescription
inputsArray.<any>

The input token ids.

numOutputTokensnumber

The number of tokens to be generated.

...argsArray.<any>

marianMTModel.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of MarianMTModel
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


marianMTModel.updateBeam(beam, newTokenId)

Kind: instance method of MarianMTModel

ParamType
beamany
newTokenIdany

marianMTModel.forward(model_inputs)Promise.<Seq2SeqLMOutput>

Kind: instance method of MarianMTModel

ParamType
model_inputsany

models.M2M100ForConditionalGeneration

Kind: static class of models


new M2M100ForConditionalGeneration(config, session, decoder_merged_session, generation_config)

Creates a new instance of the M2M100ForConditionalGeneration class.

ParamTypeDescription
configObject

The model configuration object.

sessionObject

The ONNX session object.

decoder_merged_sessionany
generation_configany

m2M100ForConditionalGeneration.getStartBeams(inputs, numOutputTokens, ...args)any

Initializes and returns the beam for text generation task

Kind: instance method of M2M100ForConditionalGeneration
Returns: any - A Beam object representing the initialized beam.

ParamTypeDescription
inputsArray.<any>

The input token ids.

numOutputTokensnumber

The number of tokens to be generated.

...argsArray.<any>

m2M100ForConditionalGeneration.runBeam(beam)Promise.<any>

Runs a single step of the beam search generation algorithm.

Kind: instance method of M2M100ForConditionalGeneration
Returns: Promise.<any> - The updated beam after a single generation step.

ParamTypeDescription
beamany

The current beam being generated.


m2M100ForConditionalGeneration.updateBeam(beam, newTokenId)

Kind: instance method of M2M100ForConditionalGeneration

ParamType
beamany
newTokenIdany

m2M100ForConditionalGeneration.forward(model_inputs)Promise.<Seq2SeqLMOutput>

Kind: instance method of M2M100ForConditionalGeneration

ParamType
model_inputsany

models.PretrainedMixin

Base class of all AutoModels. Contains the from_pretrained function which is used to instantiate pretrained models.

Kind: static class of models


pretrainedMixin.MODEL_CLASS_MAPPINGS : *

Mapping from model type to model class.

Kind: instance property of PretrainedMixin


pretrainedMixin.BASE_IF_FAIL

Whether to attempt to instantiate the base class (PreTrainedModel) if the model type is not found in the mapping.

Kind: instance property of PretrainedMixin


PretrainedMixin.from_pretrained() : PreTrainedModel.from_pretrained

Kind: static method of PretrainedMixin


models.AutoModel

Helper class which is used to instantiate pretrained models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForSequenceClassification

Helper class which is used to instantiate pretrained sequence classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForTokenClassification

Helper class which is used to instantiate pretrained token classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForSeq2SeqLM

Helper class which is used to instantiate pretrained sequence-to-sequence models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForCausalLM

Helper class which is used to instantiate pretrained causal language models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForMaskedLM

Helper class which is used to instantiate pretrained masked language models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForQuestionAnswering

Helper class which is used to instantiate pretrained question answering models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForVision2Seq

Helper class which is used to instantiate pretrained vision-to-sequence models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForImageClassification

Helper class which is used to instantiate pretrained image classification models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForImageSegmentation

Helper class which is used to instantiate pretrained image segmentation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForObjectDetection

Helper class which is used to instantiate pretrained object detection models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.AutoModelForMaskGeneration

Helper class which is used to instantiate pretrained mask generation models with the from_pretrained function. The chosen model class is determined by the type specified in the model config.

Kind: static class of models


models.Seq2SeqLMOutput

Kind: static class of models


new Seq2SeqLMOutput(output)

  • output (Object): The output of the model.
  • output.logits (Tensor): The output logits of the model.
  • output.past_key_values (Tensor): A tensor of key/value pairs that represent the previous state of the model.
  • output.encoder_outputs (Tensor): The output of the encoder in a sequence-to-sequence model.
  • [output.decoder_attentions] (Tensor): Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
  • [output.cross_attentions] (Tensor): Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.


models.SequenceClassifierOutput

Base class for outputs of sentence classification models.

Kind: static class of models


new SequenceClassifierOutput(output)

ParamTypeDescription
outputObject

The output of the model.

output.logitsTensor

classification (or regression if config.num_labels==1) scores (before SoftMax).


models.TokenClassifierOutput

Base class for outputs of token classification models.

Kind: static class of models


new TokenClassifierOutput(output)

ParamTypeDescription
outputObject

The output of the model.

output.logitsTensor

Classification scores (before SoftMax).


models.MaskedLMOutput

Base class for masked language models outputs.

Kind: static class of models


new MaskedLMOutput(output)

ParamTypeDescription
outputObject

The output of the model.

output.logitsTensor

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).


models.QuestionAnsweringModelOutput

Base class for outputs of question answering models.

Kind: static class of models


new QuestionAnsweringModelOutput(output)

ParamTypeDescription
outputObject

The output of the model.

output.start_logitsTensor

Span-start scores (before SoftMax).

output.end_logitsTensor

Span-end scores (before SoftMax).


models.CausalLMOutputWithPast

Base class for causal language model (or autoregressive) outputs.

Kind: static class of models


new CausalLMOutputWithPast(output)

ParamTypeDescription
outputObject

The output of the model.

output.logitsTensor

Prediction scores of the language modeling head (scores for each vocabulary token before softmax).

output.past_key_valuesTensor

Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.


models~forward(self, model_inputs) ⇒ Promise.<Object>

Helper function to determine which forward method to run for a specific model.

Kind: inner method of models
Returns: Promise.<Object> - The model output

  • self (Object): The calling object.
  • model_inputs (Object): The inputs to be sent to the model.


models~PretrainedOptions : *

Kind: inner typedef of models


models~TypedArray : *

Kind: inner typedef of models


models~DecoderOutput ⇒ Promise.<(Array<Array<number>>|EncoderDecoderOutput|DecoderOutput)>

Generates text based on the given inputs and generation configuration using the model.

Kind: inner typedef of models
Returns: Promise.<(Array<Array<number>>|EncoderDecoderOutput|DecoderOutput)> - An array of generated output sequences, where each sequence is an array of token IDs.
Throws:

  • inputs (Tensor | Array | TypedArray): An array of input token IDs.
  • generation_config (Object | GenerationConfig | null): The generation configuration to use. If null, default configuration will be used.
  • logits_processor (Object | null): An optional logits processor to use. If null, a new LogitsProcessorList instance will be created.
  • options (Object): options
  • [options.inputs_attention_mask] (Object): An optional attention mask for the inputs.