Exporting a model to ONNX involves specifying:

1. The input names.
2. The output names.
3. The dynamic axes, i.e. the input dimensions that can change at runtime (such as the batch size or sequence length); all other axes are treated as static, and hence fixed at runtime.
4. Dummy inputs to trace the model and validate the conversion.
Since this data depends on the choice of model and task, we represent it in terms of configuration classes. Each configuration class is associated with a specific model architecture, and follows the naming convention `ArchitectureNameOnnxConfig`. For instance, the configuration which specifies the ONNX export of BERT models is `BertOnnxConfig`.
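As a quick illustration of the convention, the expected class name can be derived mechanically from the architecture name. This is a plain-Python sketch; only `BertOnnxConfig` is a class name confirmed above, and the helper function is hypothetical:

```python
# Sketch of the ArchitectureNameOnnxConfig naming convention described above.
def onnx_config_class_name(architecture: str) -> str:
    """Derive the expected ONNX configuration class name for an architecture."""
    return f"{architecture}OnnxConfig"

# For BERT models, this yields the BertOnnxConfig class mentioned in the text.
assert onnx_config_class_name("Bert") == "BertOnnxConfig"
```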
Since many architectures share similar properties for their ONNX configuration, 🤗 Optimum adopts a 3-level class hierarchy:

1. Abstract and generic base classes. These handle all the fundamental features, while being agnostic to the modality.
2. Middle-end classes. These are aware of the modality, but several can exist for the same modality depending on the inputs they support.
3. Model-specific classes like the `BertOnnxConfig` mentioned above. These are the ones actually used to export models.

OnnxConfig

( config: PretrainedConfig, task: str = 'feature-extraction', preprocessors: typing.Optional[typing.List[typing.Any]] = None, int_dtype: str = 'int64', float_dtype: str = 'fp32' )
Parameters

- config (`transformers.PretrainedConfig`) — The model configuration.
- task (`str`, defaults to `"feature-extraction"`) — The task the model should be exported for.
- int_dtype (`str`, defaults to `"int64"`) — The data type of integer tensors; one of `"int64"`, `"int32"`, `"int8"`.
- float_dtype (`str`, defaults to `"fp32"`) — The data type of float tensors; one of `"fp32"`, `"fp16"`, `"bf16"`.

Base class for an ONNX-exportable model, describing metadata on how to export the model through the ONNX format.
Class attributes:

- NORMALIZED_CONFIG_CLASS (`Type`) — A class derived from `NormalizedConfig` specifying how to normalize the model config.
- DUMMY_INPUT_GENERATOR_CLASSES (`Tuple[Type]`) — A tuple of classes derived from `DummyInputGenerator` specifying how to create dummy inputs.
- ATOL_FOR_VALIDATION (`Union[float, Dict[str, float]]`) — A float or a dictionary mapping task names to floats, where the float values represent the absolute tolerance to use during model conversion validation.
- DEFAULT_ONNX_OPSET (`int`, defaults to 11) — The default ONNX opset to use for the ONNX export.
- MIN_TORCH_VERSION (`packaging.version.Version`, defaults to `~optimum.exporters.onnx.utils.TORCH_MINIMUM_VERSION`) — The minimum torch version supporting the export of the model to ONNX.
- MIN_TRANSFORMERS_VERSION (`packaging.version.Version`, defaults to `~optimum.exporters.onnx.utils.TRANSFORMERS_MINIMUM_VERSION`) — The minimum transformers version supporting the export of the model to ONNX. Not always up-to-date or accurate; this is more for internal use.
- PATCHING_SPECS (`Optional[List[PatchingSpec]]`, defaults to `None`) — Specifies which operators / modules should be patched before performing the export, and how. This is useful, for instance, when some operator is not supported in ONNX.

inputs

( ) → Dict[str, Dict[int, str]]
Returns

`Dict[str, Dict[int, str]]` — A mapping of each input name to a mapping of axis position to the axis symbolic name.

Dict containing the axis definition of the input tensors to provide to the model.
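For a BERT-like text model, such a mapping could look as follows. This is a plain-Python illustration of the `Dict[str, Dict[int, str]]` format; the exact input names and symbolic axis names depend on the model and task:

```python
# Input name -> {axis position: symbolic axis name}. Named axes are dynamic
# (their size may change at runtime); unnamed axes are static.
inputs = {
    "input_ids": {0: "batch_size", 1: "sequence_length"},
    "attention_mask": {0: "batch_size", 1: "sequence_length"},
}

# Axis 0 of every input is the dynamic batch dimension.
assert all(axes[0] == "batch_size" for axes in inputs.values())
```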
outputs

( ) → Dict[str, Dict[int, str]]

Returns

`Dict[str, Dict[int, str]]` — A mapping of each output name to a mapping of axis position to the axis symbolic name.

Dict containing the axis definition of the output tensors to provide to the model.
generate_dummy_inputs

( framework: str = 'pt', **kwargs ) → Dict

Parameters

- framework (`str`, defaults to `"pt"`) — The framework for which to create the dummy inputs.
- batch_size (`int`, defaults to 2) — The batch size to use in the dummy inputs.
- sequence_length (`int`, defaults to 16) — The sequence length to use in the dummy inputs.
- num_choices (`int`, defaults to 4) — The number of candidate answers provided for the multiple-choice task.
- width (`int`, defaults to 64) — The width to use in the dummy inputs for vision tasks.
- height (`int`, defaults to 64) — The height to use in the dummy inputs for vision tasks.
- num_channels (`int`, defaults to 3) — The number of channels to use in the dummy inputs for vision tasks.
- feature_size (`int`, defaults to 80) — The number of features to use in the dummy inputs for audio tasks in case the input is not raw audio. This is for example the number of STFT bins or MEL bins.
- nb_max_frames (`int`, defaults to 3000) — The number of frames to use in the dummy inputs for audio tasks in case the input is not raw audio.
- audio_sequence_length (`int`, defaults to 16000) — The number of samples to use in the dummy inputs for audio tasks in case the input is raw audio.

Returns

`Dict` — A dictionary mapping the input names to dummy tensors in the proper framework format.

Generates the dummy inputs necessary for tracing the model. If not explicitly specified, default input shapes are used.
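What this method produces can be sketched in plain Python. This is a simplified stand-in: the real method delegates to the `DummyInputGenerator` classes and returns framework tensors rather than nested lists, and the function name here is hypothetical:

```python
def generate_dummy_text_inputs(batch_size: int = 2, sequence_length: int = 16) -> dict:
    """Build dummy text-model inputs using the default shapes documented above."""
    return {
        "input_ids": [[0] * sequence_length for _ in range(batch_size)],
        "attention_mask": [[1] * sequence_length for _ in range(batch_size)],
    }

dummy = generate_dummy_text_inputs()
assert len(dummy["input_ids"]) == 2        # default batch_size
assert len(dummy["input_ids"][0]) == 16    # default sequence_length
```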
OnnxConfigWithPast

( config: PretrainedConfig, task: str = 'feature-extraction', int_dtype: str = 'int64', float_dtype: str = 'fp32', use_past: bool = False, use_past_in_inputs: bool = False, preprocessors: typing.Optional[typing.List[typing.Any]] = None )

Inherits from `OnnxConfig`. A base class to handle the ONNX configuration of decoder-only models.
add_past_key_values

( inputs_or_outputs: typing.Dict[str, typing.Dict[int, str]], direction: str )

Fills the `inputs_or_outputs` mapping with `past_key_values` dynamic axes, considering the direction.
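The effect can be sketched as follows. This is a simplified illustration assuming one (key, value) cache pair per decoder layer; the exact tensor names and dynamic axes used by Optimum may differ, and the `num_layers` parameter here is hypothetical:

```python
def add_past_key_values(inputs_or_outputs: dict, direction: str, num_layers: int = 2) -> None:
    """Add per-layer KV-cache dynamic axes to an inputs or outputs mapping."""
    if direction not in ("inputs", "outputs"):
        raise ValueError(f"direction must be 'inputs' or 'outputs', got {direction!r}")
    # Cache tensors fed to the model are conventionally named past_key_values.*,
    # while cache tensors the model returns are named present.*.
    name = "past_key_values" if direction == "inputs" else "present"
    for i in range(num_layers):
        inputs_or_outputs[f"{name}.{i}.key"] = {0: "batch_size", 2: "past_sequence_length"}
        inputs_or_outputs[f"{name}.{i}.value"] = {0: "batch_size", 2: "past_sequence_length"}

mapping = {"input_ids": {0: "batch_size", 1: "sequence_length"}}
add_past_key_values(mapping, "inputs")
assert "past_key_values.0.key" in mapping
```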
OnnxSeq2SeqConfigWithPast

( config: PretrainedConfig, task: str = 'feature-extraction', int_dtype: str = 'int64', float_dtype: str = 'fp32', use_past: bool = False, use_past_in_inputs: bool = False, behavior: ConfigBehavior = <ConfigBehavior.MONOLITH: 'monolith'>, preprocessors: typing.Optional[typing.List[typing.Any]] = None )

Inherits from `OnnxConfigWithPast`. A base class to handle the ONNX configuration of encoder-decoder models.
with_behavior

( behavior: typing.Union[str, optimum.exporters.onnx.base.ConfigBehavior], use_past: bool = False, use_past_in_inputs: bool = False ) → OnnxSeq2SeqConfigWithPast

Parameters

- behavior (`ConfigBehavior`) — The behavior to use for the new instance.
- use_past (`bool`, defaults to `False`) — Whether or not the ONNX config to instantiate is for a model using a KV cache.
- use_past_in_inputs (`bool`, defaults to `False`) — Whether the KV cache is to be passed as an input to the ONNX graph.

Returns

`OnnxSeq2SeqConfigWithPast`

Creates a copy of the current `OnnxConfig`, but with a different `ConfigBehavior` and `use_past` value.
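The copy semantics can be sketched with a dataclass. This is a hypothetical stand-in for the real configuration class, which carries many more fields; only the `behavior` / `use_past` / `use_past_in_inputs` fields and the `monolith` behavior value are taken from the text above:

```python
from dataclasses import dataclass, replace
from enum import Enum

class ConfigBehavior(str, Enum):
    MONOLITH = "monolith"   # export encoder and decoder as a single model
    ENCODER = "encoder"     # assumed value: export the encoder part only
    DECODER = "decoder"     # assumed value: export the decoder part only

@dataclass
class Seq2SeqConfigSketch:
    behavior: ConfigBehavior = ConfigBehavior.MONOLITH
    use_past: bool = False
    use_past_in_inputs: bool = False

    def with_behavior(self, behavior, use_past: bool = False, use_past_in_inputs: bool = False):
        """Return a copy of this config with a different behavior and use_past value."""
        if isinstance(behavior, str):
            behavior = ConfigBehavior(behavior)
        return replace(self, behavior=behavior, use_past=use_past,
                       use_past_in_inputs=use_past_in_inputs)

config = Seq2SeqConfigSketch()
decoder_config = config.with_behavior("decoder", use_past=True)
assert decoder_config.behavior is ConfigBehavior.DECODER
assert config.behavior is ConfigBehavior.MONOLITH  # the original is unchanged
```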
TextEncoderOnnxConfig

( config: PretrainedConfig, task: str = 'feature-extraction', preprocessors: typing.Optional[typing.List[typing.Any]] = None, int_dtype: str = 'int64', float_dtype: str = 'fp32' )

Handles encoder-based text architectures.

TextDecoderOnnxConfig

( config: PretrainedConfig, task: str = 'feature-extraction', int_dtype: str = 'int64', float_dtype: str = 'fp32', use_past: bool = False, use_past_in_inputs: bool = False, preprocessors: typing.Optional[typing.List[typing.Any]] = None, no_position_ids: bool = False )

Handles decoder-based text architectures.

TextSeq2SeqOnnxConfig

( config: PretrainedConfig, task: str = 'feature-extraction', int_dtype: str = 'int64', float_dtype: str = 'fp32', use_past: bool = False, use_past_in_inputs: bool = False, behavior: ConfigBehavior = <ConfigBehavior.MONOLITH: 'monolith'>, preprocessors: typing.Optional[typing.List[typing.Any]] = None )

Handles encoder-decoder-based text architectures.

VisionOnnxConfig

( config: PretrainedConfig, task: str = 'feature-extraction', preprocessors: typing.Optional[typing.List[typing.Any]] = None, int_dtype: str = 'int64', float_dtype: str = 'fp32' )

Handles vision architectures.

TextAndVisionOnnxConfig

( config: PretrainedConfig, task: str = 'feature-extraction', preprocessors: typing.Optional[typing.List[typing.Any]] = None, int_dtype: str = 'int64', float_dtype: str = 'fp32' )

Handles multi-modal text and vision architectures.
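Putting the three levels together, the hierarchy can be sketched in plain Python. The `*Sketch` classes below are hypothetical stand-ins, not the Optimum implementations; they only illustrate where the generic base, modality-aware middle-end, and model-specific layers each contribute:

```python
class OnnxConfigSketch:                      # level 1: modality-agnostic base
    @property
    def inputs(self):
        raise NotImplementedError

class TextEncoderSketch(OnnxConfigSketch):   # level 2: modality-aware middle-end
    INPUT_NAMES: tuple = ()

    @property
    def inputs(self):
        # Text encoders share the same dynamic axes for all their inputs.
        return {name: {0: "batch_size", 1: "sequence_length"} for name in self.INPUT_NAMES}

class BertSketch(TextEncoderSketch):         # level 3: model-specific, used for export
    INPUT_NAMES = ("input_ids", "attention_mask", "token_type_ids")

assert "token_type_ids" in BertSketch().inputs
```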