Configuration for ExecuTorch Export

ExecuTorch export provides a flexible configuration mechanism through dynamic registration, enabling users to have complete control over the export process. The configuration system is divided into task configurations and recipe configurations, each addressing specific aspects of the export pipeline.

Task Configurations

Task configurations determine how a Hugging Face model should be loaded and prepared for export, tailored to specific tasks.

For instance, when exporting a model for a text-generation task, the provided configuration uses static caching and SDPA (Scaled Dot-Product Attention) for inference optimization.

By leveraging task configurations, users can ensure that their models are appropriately prepared for efficient execution on the ExecuTorch backend.

optimum.exporters.executorch.discover_tasks


( )

Dynamically discovers and imports all task modules within the optimum.exporters.executorch.tasks package.

Ensures that tasks under the ./tasks directory are dynamically loaded without requiring manual imports.

Notes: To be discovered and used by main_export, new tasks must be added to the ./tasks directory and decorated with @register_task so that they are properly registered in the task_registry.
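The discovery pattern described above can be sketched with the standard library. This is a hypothetical illustration, assuming the discovery walks a package with pkgutil and imports each submodule; discover_modules below is not the actual implementation:

```python
import importlib
import pkgutil

def discover_modules(package_name: str) -> list:
    """Import every submodule of a package, mimicking dynamic discovery.

    Hypothetical sketch of the pattern; the real discover_tasks walks the
    optimum.exporters.executorch.tasks package instead.
    """
    package = importlib.import_module(package_name)
    discovered = []
    for mod_info in pkgutil.iter_modules(package.__path__):
        # Importing the module triggers any registration decorators inside it.
        importlib.import_module(f"{package_name}.{mod_info.name}")
        discovered.append(mod_info.name)
    return discovered

# Demonstrate on a stdlib package with known submodules.
print(discover_modules("json"))
```

Because registration happens as a side effect of importing, simply walking and importing the package is enough to populate the registry.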

optimum.exporters.executorch.register_task


( task_name ) Callable

Parameters

  • task_name (str) — The name under which to register the task callable.

Returns

Callable

The original function wrapped as a registered task.

Decorator to register a task under a specific name.

Example:

@register_task("my_new_task")
def my_new_task(...):
    ...
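A decorator like this typically just records the function in a dictionary keyed by name. A minimal sketch, assuming task_registry is a plain dict (the actual implementation may differ):

```python
# Minimal sketch of a decorator-based registry; assumes the registry is a
# plain dict mapping task names to loader callables.
task_registry = {}

def register_task(task_name):
    def decorator(func):
        task_registry[task_name] = func
        return func  # the original function is returned unchanged
    return decorator

@register_task("my_new_task")
def my_new_task():
    return "loaded"

# The task can now be looked up by name from the registry.
assert task_registry["my_new_task"] is my_new_task
```

Returning the original function unchanged means the decorator is transparent: the task can still be called directly, while main_export can resolve it by name.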

optimum.exporters.executorch.tasks.causal_lm.load_causal_lm_model


( model_name_or_path: str **kwargs ) transformers.PreTrainedModel

Parameters

  • model_name_or_path (str) — Model ID on huggingface.co or path on disk to the model repository to export. For example: model_name_or_path="meta-llama/Llama-3.2-1B" or model_name_or_path="/path/to/model_folder".
  • **kwargs — Additional configuration options for the model:
    • dtype (str, optional): Data type for model weights (default: “float32”). Options include “float16” and “bfloat16”.
    • attn_implementation (str, optional): Attention mechanism implementation (default: “sdpa”).
    • cache_implementation (str, optional): Cache management strategy (default: “static”).
    • max_length (int, optional): Maximum sequence length for generation (default: 2048).

Returns

transformers.PreTrainedModel

An instance of a model subclass (e.g., Llama, Gemma) with the configuration for exporting and lowering to ExecuTorch.

Loads a causal language model for text generation and registers it under the task ‘text-generation’ using Hugging Face’s AutoModelForCausalLM.
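The keyword arguments above map onto model configuration with the documented defaults. A hypothetical sketch of how those defaults might be resolved (resolve_config is illustrative only, not part of the library, which forwards these options to Hugging Face model loading):

```python
def resolve_config(**kwargs):
    # Hypothetical helper mirroring the documented defaults of
    # load_causal_lm_model; any option not supplied falls back to the
    # default listed in the parameter table above.
    return {
        "dtype": kwargs.get("dtype", "float32"),
        "attn_implementation": kwargs.get("attn_implementation", "sdpa"),
        "cache_implementation": kwargs.get("cache_implementation", "static"),
        "max_length": kwargs.get("max_length", 2048),
    }

# Overriding only dtype keeps the other documented defaults.
print(resolve_config(dtype="bfloat16"))
```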

Recipe Configurations

Recipe configurations control the specifics of lowering an eager PyTorch module to the ExecuTorch backend. These configurations let users choose which backend the model is delegated to (for example, XNNPACK) and pass recipe-specific options that control the export and lowering process.

optimum.exporters.executorch.discover_recipes


( )

Dynamically discovers and imports all recipe modules within the optimum.exporters.executorch.recipes package.

Ensures that recipes under the ./recipes directory are dynamically loaded without requiring manual imports.

Notes: To be discovered and used by main_export, new recipes must be added to the ./recipes directory and decorated with @register_recipe so that they are properly registered in the recipe_registry.

optimum.exporters.executorch.register_recipe


( recipe_name ) Callable

Parameters

  • recipe_name (str) — The name under which to register the recipe callable.

Returns

Callable

The original function wrapped as a registered recipe.

Decorator to register a recipe for exporting and lowering an ExecuTorch model under a specific name.

Example:

@register_recipe("my_new_recipe")
def my_new_recipe(...):
    ...

optimum.exporters.executorch.recipes.xnnpack.export_to_executorch_with_xnnpack


( model: typing.Union[transformers.modeling_utils.PreTrainedModel, transformers.integrations.executorch.TorchExportableModuleWithStaticCache] task: str **kwargs ) ExecuTorchProgram

Parameters

  • model (Union[PreTrainedModel, TorchExportableModuleWithStaticCache]) — The PyTorch model to be exported to ExecuTorch.
  • task (str) — The task name to export the model for (e.g., “text-generation”).
  • **kwargs — Additional keyword arguments for recipe-specific configurations.

Returns

ExecuTorchProgram

The exported and optimized program for ExecuTorch.

Export a PyTorch model to ExecuTorch with delegation to the XNNPACK backend.

This function also writes the metadata required by the ExecuTorch runtime to the model.

The combination of task and recipe configurations ensures that users can customize both the high-level task setup and the low-level export details to suit their deployment requirements.
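How the two registries come together in an export entry point can be sketched as follows. The names mirror the registries described above, but the dispatch logic is illustrative only, assuming the entry point looks up a task loader and a recipe by name:

```python
# Hypothetical end-to-end sketch: a task loader prepares the model, a recipe
# lowers it, and the entry point dispatches through both registries by name.
task_registry = {}
recipe_registry = {}

def register_task(name):
    def decorator(func):
        task_registry[name] = func
        return func
    return decorator

def register_recipe(name):
    def decorator(func):
        recipe_registry[name] = func
        return func
    return decorator

@register_task("text-generation")
def load_model(model_name_or_path, **kwargs):
    # Stand-in for loading a real model with task-specific configuration.
    return {"model": model_name_or_path, **kwargs}

@register_recipe("xnnpack")
def lower(model, task, **kwargs):
    # Stand-in for exporting and lowering to a backend-delegated program.
    return {"program": model, "task": task, "backend": "xnnpack"}

def main_export_sketch(model_name_or_path, task, recipe, **kwargs):
    model = task_registry[task](model_name_or_path, **kwargs)
    return recipe_registry[recipe](model, task)

program = main_export_sketch("my/model", "text-generation", "xnnpack")
print(program["backend"])
```

The separation means a new backend only needs a new recipe, and a new model family only needs a new task loader; neither change touches the dispatch code.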
