Base configuration class for models.
( pretrained: str, accelerator: Accelerator = None, tokenizer: typing.Optional[str] = None, multichoice_continuations_start_space: typing.Optional[bool] = None, pairwise_tokenization: bool = False, subfolder: typing.Optional[str] = None, revision: str = 'main', batch_size: int = -1, max_gen_toks: typing.Optional[int] = 256, max_length: typing.Optional[int] = None, add_special_tokens: bool = True, model_parallel: typing.Optional[bool] = None, dtype: typing.Union[str, torch.dtype, NoneType] = None, device: typing.Union[int, str] = 'cuda', quantization_config: typing.Optional[transformers.utils.quantization_config.BitsAndBytesConfig] = None, trust_remote_code: bool = False, use_chat_template: bool = False, compile: bool = False )
Parameters
pretrained (str): the HuggingFace Hub model ID or local path, passed as the pretrained_model_name_or_path argument of from_pretrained in the HuggingFace transformers API.
add_special_tokens (bool, defaults to True): whether to add special tokens to the input sequences. If None, the default value will be set to True for seq2seq models (e.g. T5) and False for causal models.
model_parallel (bool, optional): whether to use the accelerate library to load a large model across multiple devices. Default: None, which corresponds to comparing the number of processes with the number of GPUs; if the number of processes is smaller, model parallelism is used, otherwise it is not.
dtype (str or torch.dtype, optional): converts the model weights to dtype, if specified. Strings get converted to torch.dtype objects (e.g. float16 -> torch.float16). Use dtype="auto" to derive the type from the model's weights.
Methods:
post_init(): Performs post-initialization checks on the configuration.
_init_configs(model_name, env_config): Initializes the model configuration.
init_configs(env_config): Initializes the model configuration using the environment configuration.
get_model_sha(): Retrieves the SHA of the model.
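The following is a minimal sketch of instantiating this base configuration. The keyword arguments mirror the signature above; the import path and the model name are assumptions, not taken from this page.

```python
# Hedged sketch: the import path below is an assumption (in lighteval the
# model configuration classes have lived under lighteval.models.model_config).
from lighteval.models.model_config import BaseModelConfig

config = BaseModelConfig(
    pretrained="gpt2",        # passed to from_pretrained as pretrained_model_name_or_path
    revision="main",
    dtype="float16",          # string form, converted to torch.float16
    batch_size=8,
    max_length=2048,
    device="cuda",
    use_chat_template=False,
)
```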
The adapter and delta model configurations share the same signature: they extend the base configuration with a base_model argument naming the model on top of which the adapter or delta weights are applied.
( pretrained: str, accelerator: Accelerator = None, tokenizer: typing.Optional[str] = None, multichoice_continuations_start_space: typing.Optional[bool] = None, pairwise_tokenization: bool = False, subfolder: typing.Optional[str] = None, revision: str = 'main', batch_size: int = -1, max_gen_toks: typing.Optional[int] = 256, max_length: typing.Optional[int] = None, add_special_tokens: bool = True, model_parallel: typing.Optional[bool] = None, dtype: typing.Union[str, torch.dtype, NoneType] = None, device: typing.Union[int, str] = 'cuda', quantization_config: typing.Optional[transformers.utils.quantization_config.BitsAndBytesConfig] = None, trust_remote_code: bool = False, use_chat_template: bool = False, compile: bool = False, base_model: str = None )
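A hedged sketch of the adapter and delta variants follows. The class names AdapterModelConfig and DeltaModelConfig are taken from the return type of the factory documented below; the import path and repository names are assumptions.

```python
# Sketch only: import path and repository names are illustrative placeholders.
from lighteval.models.model_config import AdapterModelConfig, DeltaModelConfig

# Adapter weights (e.g. PEFT adapters) applied on top of a base model.
adapter_config = AdapterModelConfig(
    pretrained="my-org/my-adapter",   # hypothetical adapter repository
    base_model="gpt2",                # required when using adapter weights
    dtype="float16",
)

# Delta weights combined with the base model's weights at load time.
delta_config = DeltaModelConfig(
    pretrained="my-org/my-delta-weights",  # hypothetical delta repository
    base_model="gpt2",                     # required when using delta weights
)
```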
( name: str, repository: str, accelerator: str, vendor: str, region: str, instance_size: str, instance_type: str, model_dtype: str, framework: str = 'pytorch', endpoint_type: str = 'protected', should_reuse_existing: bool = False, add_special_tokens: bool = True, revision: str = 'main', namespace: str = None, image_url: str = None, env_vars: dict = None )
Returns the list of optional keys in an endpoint model configuration. By default, the code requires that all the keys be specified in the configuration in order to launch the endpoint. This function returns the list of keys that are not required and can remain None.
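Below is a hedged sketch of an inference endpoint configuration built from the signature above. The class name InferenceEndpointModelConfig comes from the factory's return type; the import path and the concrete vendor, region, and instance values are assumptions.

```python
# Sketch only: import path and endpoint values are illustrative assumptions.
from lighteval.models.model_config import InferenceEndpointModelConfig

endpoint_config = InferenceEndpointModelConfig(
    name="my-eval-endpoint",     # hypothetical endpoint name
    repository="gpt2",           # model repository to deploy
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="medium",
    instance_type="g5.2xlarge",
    model_dtype="float16",
    framework="pytorch",
    endpoint_type="protected",
    should_reuse_existing=False,
)
```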
( model: str, add_special_tokens: bool = True )
( inference_server_address: str, inference_server_auth: str, model_id: str )
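This signature configures evaluation against an already-running inference server (the TGIModelConfig named in the factory's return type below). A hedged sketch, assuming the import path and server address:

```python
# Sketch only: import path, server URL, and auth value are assumptions.
from lighteval.models.model_config import TGIModelConfig

tgi_config = TGIModelConfig(
    inference_server_address="http://localhost:8080",  # hypothetical TGI server URL
    inference_server_auth="",                           # auth token if the server requires one
    model_id="gpt2",
)
```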
( pretrained: str, gpu_memory_utilisation: float = 0.9, revision: str = 'main', dtype: str | None = None, tensor_parallel_size: int = 1, pipeline_parallel_size: int = 1, data_parallel_size: int = 1, max_model_length: int | None = None, swap_space: int = 4, seed: int = 1234, trust_remote_code: bool = False, use_chat_template: bool = False, add_special_tokens: bool = True, multichoice_continuations_start_space: bool = True, pairwise_tokenization: bool = False, subfolder: typing.Optional[str] = None, temperature: float = 0.6 )
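This signature appears to configure a vLLM-backed model (gpu_memory_utilisation, tensor_parallel_size, and swap_space are vLLM engine options). A hedged sketch follows; the class name VLLMModelConfig and the import path are assumptions, as the page does not name the class.

```python
# Sketch only: class name and import path are assumptions.
from lighteval.models.model_config import VLLMModelConfig

vllm_config = VLLMModelConfig(
    pretrained="gpt2",
    gpu_memory_utilisation=0.9,   # fraction of GPU memory the engine may reserve
    dtype="float16",
    tensor_parallel_size=1,       # number of GPUs to shard the model across
    max_model_length=2048,
    seed=1234,
    temperature=0.6,
)
```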
Creates a model configuration based on the provided arguments.
( use_chat_template: bool, override_batch_size: int, accelerator: typing.Optional[ForwardRef('Accelerator')], model_args: typing.Union[str, dict] = None, model_config_path: str = None ) → Union[BaseModelConfig, AdapterModelConfig, DeltaModelConfig, TGIModelConfig, InferenceEndpointModelConfig, DummyModelConfig]
Parameters
model_args (Union[str, dict], optional): the arguments used to build the model. The model can be a dummy model or a base model (using accelerate or no accelerator), in which case the full set of available model args are the arguments of the [[BaseModelConfig]]. The minimal configuration is pretrained=<name_of_the_model_on_the_hub>.
Returns
Union[BaseModelConfig, AdapterModelConfig, DeltaModelConfig, TGIModelConfig, InferenceEndpointModelConfig, DummyModelConfig]: the model configuration.
Raises
ValueError: if both an inference server address and model arguments are provided.
ValueError: if multichoice continuations are configured both to start with a space and not to start with a space.
ValueError: if a base model is not specified when using delta weights or adapter weights.
ValueError: if a base model is specified when not using delta weights or adapter weights.
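A hedged sketch of calling this factory follows. The function name create_model_config and its import path are assumptions (the page shows only the signature); the keyword arguments mirror that signature, and model_args uses the minimal string form described above.

```python
# Sketch only: function name and import path are assumptions.
from lighteval.models.model_config import create_model_config

config = create_model_config(
    use_chat_template=False,
    override_batch_size=8,
    accelerator=None,                              # or an accelerate Accelerator instance
    model_args="pretrained=gpt2,dtype=float16",    # minimal form: pretrained=<model_on_the_hub>
)
```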