In many cases, the architecture you want to use can be guessed from the name or path of the pretrained model you are supplying to the from_pretrained() method. AutoClasses exist to do this job for you: given the name/path of the pretrained model weights/configuration/vocabulary, they automatically retrieve the relevant model.
Instantiating one of AutoConfig, AutoModel, or AutoTokenizer directly creates a class of the relevant architecture. For instance,
model = AutoModel.from_pretrained("google-bert/bert-base-cased")
creates a model that is an instance of BertModel.
There is one class of AutoModel for each task, and one for each backend (PyTorch, TensorFlow, or Flax).
Each of the auto classes has a method that lets it be extended with your custom classes. For instance, if you have defined a custom model class NewModel, make sure you have a NewModelConfig, and then you can add them to the auto classes like this:
from transformers import AutoConfig, AutoModel
AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
You will then be able to use the auto classes as you usually would!
If your NewModelConfig is a subclass of PretrainedConfig, make sure its model_type attribute is set to the same key you use when registering the config (here "new-model").
Likewise, if your NewModel is a subclass of PreTrainedModel, make sure its config_class attribute is set to the same class you use when registering the model (here NewModelConfig).
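The registration calls above amount to maintaining a mapping from config classes to model classes. The following is a simplified, standard-library-only sketch of that mechanism, not the actual transformers implementation; AutoModelSketch and its registry are illustrative names only:

```python
# Minimal sketch of how an auto class dispatches on the config type.
# NewModelConfig/NewModel mirror the custom classes from the text above.

class PretrainedConfig:
    model_type = ""


class NewModelConfig(PretrainedConfig):
    model_type = "new-model"  # must match the key used at registration time


class NewModel:
    def __init__(self, config):
        self.config = config


class AutoModelSketch:
    _registry = {}  # maps a config class to its model class

    @classmethod
    def register(cls, config_class, model_class):
        cls._registry[config_class] = model_class

    @classmethod
    def from_config(cls, config):
        # Dispatch on the concrete type of the config object.
        model_class = cls._registry[type(config)]
        return model_class(config)


AutoModelSketch.register(NewModelConfig, NewModel)
model = AutoModelSketch.from_config(NewModelConfig())
print(type(model).__name__)  # NewModel
```

This is why the config's model_type and the registration key must stay in sync: the registry is keyed by the config class, and the config class is identified by its model_type.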
This is a generic configuration class that will be instantiated as one of the configuration classes of the library when created with the from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).
( pretrained_model_name_or_path **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co.
  - a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - a path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files and override the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object. If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the configuration classes of the library from a pretrained model configuration.
The configuration class to instantiate is selected based on the model_type property of the config object that is loaded, or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- AlbertConfig (ALBERT model)
- AlignConfig (ALIGN model)
- AriaConfig (Aria model)
- AriaTextConfig (AriaText model)
- ASTConfig (Audio Spectrogram Transformer model)
- BambaConfig (Bamba model)
- BarkConfig (Bark model)
- BeitConfig (BEiT model)
- BertGenerationConfig (Bert Generation model)
- BigBirdConfig (BigBird model)
- BigBirdPegasusConfig (BigBird-Pegasus model)
- BitConfig (BiT model)
- BlenderbotConfig (Blenderbot model)
- BlenderbotSmallConfig (BlenderbotSmall model)
- BloomConfig (BLOOM model)
- BridgeTowerConfig (BridgeTower model)
- BrosConfig (BROS model)
- CamembertConfig (CamemBERT model)
- CanineConfig (CANINE model)
- ChineseCLIPConfig (Chinese-CLIP model)
- ChineseCLIPVisionConfig (ChineseCLIPVisionModel model)
- ClapConfig (CLAP model)
- CLIPSegConfig (CLIPSeg model)
- ClvpConfig (CLVP model)
- CodeGenConfig (CodeGen model)
- Cohere2Config (Cohere2 model)
- ColPaliConfig (ColPali model)
- ConditionalDetrConfig (Conditional DETR model)
- ConvNextConfig (ConvNeXT model)
- ConvNextV2Config (ConvNeXTV2 model)
- CpmAntConfig (CPM-Ant model)
- CTRLConfig (CTRL model)
- CvtConfig (CvT model)
- DacConfig (DAC model)
- Data2VecAudioConfig (Data2VecAudio model)
- Data2VecTextConfig (Data2VecText model)
- Data2VecVisionConfig (Data2VecVision model)
- DecisionTransformerConfig (Decision Transformer model)
- DeformableDetrConfig (Deformable DETR model)
- DeiTConfig (DeiT model)
- DepthAnythingConfig (Depth Anything model)
- DetaConfig (DETA model)
- DetrConfig (DETR model)
- DiffLlamaConfig (DiffLlama model)
- DinatConfig (DiNAT model)
- Dinov2Config (DINOv2 model)
- Dinov2WithRegistersConfig (DINOv2 with Registers model)
- DistilBertConfig (DistilBERT model)
- DonutSwinConfig (DonutSwin model)
- DPRConfig (DPR model)
- DPTConfig (DPT model)
- EfficientFormerConfig (EfficientFormer model)
- EfficientNetConfig (EfficientNet model)
- ElectraConfig (ELECTRA model)
- EncodecConfig (EnCodec model)
- ErnieConfig (ERNIE model)
- ErnieMConfig (ErnieM model)
- FalconConfig (Falcon model)
- FalconMambaConfig (FalconMamba model)
- FastSpeech2ConformerConfig (FastSpeech2Conformer model)
- FlaubertConfig (FlauBERT model)
- FlavaConfig (FLAVA model)
- FNetConfig (FNet model)
- FocalNetConfig (FocalNet model)
- FSMTConfig (FairSeq Machine-Translation model)
- FunnelConfig (Funnel Transformer model)
- FuyuConfig (Fuyu model)
- GitConfig (GIT model)
- GlmConfig (GLM model)
- GLPNConfig (GLPN model)
- GPT2Config (GPT-Sw3 model)
- GPT2Config (OpenAI GPT-2 model)
- GPTBigCodeConfig (GPTBigCode model)
- GPTNeoConfig (GPT Neo model)
- GPTNeoXConfig (GPT NeoX model)
- GPTJConfig (GPT-J model)
- GPTSanJapaneseConfig (GPTSAN-japanese model)
- GraniteConfig (Granite model)
- GraniteMoeConfig (GraniteMoeMoe model)
- GroundingDinoConfig (Grounding DINO model)
- GroupViTConfig (GroupViT model)
- HieraConfig (Hiera model)
- HubertConfig (Hubert model)
- IBertConfig (I-BERT model)
- IdeficsConfig (IDEFICS model)
- Idefics2Config (Idefics2 model)
- Idefics3Config (Idefics3 model)
- Idefics3VisionConfig (Idefics3VisionTransformer model)
- IJepaConfig (I-JEPA model)
- ImageGPTConfig (ImageGPT model)
- InstructBlipConfig (InstructBLIP model)
- InstructBlipVideoConfig (InstructBlipVideo model)
- JambaConfig (Jamba model)
- JetMoeConfig (JetMoe model)
- JukeboxConfig (Jukebox model)
- Kosmos2Config (KOSMOS-2 model)
- LayoutLMConfig (LayoutLM model)
- LayoutLMv2Config (LayoutLMv2 model)
- LayoutLMv3Config (LayoutLMv3 model)
- LEDConfig (LED model)
- LevitConfig (LeViT model)
- LiltConfig (LiLT model)
- LlavaConfig (LLaVa model)
- LlavaNextConfig (LLaVA-NeXT model)
- LlavaNextVideoConfig (LLaVa-NeXT-Video model)
- LlavaOnevisionConfig (LLaVA-Onevision model)
- LongformerConfig (Longformer model)
- LongT5Config (LongT5 model)
- LukeConfig (LUKE model)
- LxmertConfig (LXMERT model)
- M2M100Config (M2M100 model)
- MarkupLMConfig (MarkupLM model)
- Mask2FormerConfig (Mask2Former model)
- MaskFormerConfig (MaskFormer model)
- MaskFormerSwinConfig (MaskFormerSwin model)
- MBartConfig (mBART model)
- MCTCTConfig (M-CTC-T model)
- MegaConfig (MEGA model)
- MegatronBertConfig (Megatron-BERT model)
- MgpstrConfig (MGP-STR model)
- MimiConfig (Mimi model)
- MixtralConfig (Mixtral model)
- MllamaConfig (Mllama model)
- MobileBertConfig (MobileBERT model)
- MobileNetV1Config (MobileNetV1 model)
- MobileNetV2Config (MobileNetV2 model)
- MobileViTConfig (MobileViT model)
- MobileViTV2Config (MobileViTV2 model)
- ModernBertConfig (ModernBERT model)
- MoshiConfig (Moshi model)
- MPNetConfig (MPNet model)
- MptConfig (MPT model)
- MraConfig (MRA model)
- MT5Config (MT5 model)
- MusicgenConfig (MusicGen model)
- MusicgenMelodyConfig (MusicGen Melody model)
- MvpConfig (MVP model)
- NatConfig (NAT model)
- NemotronConfig (Nemotron model)
- NezhaConfig (Nezha model)
- NllbMoeConfig (NLLB-MOE model)
- VisionEncoderDecoderConfig (Nougat model)
- NystromformerConfig (Nyströmformer model)
- OlmoConfig (OLMo model)
- Olmo2Config (OLMo2 model)
- OlmoeConfig (OLMoE model)
- OmDetTurboConfig (OmDet-Turbo model)
- OneFormerConfig (OneFormer model)
- OpenLlamaConfig (OpenLlama model)
- OPTConfig (OPT model)
- Owlv2Config (OWLv2 model)
- OwlViTConfig (OWL-ViT model)
- PegasusConfig (Pegasus model)
- PegasusXConfig (PEGASUS-X model)
- PerceiverConfig (Perceiver model)
- PersimmonConfig (Persimmon model)
- PhiConfig (Phi model)
- Phi3Config (Phi3 model)
- PhimoeConfig (Phimoe model)
- Pix2StructConfig (Pix2Struct model)
- PixtralVisionConfig (Pixtral model)
- PLBartConfig (PLBart model)
- PoolFormerConfig (PoolFormer model)
- Pop2PianoConfig (Pop2Piano model)
- ProphetNetConfig (ProphetNet model)
- PvtConfig (PVT model)
- PvtV2Config (PVTv2 model)
- QDQBertConfig (QDQBert model)
- Qwen2Config (Qwen2 model)
- Qwen2AudioConfig (Qwen2Audio model)
- Qwen2AudioEncoderConfig (Qwen2AudioEncoder model)
- Qwen2MoeConfig (Qwen2MoE model)
- Qwen2VLConfig (Qwen2VL model)
- RealmConfig (REALM model)
- RecurrentGemmaConfig (RecurrentGemma model)
- ReformerConfig (Reformer model)
- RegNetConfig (RegNet model)
- RemBertConfig (RemBERT model)
- ResNetConfig (ResNet model)
- RetriBertConfig (RetriBERT model)
- RobertaConfig (RoBERTa model)
- RobertaPreLayerNormConfig (RoBERTa-PreLayerNorm model)
- RoCBertConfig (RoCBert model)
- RoFormerConfig (RoFormer model)
- RTDetrConfig (RT-DETR model)
- RTDetrResNetConfig (RT-DETR-ResNet model)
- RwkvConfig (RWKV model)
- SamConfig (SAM model)
- SeamlessM4TConfig (SeamlessM4T model)
- SeamlessM4Tv2Config (SeamlessM4Tv2 model)
- SegformerConfig (SegFormer model)
- SegGptConfig (SegGPT model)
- SEWConfig (SEW model)
- SEWDConfig (SEW-D model)
- SiglipConfig (SigLIP model)
- SiglipVisionConfig (SiglipVisionModel model)
- SpeechEncoderDecoderConfig (Speech Encoder decoder model)
- Speech2TextConfig (Speech2Text model)
- Speech2Text2Config (Speech2Text2 model)
- SpeechT5Config (SpeechT5 model)
- SplinterConfig (Splinter model)
- SqueezeBertConfig (SqueezeBERT model)
- StableLmConfig (StableLm model)
- Starcoder2Config (Starcoder2 model)
- SuperPointConfig (SuperPoint model)
- SwiftFormerConfig (SwiftFormer model)
- SwitchTransformersConfig (SwitchTransformers model)
- T5Config (T5 model)
- TableTransformerConfig (Table Transformer model)
- TapasConfig (TAPAS model)
- TextNetConfig (TextNet model)
- TimmBackboneConfig (TimmBackbone model)
- TimmWrapperConfig (TimmWrapperModel model)
- TransfoXLConfig (Transformer-XL model)
- TrOCRConfig (TrOCR model)
- TvltConfig (TVLT model)
- TvpConfig (TVP model)
- UdopConfig (UDOP model)
- UMT5Config (UMT5 model)
- UniSpeechConfig (UniSpeech model)
- UniSpeechSatConfig (UniSpeechSat model)
- UnivNetConfig (UnivNet model)
- UperNetConfig (UPerNet model)
- VanConfig (VAN model)
- VideoLlavaConfig (VideoLlava model)
- VideoMAEConfig (VideoMAE model)
- ViltConfig (ViLT model)
- VipLlavaConfig (VipLlava model)
- VisionEncoderDecoderConfig (Vision Encoder decoder model)
- VisionTextDualEncoderConfig (VisionTextDualEncoder model)
- VisualBertConfig (VisualBERT model)
- ViTHybridConfig (ViT Hybrid model)
- ViTMAEConfig (ViTMAE model)
- ViTMSNConfig (ViTMSN model)
- VitDetConfig (VitDet model)
- VitMatteConfig (ViTMatte model)
- VitsConfig (VITS model)
- Wav2Vec2Config (Wav2Vec2 model)
- Wav2Vec2BertConfig (Wav2Vec2-BERT model)
- Wav2Vec2ConformerConfig (Wav2Vec2-Conformer model)
- WavLMConfig (WavLM model)
- XCLIPConfig (X-CLIP model)
- XGLMConfig (XGLM model)
- XLMConfig (XLM model)
- XLMProphetNetConfig (XLM-ProphetNet model)
- XLMRobertaConfig (XLM-RoBERTa model)
- XLMRobertaXLConfig (XLM-RoBERTa-XL model)
- XLNetConfig (XLNet model)
- XmodConfig (X-MOD model)
- YolosConfig (YOLOS model)
- YosoConfig (YOSO model)
- ZambaConfig (Zamba model)
- ZoeDepthConfig (ZoeDepth model)
Examples:
>>> from transformers import AutoConfig
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-uncased")
>>> # Download configuration from huggingface.co (user-uploaded) and cache.
>>> config = AutoConfig.from_pretrained("dbmdz/bert-base-german-cased")
>>> # If configuration file is in a directory (e.g., was saved using *save_pretrained('./test/saved_model/')*).
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/")
>>> # Load a specific configuration file.
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/my_configuration.json")
>>> # Change some config attributes when loading a pretrained config.
>>> config = AutoConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
>>> config.output_attentions
True
>>> config, unused_kwargs = AutoConfig.from_pretrained(
... "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
... )
>>> config.output_attentions
True
>>> unused_kwargs
{'foo': False}
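The selection rule described above (the model_type from the loaded config first, pattern matching on the name only as a fallback) can be sketched roughly as follows. This is an illustrative simplification, not the real AutoConfig internals; the mapping contents and the helper name are assumptions for the example:

```python
# Rough sketch of the dispatch: prefer the model_type recorded in the loaded
# config dict, fall back to substring matching on the repo name or path.
CONFIG_MAPPING = {
    "bert": "BertConfig",
    "gpt2": "GPT2Config",
    "albert": "AlbertConfig",
}


def resolve_config_class(name_or_path, config_dict):
    model_type = config_dict.get("model_type")
    if model_type is not None:
        return CONFIG_MAPPING[model_type]
    # Fallback: pattern matching on pretrained_model_name_or_path.
    # Longer patterns are tried first so "albert" wins over "bert".
    for pattern in sorted(CONFIG_MAPPING, key=len, reverse=True):
        if pattern in name_or_path.lower():
            return CONFIG_MAPPING[pattern]
    raise ValueError(f"Could not infer a config class for {name_or_path!r}")


print(resolve_config_class("google-bert/bert-base-uncased", {"model_type": "bert"}))  # BertConfig
print(resolve_config_class("my-albert-checkpoint", {}))  # AlbertConfig
```

The fallback explains why checkpoints without a model_type in their config still resolve, as long as the repo name contains a recognizable architecture name.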
( model_type config exist_ok = False )
Parameters
- model_type (str) — The model type like "bert" or "gpt".
- config (PretrainedConfig) — The config to register.
Register a new configuration for this class.
This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).
( pretrained_model_name_or_path *inputs **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - a string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co.
  - a path to a directory containing vocabulary files required by the tokenizer, e.g., ./my_model_directory/.
  - a path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file, e.g., ./my_model_directory/vocab.txt. (Not applicable to all derived classes)
- inputs (additional positional arguments, optional) — Will be passed along to the Tokenizer __init__() method.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files and override the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- subfolder (str, optional) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.
- use_fast (bool, optional, defaults to True) — Use a fast Rust-based tokenizer if it is supported for a given model. If a fast tokenizer is not available for a given model, a normal Python-based tokenizer is returned instead.
- tokenizer_type (str, optional) — Tokenizer type to be loaded.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.
Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.
The tokenizer class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- AlbertTokenizer or AlbertTokenizerFast (ALBERT model)
- BertGenerationTokenizer (Bert Generation model)
- BigBirdTokenizer or BigBirdTokenizerFast (BigBird model)
- PegasusTokenizer or PegasusTokenizerFast (BigBird-Pegasus model)
- BlenderbotTokenizer or BlenderbotTokenizerFast (Blenderbot model)
- BlenderbotSmallTokenizer (BlenderbotSmall model)
- GPT2Tokenizer or GPT2TokenizerFast (BLIP-2 model)
- BloomTokenizerFast (BLOOM model)
- RobertaTokenizer or RobertaTokenizerFast (BridgeTower model)
- ByT5Tokenizer (ByT5 model)
- CamembertTokenizer or CamembertTokenizerFast (CamemBERT model)
- CanineTokenizer (CANINE model)
- RobertaTokenizer or RobertaTokenizerFast (CLAP model)
- ClvpTokenizer (CLVP model)
- CodeLlamaTokenizer or CodeLlamaTokenizerFast (CodeLlama model)
- CodeGenTokenizer or CodeGenTokenizerFast (CodeGen model)
- CpmTokenizer or CpmTokenizerFast (CPM model)
- CpmAntTokenizer (CPM-Ant model)
- CTRLTokenizer (CTRL model)
- Wav2Vec2CTCTokenizer (Data2VecAudio model)
- RobertaTokenizer or RobertaTokenizerFast (Data2VecText model)
- GPT2Tokenizer or GPT2TokenizerFast (DBRX model)
- DistilBertTokenizer or DistilBertTokenizerFast (DistilBERT model)
- DPRQuestionEncoderTokenizer or DPRQuestionEncoderTokenizerFast (DPR model)
- ElectraTokenizer or ElectraTokenizerFast (ELECTRA model)
- ErnieMTokenizer (ErnieM model)
- PreTrainedTokenizerFast (Falcon model)
- GPTNeoXTokenizerFast (FalconMamba model)
- FlaubertTokenizer (FlauBERT model)
- FNetTokenizer or FNetTokenizerFast (FNet model)
- FSMTTokenizer (FairSeq Machine-Translation model)
- FunnelTokenizer or FunnelTokenizerFast (Funnel Transformer model)
- PreTrainedTokenizerFast (GLM model)
- GPTSw3Tokenizer (GPT-Sw3 model)
- GPT2Tokenizer or GPT2TokenizerFast (OpenAI GPT-2 model)
- GPT2Tokenizer or GPT2TokenizerFast (GPTBigCode model)
- GPT2Tokenizer or GPT2TokenizerFast (GPT Neo model)
- GPTNeoXTokenizerFast (GPT NeoX model)
- GPT2Tokenizer or GPT2TokenizerFast (GPT-J model)
- GPTSanJapaneseTokenizer (GPTSAN-japanese model)
- HerbertTokenizer or HerbertTokenizerFast (HerBERT model)
- Wav2Vec2CTCTokenizer (Hubert model)
- RobertaTokenizer or RobertaTokenizerFast (I-BERT model)
- GPT2Tokenizer or GPT2TokenizerFast (InstructBLIP model)
- GPT2Tokenizer or GPT2TokenizerFast (InstructBlipVideo model)
- JukeboxTokenizer (Jukebox model)
- XLMRobertaTokenizer or XLMRobertaTokenizerFast (KOSMOS-2 model)
- LayoutLMTokenizer or LayoutLMTokenizerFast (LayoutLM model)
- LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast (LayoutLMv2 model)
- LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LayoutLMv3 model)
- LayoutXLMTokenizer or LayoutXLMTokenizerFast (LayoutXLM model)
- LEDTokenizer or LEDTokenizerFast (LED model)
- LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LiLT model)
- LongformerTokenizer or LongformerTokenizerFast (Longformer model)
- T5Tokenizer or T5TokenizerFast (LongT5 model)
- LukeTokenizer (LUKE model)
- LxmertTokenizer or LxmertTokenizerFast (LXMERT model)
- M2M100Tokenizer (M2M100 model)
- GPTNeoXTokenizerFast (Mamba model)
- GPTNeoXTokenizerFast (mamba2 model)
- MBartTokenizer or MBartTokenizerFast (mBART model)
- MBart50Tokenizer or MBart50TokenizerFast (mBART-50 model)
- RobertaTokenizer or RobertaTokenizerFast (MEGA model)
- MgpstrTokenizer (MGP-STR model)
- MLukeTokenizer (mLUKE model)
- MobileBertTokenizer or MobileBertTokenizerFast (MobileBERT model)
- PreTrainedTokenizerFast (ModernBERT model)
- PreTrainedTokenizerFast (Moshi model)
- MPNetTokenizer or MPNetTokenizerFast (MPNet model)
- GPTNeoXTokenizerFast (MPT model)
- RobertaTokenizer or RobertaTokenizerFast (MRA model)
- MT5Tokenizer or MT5TokenizerFast (MT5 model)
- T5Tokenizer or T5TokenizerFast (MusicGen model)
- T5Tokenizer or T5TokenizerFast (MusicGen Melody model)
- MvpTokenizer or MvpTokenizerFast (MVP model)
- MyT5Tokenizer (myt5 model)
- NllbTokenizer or NllbTokenizerFast (NLLB model)
- NllbTokenizer or NllbTokenizerFast (NLLB-MOE model)
- AlbertTokenizer or AlbertTokenizerFast (Nyströmformer model)
- GPTNeoXTokenizerFast (OLMo model)
- GPTNeoXTokenizerFast (OLMo2 model)
- GPTNeoXTokenizerFast (OLMoE model)
- GPT2Tokenizer or GPT2TokenizerFast (OPT model)
- PegasusTokenizer or PegasusTokenizerFast (Pegasus model)
- PegasusTokenizer or PegasusTokenizerFast (PEGASUS-X model)
- PerceiverTokenizer (Perceiver model)
- CodeGenTokenizer or CodeGenTokenizerFast (Phi model)
- PhobertTokenizer (PhoBERT model)
- T5Tokenizer or T5TokenizerFast (Pix2Struct model)
- PreTrainedTokenizerFast (Pixtral model)
- PLBartTokenizer (PLBart model)
- ProphetNetTokenizer (ProphetNet model)
- Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2 model)
- Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2Audio model)
- Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2MoE model)
- Qwen2Tokenizer or Qwen2TokenizerFast (Qwen2VL model)
- RealmTokenizer or RealmTokenizerFast (REALM model)
- ReformerTokenizer or ReformerTokenizerFast (Reformer model)
- RemBertTokenizer or RemBertTokenizerFast (RemBERT model)
- RetriBertTokenizer or RetriBertTokenizerFast (RetriBERT model)
- RobertaTokenizer or RobertaTokenizerFast (RoBERTa model)
- RobertaTokenizer or RobertaTokenizerFast (RoBERTa-PreLayerNorm model)
- RoCBertTokenizer (RoCBert model)
- RoFormerTokenizer or RoFormerTokenizerFast (RoFormer model)
- GPTNeoXTokenizerFast (RWKV model)
- SeamlessM4TTokenizer or SeamlessM4TTokenizerFast (SeamlessM4T model)
- SeamlessM4TTokenizer or SeamlessM4TTokenizerFast (SeamlessM4Tv2 model)
- SiglipTokenizer (SigLIP model)
- Speech2TextTokenizer (Speech2Text model)
- Speech2Text2Tokenizer (Speech2Text2 model)
- SpeechT5Tokenizer (SpeechT5 model)
- SplinterTokenizer or SplinterTokenizerFast (Splinter model)
- SqueezeBertTokenizer or SqueezeBertTokenizerFast (SqueezeBERT model)
- GPTNeoXTokenizerFast (StableLm model)
- GPT2Tokenizer or GPT2TokenizerFast (Starcoder2 model)
- T5Tokenizer or T5TokenizerFast (SwitchTransformers model)
- T5Tokenizer or T5TokenizerFast (T5 model)
- TapasTokenizer (TAPAS model)
- TapexTokenizer (TAPEX model)
- TransfoXLTokenizer (Transformer-XL model)
- UdopTokenizer or UdopTokenizerFast (UDOP model)
- T5Tokenizer or T5TokenizerFast (UMT5 model)
- VitsTokenizer (VITS model)
- Wav2Vec2CTCTokenizer (Wav2Vec2 model)
- Wav2Vec2CTCTokenizer (Wav2Vec2-BERT model)
- Wav2Vec2CTCTokenizer (Wav2Vec2-Conformer model)
- Wav2Vec2PhonemeCTCTokenizer (Wav2Vec2Phoneme model)
- XGLMTokenizer or XGLMTokenizerFast (XGLM model)
- XLMTokenizer (XLM model)
- XLMProphetNetTokenizer (XLM-ProphetNet model)
- XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa model)
- XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa-XL model)
- XLNetTokenizer or XLNetTokenizerFast (XLNet model)
- XLMRobertaTokenizer or XLMRobertaTokenizerFast (X-MOD model)
- AlbertTokenizer or AlbertTokenizerFast (YOSO model)
Examples:
>>> from transformers import AutoTokenizer
>>> # Download vocabulary from huggingface.co and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> # Download vocabulary from huggingface.co (user-uploaded) and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
>>> # If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*)
>>> # tokenizer = AutoTokenizer.from_pretrained("./test/bert_saved_model/")
>>> # Download vocabulary from huggingface.co and define model-specific arguments
>>> tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base", add_prefix_space=True)
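The use_fast behavior documented above (fast Rust-based tokenizer preferred, Python-based fallback when no fast class exists) amounts to a selection rule like the following sketch. The tuple registry and helper name are illustrative assumptions, not the real AutoTokenizer internals:

```python
# Sketch of the slow/fast tokenizer selection behind use_fast=True:
# each model type maps to a (slow, fast) class-name pair; fast may be None.
TOKENIZER_MAPPING = {
    "bert": ("BertTokenizer", "BertTokenizerFast"),
    "luke": ("LukeTokenizer", None),  # no fast tokenizer available
}


def select_tokenizer_class(model_type, use_fast=True):
    slow, fast = TOKENIZER_MAPPING[model_type]
    # Prefer the fast tokenizer; silently fall back to the Python one.
    if use_fast and fast is not None:
        return fast
    return slow


print(select_tokenizer_class("bert"))                  # BertTokenizerFast
print(select_tokenizer_class("bert", use_fast=False))  # BertTokenizer
print(select_tokenizer_class("luke"))                  # LukeTokenizer
```

This is why use_fast=True is a preference rather than a guarantee: models like LUKE in the list above only ship a slow tokenizer, and the call still succeeds.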
( config_class slow_tokenizer_class = None fast_tokenizer_class = None exist_ok = False )
Parameters
- config_class (PretrainedConfig) — The configuration corresponding to the model to register.
- slow_tokenizer_class (PretrainedTokenizer, optional) — The slow tokenizer to register.
- fast_tokenizer_class (PretrainedTokenizerFast, optional) — The fast tokenizer to register.
Register a new tokenizer in this mapping.
This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the library when created with the AutoFeatureExtractor.from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).
( pretrained_model_name_or_path **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — This can be either:
  - a string, the model id of a pretrained feature extractor hosted inside a model repo on huggingface.co.
  - a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) — Whether or not to force to (re-)download the feature extractor files and override the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final feature extractor object. If True, then this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the feature extractor classes of the library from a pretrained model vocabulary.
The feature extractor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- ASTFeatureExtractor (Audio Spectrogram Transformer model)
- BeitFeatureExtractor (BEiT model)
- ChineseCLIPFeatureExtractor (Chinese-CLIP model)
- ClapFeatureExtractor (CLAP model)
- ClvpFeatureExtractor (CLVP model)
- ConditionalDetrFeatureExtractor (Conditional DETR model)
- ConvNextFeatureExtractor (ConvNeXT model)
- ConvNextFeatureExtractor (CvT model)
- DacFeatureExtractor (DAC model)
- Wav2Vec2FeatureExtractor (Data2VecAudio model)
- BeitFeatureExtractor (Data2VecVision model)
- DeformableDetrFeatureExtractor (Deformable DETR model)
- DeiTFeatureExtractor (DeiT model)
- DetrFeatureExtractor (DETR model)
- DonutFeatureExtractor (DonutSwin model)
- DPTFeatureExtractor (DPT model)
- EncodecFeatureExtractor (EnCodec model)
- FlavaFeatureExtractor (FLAVA model)
- GLPNFeatureExtractor (GLPN model)
- Wav2Vec2FeatureExtractor (Hubert model)
- ImageGPTFeatureExtractor (ImageGPT model)
- LayoutLMv2FeatureExtractor (LayoutLMv2 model)
- LayoutLMv3FeatureExtractor (LayoutLMv3 model)
- LevitFeatureExtractor (LeViT model)
- MaskFormerFeatureExtractor (MaskFormer model)
- MCTCTFeatureExtractor (M-CTC-T model)
- EncodecFeatureExtractor (Mimi model)
- MobileNetV1FeatureExtractor (MobileNetV1 model)
- MobileNetV2FeatureExtractor (MobileNetV2 model)
- MobileViTFeatureExtractor (MobileViT model)
- EncodecFeatureExtractor (Moshi model)
- OwlViTFeatureExtractor (OWL-ViT model)
- PerceiverFeatureExtractor (Perceiver model)
- PoolFormerFeatureExtractor (PoolFormer model)
- Pop2PianoFeatureExtractor (Pop2Piano model)
- ConvNextFeatureExtractor (RegNet model)
- ConvNextFeatureExtractor (ResNet model)
- SeamlessM4TFeatureExtractor (SeamlessM4T model)
- SeamlessM4TFeatureExtractor (SeamlessM4Tv2 model)
- SegformerFeatureExtractor (SegFormer model)
- Wav2Vec2FeatureExtractor (SEW model)
- Wav2Vec2FeatureExtractor (SEW-D model)
- Speech2TextFeatureExtractor (Speech2Text model)
- SpeechT5FeatureExtractor (SpeechT5 model)
- DetrFeatureExtractor (Table Transformer model)
- VideoMAEFeatureExtractor (TimeSformer model)
- TvltFeatureExtractor (TVLT model)
- Wav2Vec2FeatureExtractor (UniSpeech model)
- Wav2Vec2FeatureExtractor (UniSpeechSat model)
- UnivNetFeatureExtractor (UnivNet model)
- ConvNextFeatureExtractor (VAN model)
- VideoMAEFeatureExtractor (VideoMAE model)
- ViltFeatureExtractor (ViLT model)
- Wav2Vec2FeatureExtractor (Wav2Vec2 model)
- Wav2Vec2FeatureExtractor (Wav2Vec2-BERT model)
- Wav2Vec2FeatureExtractor (Wav2Vec2-Conformer model)
- Wav2Vec2FeatureExtractor (WavLM model)
- YolosFeatureExtractor (YOLOS model)
Passing token=True is required when you want to use a private model.
Examples:
>>> from transformers import AutoFeatureExtractor
>>> # Download feature extractor from huggingface.co and cache.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
>>> # If feature extractor files are in a directory (e.g. feature extractor was saved using *save_pretrained('./test/saved_model/')*)
>>> # feature_extractor = AutoFeatureExtractor.from_pretrained("./test/saved_model/")
( config_class feature_extractor_class exist_ok = False )
Parameters
- config_class (PretrainedConfig) — The configuration corresponding to the model to register.
- feature_extractor_class (FeatureExtractorMixin) — The feature extractor to register.
Register a new feature extractor for this class.
This is a generic image processor class that will be instantiated as one of the image processor classes of the library when created with the AutoImageProcessor.from_pretrained() class method.
This class cannot be instantiated directly using __init__() (throws an error).
( pretrained_model_name_or_path *inputs **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — This can be either:
  - a string, the model id of a pretrained image processor hosted inside a model repo on huggingface.co.
  - a path to a directory containing an image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - a path or url to a saved image processor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) — Whether or not to force to (re-)download the image processor files and override the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- use_fast (bool, optional, defaults to False) — Use a fast torchvision-based image processor if it is supported for a given model. If a fast image processor is not available for a given model, a normal numpy-based image processor is returned instead.
- return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final image processor object. If True, then this function returns a Tuple(image_processor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not image processor attributes: i.e., the part of kwargs which has not been used to update image_processor and is otherwise ignored.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- image_processor_filename (str, optional, defaults to "config.json") — The name of the file in the model directory to use for the image processor config.
- kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are image processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not image processor attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the image processor classes of the library from a pretrained model vocabulary.
The image processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- EfficientNetImageProcessor (ALIGN model)
- AriaImageProcessor (Aria model)
- BeitImageProcessor (BEiT model)
- BitImageProcessor (BiT model)
- BridgeTowerImageProcessor (BridgeTower model)
- ChineseCLIPImageProcessor (Chinese-CLIP model)
- ConditionalDetrImageProcessor (Conditional DETR model)
- ConvNextImageProcessor (ConvNeXT model)
- ConvNextImageProcessor (ConvNeXTV2 model)
- ConvNextImageProcessor (CvT model)
- BeitImageProcessor (Data2VecVision model)
- DeformableDetrImageProcessor or DeformableDetrImageProcessorFast (Deformable DETR model)
- DeiTImageProcessor (DeiT model)
- DPTImageProcessor (Depth Anything model)
- DetaImageProcessor (DETA model)
- DetrImageProcessor or DetrImageProcessorFast (DETR model)
- BitImageProcessor (DINOv2 model)
- DonutImageProcessor (DonutSwin model)
- DPTImageProcessor (DPT model)
- EfficientFormerImageProcessor (EfficientFormer model)
- EfficientNetImageProcessor (EfficientNet model)
- FlavaImageProcessor (FLAVA model)
- BitImageProcessor (FocalNet model)
- FuyuImageProcessor (Fuyu model)
- GLPNImageProcessor (GLPN model)
- GroundingDinoImageProcessor (Grounding DINO model)
- BitImageProcessor (Hiera model)
- IdeficsImageProcessor (IDEFICS model)
- Idefics2ImageProcessor (Idefics2 model)
- Idefics3ImageProcessor (Idefics3 model)
- ImageGPTImageProcessor (ImageGPT model)
- InstructBlipVideoImageProcessor (InstructBlipVideo model)
- LayoutLMv2ImageProcessor (LayoutLMv2 model)
- LayoutLMv3ImageProcessor (LayoutLMv3 model)
- LevitImageProcessor (LeViT model)
- LlavaNextImageProcessor (LLaVA-NeXT model)
- LlavaNextVideoImageProcessor (LLaVa-NeXT-Video model)
- LlavaOnevisionImageProcessor (LLaVA-Onevision model)
- Mask2FormerImageProcessor (Mask2Former model)
- MaskFormerImageProcessor (MaskFormer model)
- MllamaImageProcessor (Mllama model)
- MobileNetV1ImageProcessor (MobileNetV1 model)
- MobileNetV2ImageProcessor (MobileNetV2 model)
- MobileViTImageProcessor (MobileViT model)
- MobileViTImageProcessor (MobileViTV2 model)
- NougatImageProcessor (Nougat model)
- OneFormerImageProcessor (OneFormer model)
- Owlv2ImageProcessor (OWLv2 model)
- OwlViTImageProcessor (OWL-ViT model)
- SiglipImageProcessor (PaliGemma model)
- PerceiverImageProcessor (Perceiver model)
- Pix2StructImageProcessor (Pix2Struct model)
- PixtralImageProcessor or PixtralImageProcessorFast (Pixtral model)
- PoolFormerImageProcessor (PoolFormer model)
- PvtImageProcessor (PVT model)
- PvtImageProcessor (PVTv2 model)
- Qwen2VLImageProcessor (Qwen2VL model)
- ConvNextImageProcessor (RegNet model)
- ConvNextImageProcessor (ResNet model)
- RTDetrImageProcessor or RTDetrImageProcessorFast (RT-DETR model)
- SamImageProcessor (SAM model)
- SegformerImageProcessor (SegFormer model)
- SegGptImageProcessor
(SegGPT model)SiglipImageProcessor
(SigLIP model)DetrImageProcessor
(Table Transformer model)VideoMAEImageProcessor
(TimeSformer model)TimmWrapperImageProcessor
(TimmWrapperModel model)TvltImageProcessor
(TVLT model)TvpImageProcessor
(TVP model)LayoutLMv3ImageProcessor
(UDOP model)SegformerImageProcessor
(UPerNet model)ConvNextImageProcessor
(VAN model)VideoMAEImageProcessor
(VideoMAE model)ViltImageProcessor
(ViLT model)ViTHybridImageProcessor
(ViT Hybrid model)VitMatteImageProcessor
(ViTMatte model)YolosImageProcessor
(YOLOS model)ZoeDepthImageProcessor
(ZoeDepth model)
Passing token=True is required when you want to use a private model.
Examples:
>>> from transformers import AutoImageProcessor
>>> # Download image processor from huggingface.co and cache.
>>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
>>> # If image processor files are in a directory (e.g. image processor was saved using *save_pretrained('./test/saved_model/')*)
>>> # image_processor = AutoImageProcessor.from_pretrained("./test/saved_model/")
( config_class image_processor_class = None slow_image_processor_class = None fast_image_processor_class = None exist_ok = False )
Parameters
Register a new image processor for this class.
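The registration mechanism above can be sketched with a plain dictionary mapping configuration classes to processor classes. This is a simplified stand-in for what the Auto classes do internally; all names here (`_REGISTRY`, `register`, `processor_for`, the `NewModel*` classes) are illustrative, not transformers APIs:

```python
# Minimal sketch of the Auto-class registry pattern: a mapping from
# configuration classes to their processor classes, plus a resolver.
# All names here are illustrative stand-ins, not transformers APIs.

_REGISTRY = {}

def register(config_class, processor_class, exist_ok=False):
    # Refuse to silently overwrite an existing mapping, mirroring exist_ok.
    if config_class in _REGISTRY and not exist_ok:
        raise ValueError(f"{config_class.__name__} is already registered")
    _REGISTRY[config_class] = processor_class

def processor_for(config):
    # Dispatch on the *type* of the config object, like the Auto classes
    # dispatch on the config's class when selecting a concrete processor.
    try:
        return _REGISTRY[type(config)]
    except KeyError:
        raise ValueError(f"No processor registered for {type(config).__name__}")

class NewModelConfig:
    pass

class NewModelImageProcessor:
    pass

register(NewModelConfig, NewModelImageProcessor)
print(processor_for(NewModelConfig()).__name__)  # NewModelImageProcessor
```

After a `register()` call like this, the custom pair resolves just like a built-in one, which is the behavior the real `AutoImageProcessor.register()` provides.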
This is a generic processor class that will be instantiated as one of the processor classes of the library when created with the AutoProcessor.from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( pretrained_model_name_or_path **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
- a path to a directory containing a feature extractor saved using the save_pretrained() method, e.g., ./my_model_directory/.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the feature extractor files and override the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
return_unused_kwargs (bool, optional, defaults to False) —
If False, then this function returns just the final feature extractor object. If True, then this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (Dict[str, Any], optional) —
The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the processor classes of the library from a pretrained model vocabulary.
The processor class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible):
AlignProcessor
(ALIGN model)AriaProcessor
(Aria model)BarkProcessor
(Bark model)BridgeTowerProcessor
(BridgeTower model)ChineseCLIPProcessor
(Chinese-CLIP model)ClapProcessor
(CLAP model)CLIPSegProcessor
(CLIPSeg model)ClvpProcessor
(CLVP model)ColPaliProcessor
(ColPali model)FlavaProcessor
(FLAVA model)FuyuProcessor
(Fuyu model)GitProcessor
(GIT model)GroundingDinoProcessor
(Grounding DINO model)Wav2Vec2Processor
(Hubert model)IdeficsProcessor
(IDEFICS model)Idefics2Processor
(Idefics2 model)Idefics3Processor
(Idefics3 model)InstructBlipProcessor
(InstructBLIP model)InstructBlipVideoProcessor
(InstructBlipVideo model)Kosmos2Processor
(KOSMOS-2 model)LayoutLMv2Processor
(LayoutLMv2 model)LayoutLMv3Processor
(LayoutLMv3 model)LlavaProcessor
(LLaVa model)LlavaNextProcessor
(LLaVA-NeXT model)LlavaNextVideoProcessor
(LLaVa-NeXT-Video model)LlavaOnevisionProcessor
(LLaVA-Onevision model)MarkupLMProcessor
(MarkupLM model)MCTCTProcessor
(M-CTC-T model)MgpstrProcessor
(MGP-STR model)MllamaProcessor
(Mllama model)OneFormerProcessor
(OneFormer model)Owlv2Processor
(OWLv2 model)OwlViTProcessor
(OWL-ViT model)Pix2StructProcessor
(Pix2Struct model)PixtralProcessor
(Pixtral model)Pop2PianoProcessor
(Pop2Piano model)Qwen2AudioProcessor
(Qwen2Audio model)Qwen2VLProcessor
(Qwen2VL model)SamProcessor
(SAM model)SeamlessM4TProcessor
(SeamlessM4T model)Wav2Vec2Processor
(SEW model)Wav2Vec2Processor
(SEW-D model)SiglipProcessor
(SigLIP model)Speech2TextProcessor
(Speech2Text model)Speech2Text2Processor
(Speech2Text2 model)SpeechT5Processor
(SpeechT5 model)TrOCRProcessor
(TrOCR model)TvltProcessor
(TVLT model)TvpProcessor
(TVP model)UdopProcessor
(UDOP model)Wav2Vec2Processor
(UniSpeech model)Wav2Vec2Processor
(UniSpeechSat model)VideoLlavaProcessor
(VideoLlava model)ViltProcessor
(ViLT model)LlavaProcessor
(VipLlava model)VisionTextDualEncoderProcessor
(VisionTextDualEncoder model)Wav2Vec2Processor
(Wav2Vec2 model)Wav2Vec2Processor
(Wav2Vec2-BERT model)Wav2Vec2Processor
(Wav2Vec2-Conformer model)Wav2Vec2Processor
(WavLM model)XCLIPProcessor
(X-CLIP model)
Passing token=True is required when you want to use a private model.
Examples:
>>> from transformers import AutoProcessor
>>> # Download processor from huggingface.co and cache.
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> # If processor files are in a directory (e.g. processor was saved using *save_pretrained('./test/saved_model/')*)
>>> # processor = AutoProcessor.from_pretrained("./test/saved_model/")
( config_class processor_class exist_ok = False )
Parameters
processor_class (FeatureExtractorMixin) — The processor to register.
Register a new processor for this class.
The following auto classes can be used to instantiate a base model class without a specific head.
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
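The "throws an error" behavior can be illustrated with a small sketch. This mirrors the pattern only, not the actual transformers code; the class name `AutoModelSketch` is made up for illustration:

```python
# Sketch of the "cannot instantiate directly" pattern used by the Auto
# classes: __init__ raises, and construction goes through a classmethod.
# Illustrative only -- not the transformers implementation.

class AutoModelSketch:
    def __init__(self):
        raise EnvironmentError(
            "AutoModelSketch is designed to be instantiated using the "
            "`AutoModelSketch.from_config(config)` method."
        )

    @classmethod
    def from_config(cls, config):
        # Bypass __init__ and attach the config; a real Auto class would
        # look up and construct the concrete model class here instead.
        obj = cls.__new__(cls)
        obj.config = config
        return obj

m = AutoModelSketch.from_config({"model_type": "bert"})
print(m.config["model_type"])  # bert
```

Forcing construction through the classmethod is what lets the Auto classes return an instance of a *different*, concrete class than the one you called.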
( **kwargs )
Parameters
ASTConfig
configuration class: ASTModel
(Audio Spectrogram Transformer model)AlbertConfig
configuration class: AlbertModel
(ALBERT model)AlignConfig
configuration class: AlignModel
(ALIGN model)AriaConfig
configuration class: AriaForConditionalGeneration
(Aria model)AriaTextConfig
configuration class: AriaTextModel
(AriaText model)BambaConfig
configuration class: BambaModel
(Bamba model)BarkConfig
configuration class: BarkModel
(Bark model)BeitConfig
configuration class: BeitModel
(BEiT model)BertGenerationConfig
configuration class: BertGenerationEncoder
(Bert Generation model)BigBirdConfig
configuration class: BigBirdModel
(BigBird model)BigBirdPegasusConfig
configuration class: BigBirdPegasusModel
(BigBird-Pegasus model)BitConfig
configuration class: BitModel
(BiT model)BlenderbotConfig
configuration class: BlenderbotModel
(Blenderbot model)BlenderbotSmallConfig
configuration class: BlenderbotSmallModel
(BlenderbotSmall model)BloomConfig
configuration class: BloomModel
(BLOOM model)BridgeTowerConfig
configuration class: BridgeTowerModel
(BridgeTower model)BrosConfig
configuration class: BrosModel
(BROS model)CLIPSegConfig
configuration class: CLIPSegModel
(CLIPSeg model)CTRLConfig
configuration class: CTRLModel
(CTRL model)CamembertConfig
configuration class: CamembertModel
(CamemBERT model)CanineConfig
configuration class: CanineModel
(CANINE model)ChineseCLIPConfig
configuration class: ChineseCLIPModel
(Chinese-CLIP model)ChineseCLIPVisionConfig
configuration class: ChineseCLIPVisionModel
(ChineseCLIPVisionModel model)ClapConfig
configuration class: ClapModel
(CLAP model)ClvpConfig
configuration class: ClvpModelForConditionalGeneration
(CLVP model)CodeGenConfig
configuration class: CodeGenModel
(CodeGen model)Cohere2Config
configuration class: Cohere2Model
(Cohere2 model)ConditionalDetrConfig
configuration class: ConditionalDetrModel
(Conditional DETR model)ConvNextConfig
configuration class: ConvNextModel
(ConvNeXT model)ConvNextV2Config
configuration class: ConvNextV2Model
(ConvNeXTV2 model)CpmAntConfig
configuration class: CpmAntModel
(CPM-Ant model)CvtConfig
configuration class: CvtModel
(CvT model)DPRConfig
configuration class: DPRQuestionEncoder
(DPR model)DPTConfig
configuration class: DPTModel
(DPT model)DacConfig
configuration class: DacModel
(DAC model)Data2VecAudioConfig
configuration class: Data2VecAudioModel
(Data2VecAudio model)Data2VecTextConfig
configuration class: Data2VecTextModel
(Data2VecText model)Data2VecVisionConfig
configuration class: Data2VecVisionModel
(Data2VecVision model)DecisionTransformerConfig
configuration class: DecisionTransformerModel
(Decision Transformer model)DeformableDetrConfig
configuration class: DeformableDetrModel
(Deformable DETR model)DeiTConfig
configuration class: DeiTModel
(DeiT model)DetaConfig
configuration class: DetaModel
(DETA model)DetrConfig
configuration class: DetrModel
(DETR model)DiffLlamaConfig
configuration class: DiffLlamaModel
(DiffLlama model)DinatConfig
configuration class: DinatModel
(DiNAT model)Dinov2Config
configuration class: Dinov2Model
(DINOv2 model)Dinov2WithRegistersConfig
configuration class: Dinov2WithRegistersModel
(DINOv2 with Registers model)DistilBertConfig
configuration class: DistilBertModel
(DistilBERT model)DonutSwinConfig
configuration class: DonutSwinModel
(DonutSwin model)EfficientFormerConfig
configuration class: EfficientFormerModel
(EfficientFormer model)EfficientNetConfig
configuration class: EfficientNetModel
(EfficientNet model)ElectraConfig
configuration class: ElectraModel
(ELECTRA model)EncodecConfig
configuration class: EncodecModel
(EnCodec model)ErnieConfig
configuration class: ErnieModel
(ERNIE model)ErnieMConfig
configuration class: ErnieMModel
(ErnieM model)FNetConfig
configuration class: FNetModel
(FNet model)FSMTConfig
configuration class: FSMTModel
(FairSeq Machine-Translation model)FalconConfig
configuration class: FalconModel
(Falcon model)FalconMambaConfig
configuration class: FalconMambaModel
(FalconMamba model)FastSpeech2ConformerConfig
configuration class: FastSpeech2ConformerModel
(FastSpeech2Conformer model)FlaubertConfig
configuration class: FlaubertModel
(FlauBERT model)FlavaConfig
configuration class: FlavaModel
(FLAVA model)FocalNetConfig
configuration class: FocalNetModel
(FocalNet model)FunnelConfig
configuration class: FunnelModel
or FunnelBaseModel
(Funnel Transformer model)GLPNConfig
configuration class: GLPNModel
(GLPN model)GPT2Config
configuration class: GPT2Model
(OpenAI GPT-2 model)GPTBigCodeConfig
configuration class: GPTBigCodeModel
(GPTBigCode model)GPTJConfig
configuration class: GPTJModel
(GPT-J model)GPTNeoConfig
configuration class: GPTNeoModel
(GPT Neo model)GPTNeoXConfig
configuration class: GPTNeoXModel
(GPT NeoX model)GPTSanJapaneseConfig
configuration class: GPTSanJapaneseForConditionalGeneration
(GPTSAN-japanese model)GitConfig
configuration class: GitModel
(GIT model)GlmConfig
configuration class: GlmModel
(GLM model)GraniteConfig
configuration class: GraniteModel
(Granite model)GraniteMoeConfig
configuration class: GraniteMoeModel
(GraniteMoeMoe model)GroundingDinoConfig
configuration class: GroundingDinoModel
(Grounding DINO model)GroupViTConfig
configuration class: GroupViTModel
(GroupViT model)HieraConfig
configuration class: HieraModel
(Hiera model)HubertConfig
configuration class: HubertModel
(Hubert model)IBertConfig
configuration class: IBertModel
(I-BERT model)IJepaConfig
configuration class: IJepaModel
(I-JEPA model)Idefics2Config
configuration class: Idefics2Model
(Idefics2 model)Idefics3Config
configuration class: Idefics3Model
(Idefics3 model)Idefics3VisionConfig
configuration class: Idefics3VisionTransformer
(Idefics3VisionTransformer model)IdeficsConfig
configuration class: IdeficsModel
(IDEFICS model)ImageGPTConfig
configuration class: ImageGPTModel
(ImageGPT model)JambaConfig
configuration class: JambaModel
(Jamba model)JetMoeConfig
configuration class: JetMoeModel
(JetMoe model)JukeboxConfig
configuration class: JukeboxModel
(Jukebox model)Kosmos2Config
configuration class: Kosmos2Model
(KOSMOS-2 model)LEDConfig
configuration class: LEDModel
(LED model)LayoutLMConfig
configuration class: LayoutLMModel
(LayoutLM model)LayoutLMv2Config
configuration class: LayoutLMv2Model
(LayoutLMv2 model)LayoutLMv3Config
configuration class: LayoutLMv3Model
(LayoutLMv3 model)LevitConfig
configuration class: LevitModel
(LeViT model)LiltConfig
configuration class: LiltModel
(LiLT model)LongT5Config
configuration class: LongT5Model
(LongT5 model)LongformerConfig
configuration class: LongformerModel
(Longformer model)LukeConfig
configuration class: LukeModel
(LUKE model)LxmertConfig
configuration class: LxmertModel
(LXMERT model)M2M100Config
configuration class: M2M100Model
(M2M100 model)MBartConfig
configuration class: MBartModel
(mBART model)MCTCTConfig
configuration class: MCTCTModel
(M-CTC-T model)MPNetConfig
configuration class: MPNetModel
(MPNet model)MT5Config
configuration class: MT5Model
(MT5 model)MarkupLMConfig
configuration class: MarkupLMModel
(MarkupLM model)Mask2FormerConfig
configuration class: Mask2FormerModel
(Mask2Former model)MaskFormerConfig
configuration class: MaskFormerModel
(MaskFormer model)MaskFormerSwinConfig
configuration class: MaskFormerSwinModel
(MaskFormerSwin model)MegaConfig
configuration class: MegaModel
(MEGA model)MegatronBertConfig
configuration class: MegatronBertModel
(Megatron-BERT model)MgpstrConfig
configuration class: MgpstrForSceneTextRecognition
(MGP-STR model)MimiConfig
configuration class: MimiModel
(Mimi model)MixtralConfig
configuration class: MixtralModel
(Mixtral model)MobileBertConfig
configuration class: MobileBertModel
(MobileBERT model)MobileNetV1Config
configuration class: MobileNetV1Model
(MobileNetV1 model)MobileNetV2Config
configuration class: MobileNetV2Model
(MobileNetV2 model)MobileViTConfig
configuration class: MobileViTModel
(MobileViT model)MobileViTV2Config
configuration class: MobileViTV2Model
(MobileViTV2 model)ModernBertConfig
configuration class: ModernBertModel
(ModernBERT model)MoshiConfig
configuration class: MoshiModel
(Moshi model)MptConfig
configuration class: MptModel
(MPT model)MraConfig
configuration class: MraModel
(MRA model)MusicgenConfig
configuration class: MusicgenModel
(MusicGen model)MusicgenMelodyConfig
configuration class: MusicgenMelodyModel
(MusicGen Melody model)MvpConfig
configuration class: MvpModel
(MVP model)NatConfig
configuration class: NatModel
(NAT model)NemotronConfig
configuration class: NemotronModel
(Nemotron model)NezhaConfig
configuration class: NezhaModel
(Nezha model)NllbMoeConfig
configuration class: NllbMoeModel
(NLLB-MOE model)NystromformerConfig
configuration class: NystromformerModel
(Nyströmformer model)OPTConfig
configuration class: OPTModel
(OPT model)Olmo2Config
configuration class: Olmo2Model
(OLMo2 model)OlmoConfig
configuration class: OlmoModel
(OLMo model)OlmoeConfig
configuration class: OlmoeModel
(OLMoE model)OmDetTurboConfig
configuration class: OmDetTurboForObjectDetection
(OmDet-Turbo model)OneFormerConfig
configuration class: OneFormerModel
(OneFormer model)OpenLlamaConfig
configuration class: OpenLlamaModel
(OpenLlama model)OwlViTConfig
configuration class: OwlViTModel
(OWL-ViT model)Owlv2Config
configuration class: Owlv2Model
(OWLv2 model)PLBartConfig
configuration class: PLBartModel
(PLBart model)PegasusConfig
configuration class: PegasusModel
(Pegasus model)PegasusXConfig
configuration class: PegasusXModel
(PEGASUS-X model)PerceiverConfig
configuration class: PerceiverModel
(Perceiver model)PersimmonConfig
configuration class: PersimmonModel
(Persimmon model)Phi3Config
configuration class: Phi3Model
(Phi3 model)PhiConfig
configuration class: PhiModel
(Phi model)PhimoeConfig
configuration class: PhimoeModel
(Phimoe model)PixtralVisionConfig
configuration class: PixtralVisionModel
(Pixtral model)PoolFormerConfig
configuration class: PoolFormerModel
(PoolFormer model)ProphetNetConfig
configuration class: ProphetNetModel
(ProphetNet model)PvtConfig
configuration class: PvtModel
(PVT model)PvtV2Config
configuration class: PvtV2Model
(PVTv2 model)QDQBertConfig
configuration class: QDQBertModel
(QDQBert model)Qwen2AudioEncoderConfig
configuration class: Qwen2AudioEncoder
(Qwen2AudioEncoder model)Qwen2Config
configuration class: Qwen2Model
(Qwen2 model)Qwen2MoeConfig
configuration class: Qwen2MoeModel
(Qwen2MoE model)Qwen2VLConfig
configuration class: Qwen2VLModel
(Qwen2VL model)RTDetrConfig
configuration class: RTDetrModel
(RT-DETR model)RecurrentGemmaConfig
configuration class: RecurrentGemmaModel
(RecurrentGemma model)ReformerConfig
configuration class: ReformerModel
(Reformer model)RegNetConfig
configuration class: RegNetModel
(RegNet model)RemBertConfig
configuration class: RemBertModel
(RemBERT model)ResNetConfig
configuration class: ResNetModel
(ResNet model)RetriBertConfig
configuration class: RetriBertModel
(RetriBERT model)RoCBertConfig
configuration class: RoCBertModel
(RoCBert model)RoFormerConfig
configuration class: RoFormerModel
(RoFormer model)RobertaConfig
configuration class: RobertaModel
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: RobertaPreLayerNormModel
(RoBERTa-PreLayerNorm model)RwkvConfig
configuration class: RwkvModel
(RWKV model)SEWConfig
configuration class: SEWModel
(SEW model)SEWDConfig
configuration class: SEWDModel
(SEW-D model)SamConfig
configuration class: SamModel
(SAM model)SeamlessM4TConfig
configuration class: SeamlessM4TModel
(SeamlessM4T model)SeamlessM4Tv2Config
configuration class: SeamlessM4Tv2Model
(SeamlessM4Tv2 model)SegGptConfig
configuration class: SegGptModel
(SegGPT model)SegformerConfig
configuration class: SegformerModel
(SegFormer model)SiglipConfig
configuration class: SiglipModel
(SigLIP model)SiglipVisionConfig
configuration class: SiglipVisionModel
(SiglipVisionModel model)Speech2TextConfig
configuration class: Speech2TextModel
(Speech2Text model)SpeechT5Config
configuration class: SpeechT5Model
(SpeechT5 model)SplinterConfig
configuration class: SplinterModel
(Splinter model)SqueezeBertConfig
configuration class: SqueezeBertModel
(SqueezeBERT model)StableLmConfig
configuration class: StableLmModel
(StableLm model)Starcoder2Config
configuration class: Starcoder2Model
(Starcoder2 model)SwiftFormerConfig
configuration class: SwiftFormerModel
(SwiftFormer model)SwitchTransformersConfig
configuration class: SwitchTransformersModel
(SwitchTransformers model)T5Config
configuration class: T5Model
(T5 model)TableTransformerConfig
configuration class: TableTransformerModel
(Table Transformer model)TapasConfig
configuration class: TapasModel
(TAPAS model)TextNetConfig
configuration class: TextNetModel
(TextNet model)TimmBackboneConfig
configuration class: TimmBackbone
(TimmBackbone model)TimmWrapperConfig
configuration class: TimmWrapperModel
(TimmWrapperModel model)TransfoXLConfig
configuration class: TransfoXLModel
(Transformer-XL model)TvltConfig
configuration class: TvltModel
(TVLT model)TvpConfig
configuration class: TvpModel
(TVP model)UMT5Config
configuration class: UMT5Model
(UMT5 model)UdopConfig
configuration class: UdopModel
(UDOP model)UniSpeechConfig
configuration class: UniSpeechModel
(UniSpeech model)UniSpeechSatConfig
configuration class: UniSpeechSatModel
(UniSpeechSat model)UnivNetConfig
configuration class: UnivNetModel
(UnivNet model)VanConfig
configuration class: VanModel
(VAN model)ViTHybridConfig
configuration class: ViTHybridModel
(ViT Hybrid model)ViTMAEConfig
configuration class: ViTMAEModel
(ViTMAE model)ViTMSNConfig
configuration class: ViTMSNModel
(ViTMSN model)VideoMAEConfig
configuration class: VideoMAEModel
(VideoMAE model)ViltConfig
configuration class: ViltModel
(ViLT model)VisionTextDualEncoderConfig
configuration class: VisionTextDualEncoderModel
(VisionTextDualEncoder model)VisualBertConfig
configuration class: VisualBertModel
(VisualBERT model)VitDetConfig
configuration class: VitDetModel
(VitDet model)VitsConfig
configuration class: VitsModel
(VITS model)Wav2Vec2BertConfig
configuration class: Wav2Vec2BertModel
(Wav2Vec2-BERT model)Wav2Vec2Config
configuration class: Wav2Vec2Model
(Wav2Vec2 model)Wav2Vec2ConformerConfig
configuration class: Wav2Vec2ConformerModel
(Wav2Vec2-Conformer model)WavLMConfig
configuration class: WavLMModel
(WavLM model)XCLIPConfig
configuration class: XCLIPModel
(X-CLIP model)XGLMConfig
configuration class: XGLMModel
(XGLM model)XLMConfig
configuration class: XLMModel
(XLM model)XLMProphetNetConfig
configuration class: XLMProphetNetModel
(XLM-ProphetNet model)XLMRobertaConfig
configuration class: XLMRobertaModel
(XLM-RoBERTa model)XLMRobertaXLConfig
configuration class: XLMRobertaXLModel
(XLM-RoBERTa-XL model)XLNetConfig
configuration class: XLNetModel
(XLNet model)XmodConfig
configuration class: XmodModel
(X-MOD model)YolosConfig
configuration class: YolosModel
(YOLOS model)YosoConfig
configuration class: YosoModel
(YOSO model)ZambaConfig
configuration class: ZambaModel
(Zamba model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the base model classes of the library from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
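The default-selection rule for attn_implementation described above amounts to a simple fallback, which can be sketched as follows. The function name `pick_attn_implementation` and the `sdpa_available` flag are illustrative stand-ins, not transformers APIs:

```python
# Sketch of the documented fallback: an explicit choice wins; otherwise
# prefer "sdpa" when available (torch>=2.1.1), else fall back to "eager".
# pick_attn_implementation / sdpa_available are illustrative names only.

def pick_attn_implementation(requested=None, sdpa_available=True):
    allowed = {"eager", "sdpa", "flash_attention_2"}
    if requested is not None:
        if requested not in allowed:
            raise ValueError(f"unknown attention implementation: {requested}")
        return requested
    return "sdpa" if sdpa_available else "eager"

print(pick_attn_implementation())                      # sdpa
print(pick_attn_implementation(sdpa_available=False))  # eager
print(pick_attn_implementation("flash_attention_2"))   # flash_attention_2
```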
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- a path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- a path to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration; the configuration can be loaded automatically when pretrained_model_name_or_path points to a directory in which a configuration JSON file named config.json is found.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
AlbertModel
(ALBERT model)AlignModel
(ALIGN model)AriaForConditionalGeneration
(Aria model)AriaTextModel
(AriaText model)ASTModel
(Audio Spectrogram Transformer model)BambaModel
(Bamba model)BarkModel
(Bark model)BeitModel
(BEiT model)BertGenerationEncoder
(Bert Generation model)BigBirdModel
(BigBird model)BigBirdPegasusModel
(BigBird-Pegasus model)BitModel
(BiT model)BlenderbotModel
(Blenderbot model)BlenderbotSmallModel
(BlenderbotSmall model)BloomModel
(BLOOM model)BridgeTowerModel
(BridgeTower model)BrosModel
(BROS model)CamembertModel
(CamemBERT model)CanineModel
(CANINE model)ChineseCLIPModel
(Chinese-CLIP model)ChineseCLIPVisionModel
(ChineseCLIPVisionModel model)ClapModel
(CLAP model)CLIPSegModel
(CLIPSeg model)ClvpModelForConditionalGeneration
(CLVP model)CodeGenModel
(CodeGen model)Cohere2Model
(Cohere2 model)ConditionalDetrModel
(Conditional DETR model)ConvNextModel
(ConvNeXT model)ConvNextV2Model
(ConvNeXTV2 model)CpmAntModel
(CPM-Ant model)CTRLModel
(CTRL model)CvtModel
(CvT model)DacModel
(DAC model)Data2VecAudioModel
(Data2VecAudio model)Data2VecTextModel
(Data2VecText model)Data2VecVisionModel
(Data2VecVision model)DecisionTransformerModel
(Decision Transformer model)DeformableDetrModel
(Deformable DETR model)DeiTModel
(DeiT model)DetaModel
(DETA model)DetrModel
(DETR model)DiffLlamaModel
(DiffLlama model)DinatModel
(DiNAT model)Dinov2Model
(DINOv2 model)Dinov2WithRegistersModel
(DINOv2 with Registers model)DistilBertModel
(DistilBERT model)DonutSwinModel
(DonutSwin model)DPRQuestionEncoder
(DPR model)DPTModel
(DPT model)EfficientFormerModel
(EfficientFormer model)EfficientNetModel
(EfficientNet model)ElectraModel
(ELECTRA model)EncodecModel
(EnCodec model)ErnieModel
(ERNIE model)ErnieMModel
(ErnieM model)FalconModel
(Falcon model)FalconMambaModel
(FalconMamba model)
- FastSpeech2ConformerModel (FastSpeech2Conformer model)
- FlaubertModel (FlauBERT model)
- FlavaModel (FLAVA model)
- FNetModel (FNet model)
- FocalNetModel (FocalNet model)
- FSMTModel (FairSeq Machine-Translation model)
- FunnelModel or FunnelBaseModel (Funnel Transformer model)
- GitModel (GIT model)
- GlmModel (GLM model)
- GLPNModel (GLPN model)
- GPT2Model (GPT-Sw3 model)
- GPT2Model (OpenAI GPT-2 model)
- GPTBigCodeModel (GPTBigCode model)
- GPTNeoModel (GPT Neo model)
- GPTNeoXModel (GPT NeoX model)
- GPTJModel (GPT-J model)
- GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
- GraniteModel (Granite model)
- GraniteMoeModel (GraniteMoe model)
- GroundingDinoModel (Grounding DINO model)
- GroupViTModel (GroupViT model)
- HieraModel (Hiera model)
- HubertModel (Hubert model)
- IBertModel (I-BERT model)
- IdeficsModel (IDEFICS model)
- Idefics2Model (Idefics2 model)
- Idefics3Model (Idefics3 model)
- Idefics3VisionTransformer (Idefics3VisionTransformer model)
- IJepaModel (I-JEPA model)
- ImageGPTModel (ImageGPT model)
- JambaModel (Jamba model)
- JetMoeModel (JetMoe model)
- JukeboxModel (Jukebox model)
- Kosmos2Model (KOSMOS-2 model)
- LayoutLMModel (LayoutLM model)
- LayoutLMv2Model (LayoutLMv2 model)
- LayoutLMv3Model (LayoutLMv3 model)
- LEDModel (LED model)
- LevitModel (LeViT model)
- LiltModel (LiLT model)
- LongformerModel (Longformer model)
- LongT5Model (LongT5 model)
- LukeModel (LUKE model)
- LxmertModel (LXMERT model)
- M2M100Model (M2M100 model)
- MarkupLMModel (MarkupLM model)
- Mask2FormerModel (Mask2Former model)
- MaskFormerModel (MaskFormer model)
- MaskFormerSwinModel (MaskFormerSwin model)
- MBartModel (mBART model)
- MCTCTModel (M-CTC-T model)
- MegaModel (MEGA model)
- MegatronBertModel (Megatron-BERT model)
- MgpstrForSceneTextRecognition (MGP-STR model)
- MimiModel (Mimi model)
- MixtralModel (Mixtral model)
- MobileBertModel (MobileBERT model)
- MobileNetV1Model (MobileNetV1 model)
- MobileNetV2Model (MobileNetV2 model)
- MobileViTModel (MobileViT model)
- MobileViTV2Model (MobileViTV2 model)
- ModernBertModel (ModernBERT model)
- MoshiModel (Moshi model)
- MPNetModel (MPNet model)
- MptModel (MPT model)
- MraModel (MRA model)
- MT5Model (MT5 model)
- MusicgenModel (MusicGen model)
- MusicgenMelodyModel (MusicGen Melody model)
- MvpModel (MVP model)
- NatModel (NAT model)
- NemotronModel (Nemotron model)
- NezhaModel (Nezha model)
- NllbMoeModel (NLLB-MOE model)
- NystromformerModel (Nyströmformer model)
- OlmoModel (OLMo model)
- Olmo2Model (OLMo2 model)
- OlmoeModel (OLMoE model)
- OmDetTurboForObjectDetection (OmDet-Turbo model)
- OneFormerModel (OneFormer model)
- OpenLlamaModel (OpenLlama model)
- OPTModel (OPT model)
- Owlv2Model (OWLv2 model)
- OwlViTModel (OWL-ViT model)
- PegasusModel (Pegasus model)
- PegasusXModel (PEGASUS-X model)
- PerceiverModel (Perceiver model)
- PersimmonModel (Persimmon model)
- PhiModel (Phi model)
- Phi3Model (Phi3 model)
- PhimoeModel (Phimoe model)
- PixtralVisionModel (Pixtral model)
- PLBartModel (PLBart model)
- PoolFormerModel (PoolFormer model)
- ProphetNetModel (ProphetNet model)
- PvtModel (PVT model)
- PvtV2Model (PVTv2 model)
- QDQBertModel (QDQBert model)
- Qwen2Model (Qwen2 model)
- Qwen2AudioEncoder (Qwen2AudioEncoder model)
- Qwen2MoeModel (Qwen2MoE model)
- Qwen2VLModel (Qwen2VL model)
- RecurrentGemmaModel (RecurrentGemma model)
- ReformerModel (Reformer model)
- RegNetModel (RegNet model)
- RemBertModel (RemBERT model)
- ResNetModel (ResNet model)
- RetriBertModel (RetriBERT model)
- RobertaModel (RoBERTa model)
- RobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
- RoCBertModel (RoCBert model)
- RoFormerModel (RoFormer model)
- RTDetrModel (RT-DETR model)
- RwkvModel (RWKV model)
- SamModel (SAM model)
- SeamlessM4TModel (SeamlessM4T model)
- SeamlessM4Tv2Model (SeamlessM4Tv2 model)
- SegformerModel (SegFormer model)
- SegGptModel (SegGPT model)
- SEWModel (SEW model)
- SEWDModel (SEW-D model)
- SiglipModel (SigLIP model)
- SiglipVisionModel (SiglipVisionModel model)
- Speech2TextModel (Speech2Text model)
- SpeechT5Model (SpeechT5 model)
- SplinterModel (Splinter model)
- SqueezeBertModel (SqueezeBERT model)
- StableLmModel (StableLm model)
- Starcoder2Model (Starcoder2 model)
- SwiftFormerModel (SwiftFormer model)
- SwitchTransformersModel (SwitchTransformers model)
- T5Model (T5 model)
- TableTransformerModel (Table Transformer model)
- TapasModel (TAPAS model)
- TextNetModel (TextNet model)
- TimmBackbone (TimmBackbone model)
- TimmWrapperModel (TimmWrapperModel model)
- TransfoXLModel (Transformer-XL model)
- TvltModel (TVLT model)
- TvpModel (TVP model)
- UdopModel (UDOP model)
- UMT5Model (UMT5 model)
- UniSpeechModel (UniSpeech model)
- UniSpeechSatModel (UniSpeechSat model)
- UnivNetModel (UnivNet model)
- VanModel (VAN model)
- VideoMAEModel (VideoMAE model)
- ViltModel (ViLT model)
- VisionTextDualEncoderModel (VisionTextDualEncoder model)
- VisualBertModel (VisualBERT model)
- ViTHybridModel (ViT Hybrid model)
- ViTMAEModel (ViTMAE model)
- ViTMSNModel (ViTMSN model)
- VitDetModel (VitDet model)
- VitsModel (VITS model)
- Wav2Vec2Model (Wav2Vec2 model)
- Wav2Vec2BertModel (Wav2Vec2-BERT model)
- Wav2Vec2ConformerModel (Wav2Vec2-Conformer model)
- WavLMModel (WavLM model)
- XCLIPModel (X-CLIP model)
- XGLMModel (XGLM model)
- XLMModel (XLM model)
- XLMProphetNetModel (XLM-ProphetNet model)
- XLMRobertaModel (XLM-RoBERTa model)
- XLMRobertaXLModel (XLM-RoBERTa-XL model)
- XLNetModel (XLNet model)
- XmodModel (X-MOD model)
- YolosModel (YOLOS model)
- YosoModel (YOSO model)
- ZambaModel (Zamba model)

The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
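To see why evaluation mode matters, here is a toy sketch (purely illustrative, using a hypothetical class rather than anything from transformers or torch) of the training flag that model.train() and model.eval() flip:

```python
import random

class ToyDropout:
    """Hypothetical toy module illustrating the training/eval flag."""

    def __init__(self, p=0.5):
        self.p = p
        self.training = True  # like PyTorch modules, starts in training mode

    def train(self):
        self.training = True

    def eval(self):
        self.training = False

    def __call__(self, xs):
        if not self.training:
            return list(xs)  # eval mode: dropout is a no-op
        # training mode: each value is zeroed with probability p
        return [0.0 if random.random() < self.p else x for x in xs]

drop = ToyDropout(p=1.0)
drop.eval()
print(drop([1.0, 2.0]))  # [1.0, 2.0] — dropout deactivated in eval mode
```

With drop.train() restored and p=1.0, the same call would zero every value, which is why inference should always happen in eval mode.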
Examples:
>>> from transformers import AutoConfig, AutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModel.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModel.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModel.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  - AlbertConfig configuration class: TFAlbertModel (ALBERT model)
  - BlenderbotConfig configuration class: TFBlenderbotModel (Blenderbot model)
  - BlenderbotSmallConfig configuration class: TFBlenderbotSmallModel (BlenderbotSmall model)
  - CTRLConfig configuration class: TFCTRLModel (CTRL model)
  - CamembertConfig configuration class: TFCamembertModel (CamemBERT model)
  - ConvNextConfig configuration class: TFConvNextModel (ConvNeXT model)
  - ConvNextV2Config configuration class: TFConvNextV2Model (ConvNeXTV2 model)
  - CvtConfig configuration class: TFCvtModel (CvT model)
  - DPRConfig configuration class: TFDPRQuestionEncoder (DPR model)
  - Data2VecVisionConfig configuration class: TFData2VecVisionModel (Data2VecVision model)
  - DeiTConfig configuration class: TFDeiTModel (DeiT model)
  - DistilBertConfig configuration class: TFDistilBertModel (DistilBERT model)
  - EfficientFormerConfig configuration class: TFEfficientFormerModel (EfficientFormer model)
  - ElectraConfig configuration class: TFElectraModel (ELECTRA model)
  - FlaubertConfig configuration class: TFFlaubertModel (FlauBERT model)
  - FunnelConfig configuration class: TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
  - GPT2Config configuration class: TFGPT2Model (OpenAI GPT-2 model)
  - GPTJConfig configuration class: TFGPTJModel (GPT-J model)
  - GroupViTConfig configuration class: TFGroupViTModel (GroupViT model)
  - HubertConfig configuration class: TFHubertModel (Hubert model)
  - IdeficsConfig configuration class: TFIdeficsModel (IDEFICS model)
  - LEDConfig configuration class: TFLEDModel (LED model)
  - LayoutLMConfig configuration class: TFLayoutLMModel (LayoutLM model)
  - LayoutLMv3Config configuration class: TFLayoutLMv3Model (LayoutLMv3 model)
  - LongformerConfig configuration class: TFLongformerModel (Longformer model)
  - LxmertConfig configuration class: TFLxmertModel (LXMERT model)
  - MBartConfig configuration class: TFMBartModel (mBART model)
  - MPNetConfig configuration class: TFMPNetModel (MPNet model)
  - MT5Config configuration class: TFMT5Model (MT5 model)
  - MobileBertConfig configuration class: TFMobileBertModel (MobileBERT model)
  - MobileViTConfig configuration class: TFMobileViTModel (MobileViT model)
  - OPTConfig configuration class: TFOPTModel (OPT model)
  - PegasusConfig configuration class: TFPegasusModel (Pegasus model)
  - RegNetConfig configuration class: TFRegNetModel (RegNet model)
  - RemBertConfig configuration class: TFRemBertModel (RemBERT model)
  - ResNetConfig configuration class: TFResNetModel (ResNet model)
  - RoFormerConfig configuration class: TFRoFormerModel (RoFormer model)
  - RobertaConfig configuration class: TFRobertaModel (RoBERTa model)
  - RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
  - SamConfig configuration class: TFSamModel (SAM model)
  - SegformerConfig configuration class: TFSegformerModel (SegFormer model)
  - Speech2TextConfig configuration class: TFSpeech2TextModel (Speech2Text model)
  - SwiftFormerConfig configuration class: TFSwiftFormerModel (SwiftFormer model)
  - T5Config configuration class: TFT5Model (T5 model)
  - TapasConfig configuration class: TFTapasModel (TAPAS model)
  - TransfoXLConfig configuration class: TFTransfoXLModel (Transformer-XL model)
  - ViTMAEConfig configuration class: TFViTMAEModel (ViTMAE model)
  - VisionTextDualEncoderConfig configuration class: TFVisionTextDualEncoderModel (VisionTextDualEncoder model)
  - Wav2Vec2Config configuration class: TFWav2Vec2Model (Wav2Vec2 model)
  - XGLMConfig configuration class: TFXGLMModel (XGLM model)
  - XLMConfig configuration class: TFXLMModel (XLM model)
  - XLMRobertaConfig configuration class: TFXLMRobertaModel (XLM-RoBERTa model)
  - XLNetConfig configuration class: TFXLNetModel (XLNet model)
- attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the base model classes of the library from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
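The dispatch that from_config performs can be pictured as a plain mapping from configuration class to model class. The sketch below uses hypothetical toy classes (ToyBertConfig, TFToyBertModel, etc. are placeholders, not real transformers classes) and, like from_config itself, never touches any weights:

```python
# Toy stand-ins for configuration and model classes (hypothetical names).
class ToyBertConfig:
    pass

class ToyGPT2Config:
    pass

class TFToyBertModel:
    def __init__(self, config):
        self.config = config

class TFToyGPT2Model:
    def __init__(self, config):
        self.config = config

# Config class -> model class: the core of the auto-class dispatch. The real
# mapping covers every pair listed above.
_MODEL_MAPPING = {
    ToyBertConfig: TFToyBertModel,
    ToyGPT2Config: TFToyGPT2Model,
}

def from_config(config):
    """Instantiate the model class registered for type(config).

    Mirrors from_config: only the architecture is chosen; no weights load.
    """
    model_cls = _MODEL_MAPPING[type(config)]
    return model_cls(config)

model = from_config(ToyGPT2Config())
print(type(model).__name__)  # TFToyGPT2Model
```

The register() hook described earlier simply adds another entry to this kind of mapping, which is why a custom config/model pair behaves exactly like a built-in one afterwards.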
( *model_args, **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - a path to a directory containing model weights, e.g., ./my_model_directory/
  - a path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- TFAlbertModel (ALBERT model)
- TFBlenderbotModel (Blenderbot model)
- TFBlenderbotSmallModel (BlenderbotSmall model)
- TFCamembertModel (CamemBERT model)
- TFConvNextModel (ConvNeXT model)
- TFConvNextV2Model (ConvNeXTV2 model)
- TFCTRLModel (CTRL model)
- TFCvtModel (CvT model)
- TFData2VecVisionModel (Data2VecVision model)
- TFDeiTModel (DeiT model)
- TFDistilBertModel (DistilBERT model)
- TFDPRQuestionEncoder (DPR model)
- TFEfficientFormerModel (EfficientFormer model)
- TFElectraModel (ELECTRA model)
- TFFlaubertModel (FlauBERT model)
- TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model)
- TFGPT2Model (GPT-Sw3 model)
- TFGPT2Model (OpenAI GPT-2 model)
- TFGPTJModel (GPT-J model)
- TFGroupViTModel (GroupViT model)
- TFHubertModel (Hubert model)
- TFIdeficsModel (IDEFICS model)
- TFLayoutLMModel (LayoutLM model)
- TFLayoutLMv3Model (LayoutLMv3 model)
- TFLEDModel (LED model)
- TFLongformerModel (Longformer model)
- TFLxmertModel (LXMERT model)
- TFMBartModel (mBART model)
- TFMobileBertModel (MobileBERT model)
- TFMobileViTModel (MobileViT model)
- TFMPNetModel (MPNet model)
- TFMT5Model (MT5 model)
- TFOPTModel (OPT model)
- TFPegasusModel (Pegasus model)
- TFRegNetModel (RegNet model)
- TFRemBertModel (RemBERT model)
- TFResNetModel (ResNet model)
- TFRobertaModel (RoBERTa model)
- TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
- TFRoFormerModel (RoFormer model)
- TFSamModel (SAM model)
- TFSegformerModel (SegFormer model)
- TFSpeech2TextModel (Speech2Text model)
- TFSwiftFormerModel (SwiftFormer model)
- TFT5Model (T5 model)
- TFTapasModel (TAPAS model)
- TFTransfoXLModel (Transformer-XL model)
- TFVisionTextDualEncoderModel (VisionTextDualEncoder model)
- TFViTMAEModel (ViTMAE model)
- TFWav2Vec2Model (Wav2Vec2 model)
- TFXGLMModel (XGLM model)
- TFXLMModel (XLM model)
- TFXLMRobertaModel (XLM-RoBERTa model)
- TFXLNetModel (XLNet model)

Examples:
>>> from transformers import AutoConfig, TFAutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModel.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModel.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  - AlbertConfig configuration class: FlaxAlbertModel (ALBERT model)
  - BeitConfig configuration class: FlaxBeitModel (BEiT model)
  - BigBirdConfig configuration class: FlaxBigBirdModel (BigBird model)
  - BlenderbotConfig configuration class: FlaxBlenderbotModel (Blenderbot model)
  - BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallModel (BlenderbotSmall model)
  - BloomConfig configuration class: FlaxBloomModel (BLOOM model)
  - Dinov2Config configuration class: FlaxDinov2Model (DINOv2 model)
  - DistilBertConfig configuration class: FlaxDistilBertModel (DistilBERT model)
  - ElectraConfig configuration class: FlaxElectraModel (ELECTRA model)
  - GPT2Config configuration class: FlaxGPT2Model (OpenAI GPT-2 model)
  - GPTJConfig configuration class: FlaxGPTJModel (GPT-J model)
  - GPTNeoConfig configuration class: FlaxGPTNeoModel (GPT Neo model)
  - LlamaConfig configuration class: FlaxLlamaModel (LLaMA model)
  - LongT5Config configuration class: FlaxLongT5Model (LongT5 model)
  - MBartConfig configuration class: FlaxMBartModel (mBART model)
  - MT5Config configuration class: FlaxMT5Model (MT5 model)
  - OPTConfig configuration class: FlaxOPTModel (OPT model)
  - PegasusConfig configuration class: FlaxPegasusModel (Pegasus model)
  - RegNetConfig configuration class: FlaxRegNetModel (RegNet model)
  - ResNetConfig configuration class: FlaxResNetModel (ResNet model)
  - RoFormerConfig configuration class: FlaxRoFormerModel (RoFormer model)
  - RobertaConfig configuration class: FlaxRobertaModel (RoBERTa model)
  - RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
  - T5Config configuration class: FlaxT5Model (T5 model)
  - VisionTextDualEncoderConfig configuration class: FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model)
  - Wav2Vec2Config configuration class: FlaxWav2Vec2Model (Wav2Vec2 model)
  - XGLMConfig configuration class: FlaxXGLMModel (XGLM model)
  - XLMRobertaConfig configuration class: FlaxXLMRobertaModel (XLM-RoBERTa model)
- attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the base model classes of the library from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
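The practical difference between from_config and from_pretrained is whether parameters come out freshly initialized or restored from a checkpoint. A minimal sketch, using a hypothetical toy class rather than a real transformers/Flax model:

```python
import random

class ToyFlaxModel:
    """Hypothetical stand-in for a model class; not a real transformers class."""

    def __init__(self, config):
        self.config = config
        # Fresh construction: parameters are randomly initialized.
        self.params = [random.random() for _ in range(4)]

    @classmethod
    def from_config(cls, config):
        return cls(config)  # configuration only; no weights are loaded

    @classmethod
    def from_pretrained(cls, checkpoint):
        model = cls(checkpoint["config"])
        model.params = list(checkpoint["params"])  # restore trained weights
        return model

ckpt = {"config": {"hidden_size": 4}, "params": [0.1, 0.2, 0.3, 0.4]}
restored = ToyFlaxModel.from_pretrained(ckpt)
fresh = ToyFlaxModel.from_config({"hidden_size": 4})
print(restored.params)  # [0.1, 0.2, 0.3, 0.4]
```

Here restored carries the saved weights, while fresh has random ones, mirroring the note above that from_config affects only the configuration.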
( *model_args, **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - a path to a directory containing model weights, e.g., ./my_model_directory/
  - a path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- FlaxAlbertModel (ALBERT model)
- FlaxBeitModel (BEiT model)
- FlaxBigBirdModel (BigBird model)
- FlaxBlenderbotModel (Blenderbot model)
- FlaxBlenderbotSmallModel (BlenderbotSmall model)
- FlaxBloomModel (BLOOM model)
- FlaxDinov2Model (DINOv2 model)
- FlaxDistilBertModel (DistilBERT model)
- FlaxElectraModel (ELECTRA model)
- FlaxGPT2Model (GPT-Sw3 model)
- FlaxGPT2Model (OpenAI GPT-2 model)
- FlaxGPTNeoModel (GPT Neo model)
- FlaxGPTJModel (GPT-J model)
- FlaxLlamaModel (LLaMA model)
- FlaxLongT5Model (LongT5 model)
- FlaxMBartModel (mBART model)
- FlaxMT5Model (MT5 model)
- FlaxOPTModel (OPT model)
- FlaxPegasusModel (Pegasus model)
- FlaxRegNetModel (RegNet model)
- FlaxResNetModel (ResNet model)
- FlaxRobertaModel (RoBERTa model)
- FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model)
- FlaxRoFormerModel (RoFormer model)
- FlaxT5Model (T5 model)
- FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model)
- FlaxWav2Vec2Model (Wav2Vec2 model)
- FlaxXGLMModel (XGLM model)
- FlaxXLMRobertaModel (XLM-RoBERTa model)

Examples:
>>> from transformers import AutoConfig, FlaxAutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModel.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModel.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModel.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
The following auto classes can be used to instantiate a model with a pretraining head.
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  - AlbertConfig configuration class: AlbertForPreTraining (ALBERT model)
  - BigBirdConfig configuration class: BigBirdForPreTraining (BigBird model)
  - BloomConfig configuration class: BloomForCausalLM (BLOOM model)
  - CTRLConfig configuration class: CTRLLMHeadModel (CTRL model)
  - CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
  - ColPaliConfig configuration class: ColPaliForRetrieval (ColPali model)
  - Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model)
  - DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
  - ElectraConfig configuration class: ElectraForPreTraining (ELECTRA model)
  - ErnieConfig configuration class: ErnieForPreTraining (ERNIE model)
  - FNetConfig configuration class: FNetForPreTraining (FNet model)
  - FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model)
  - FalconMambaConfig configuration class: FalconMambaForCausalLM (FalconMamba model)
  - FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
  - FlavaConfig configuration class: FlavaForPreTraining (FLAVA model)
  - FunnelConfig configuration class: FunnelForPreTraining (Funnel Transformer model)
  - GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model)
  - GPTBigCodeConfig configuration class: GPTBigCodeForCausalLM (GPTBigCode model)
  - GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
  - HieraConfig configuration class: HieraForPreTraining (Hiera model)
  - IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
  - Idefics2Config configuration class: Idefics2ForConditionalGeneration (Idefics2 model)
  - Idefics3Config configuration class: Idefics3ForConditionalGeneration (Idefics3 model)
  - IdeficsConfig configuration class: IdeficsForVisionText2Text (IDEFICS model)
  - LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
  - LlavaConfig configuration class: LlavaForConditionalGeneration (LLaVa model)
  - LlavaNextConfig configuration class: LlavaNextForConditionalGeneration (LLaVA-NeXT model)
  - LlavaNextVideoConfig configuration class: LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model)
  - LlavaOnevisionConfig configuration class: LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model)
  - LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
  - LukeConfig configuration class: LukeForMaskedLM (LUKE model)
  - LxmertConfig configuration class: LxmertForPreTraining (LXMERT model)
  - MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
  - MegaConfig configuration class: MegaForMaskedLM (MEGA model)
  - MegatronBertConfig configuration class: MegatronBertForPreTraining (Megatron-BERT model)
  - MllamaConfig configuration class: MllamaForConditionalGeneration (Mllama model)
  - MobileBertConfig configuration class: MobileBertForPreTraining (MobileBERT model)
  - MptConfig configuration class: MptForCausalLM (MPT model)
  - MraConfig configuration class: MraForMaskedLM (MRA model)
  - MvpConfig configuration class: MvpForConditionalGeneration (MVP model)
  - NezhaConfig configuration class: NezhaForPreTraining (Nezha model)
  - NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model)
  - Qwen2AudioConfig configuration class: Qwen2AudioForConditionalGeneration (Qwen2Audio model)
  - RetriBertConfig configuration class: RetriBertModel (RetriBERT model)
  - RoCBertConfig configuration class: RoCBertForPreTraining (RoCBert model)
  - RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
  - RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
  - RwkvConfig configuration class: RwkvForCausalLM (RWKV model)
  - SplinterConfig configuration class: SplinterForPreTraining (Splinter model)
  - SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
  - SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model)
  - T5Config configuration class: T5ForConditionalGeneration (T5 model)
  - TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
  - TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model)
  - TvltConfig configuration class: TvltForPreTraining (TVLT model)
  - UniSpeechConfig configuration class: UniSpeechForPreTraining (UniSpeech model)
  - UniSpeechSatConfig configuration class: UniSpeechSatForPreTraining (UniSpeechSat model)
  - ViTMAEConfig configuration class: ViTMAEForPreTraining (ViTMAE model)
  - VideoLlavaConfig configuration class: VideoLlavaForConditionalGeneration (VideoLlava model)
  - VideoMAEConfig configuration class: VideoMAEForPreTraining (VideoMAE model)
  - VipLlavaConfig configuration class: VipLlavaForConditionalGeneration (VipLlava model)
  - VisualBertConfig configuration class: VisualBertForPreTraining (VisualBERT model)
  - Wav2Vec2Config configuration class: Wav2Vec2ForPreTraining (Wav2Vec2 model)
  - Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForPreTraining (Wav2Vec2-Conformer model)
  - XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
  - XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
  - XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
  - XLNetConfig configuration class: XLNetLMHeadModel (XLNet model)
  - XmodConfig configuration class: XmodForMaskedLM (X-MOD model)
- attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
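The keyword-argument handling that from_pretrained applies when no config is passed (keys matching configuration attributes override the config; the rest go to the model’s __init__) can be sketched as follows. ToyConfig and split_kwargs are hypothetical illustrations, not transformers APIs:

```python
class ToyConfig:
    """Hypothetical stand-in for a PretrainedConfig subclass."""

    def __init__(self):
        self.output_attentions = False
        self.hidden_size = 16

def split_kwargs(config, **kwargs):
    """Apply config-attribute overrides; return leftovers for the model."""
    model_kwargs = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)  # key matches a config attribute
        else:
            model_kwargs[key] = value  # passed on to the model's __init__
    return config, model_kwargs

cfg, rest = split_kwargs(ToyConfig(), output_attentions=True, foo="bar")
print(cfg.output_attentions, rest)  # True {'foo': 'bar'}
```

This is why model = AutoModelForPreTraining.from_pretrained(name, output_attentions=True) ends up with model.config.output_attentions set to True.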
( *model_args, **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - a path to a directory containing model weights, e.g., ./my_model_directory/
  - a path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from a saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:
AlbertForPreTraining
(ALBERT model)BigBirdForPreTraining
(BigBird model)BloomForCausalLM
(BLOOM model)CamembertForMaskedLM
(CamemBERT model)ColPaliForRetrieval
(ColPali model)CTRLLMHeadModel
(CTRL model)Data2VecTextForMaskedLM
(Data2VecText model)DistilBertForMaskedLM
(DistilBERT model)ElectraForPreTraining
(ELECTRA model)ErnieForPreTraining
(ERNIE model)FalconMambaForCausalLM
(FalconMamba model)FlaubertWithLMHeadModel
(FlauBERT model)FlavaForPreTraining
(FLAVA model)FNetForPreTraining
(FNet model)FSMTForConditionalGeneration
(FairSeq Machine-Translation model)FunnelForPreTraining
(Funnel Transformer model)GPT2LMHeadModel
(GPT-Sw3 model)GPT2LMHeadModel
(OpenAI GPT-2 model)GPTBigCodeForCausalLM
(GPTBigCode model)GPTSanJapaneseForConditionalGeneration
(GPTSAN-japanese model)HieraForPreTraining
(Hiera model)IBertForMaskedLM
(I-BERT model)IdeficsForVisionText2Text
(IDEFICS model)Idefics2ForConditionalGeneration
(Idefics2 model)Idefics3ForConditionalGeneration
(Idefics3 model)LayoutLMForMaskedLM
(LayoutLM model)LlavaForConditionalGeneration
(LLaVa model)LlavaNextForConditionalGeneration
(LLaVA-NeXT model)LlavaNextVideoForConditionalGeneration
(LLaVa-NeXT-Video model)LlavaOnevisionForConditionalGeneration
(LLaVA-Onevision model)LongformerForMaskedLM
(Longformer model)LukeForMaskedLM
(LUKE model)LxmertForPreTraining
(LXMERT model)MegaForMaskedLM
(MEGA model)MegatronBertForPreTraining
(Megatron-BERT model)MllamaForConditionalGeneration
(Mllama model)MobileBertForPreTraining
(MobileBERT model)MPNetForMaskedLM
(MPNet model)MptForCausalLM
(MPT model)MraForMaskedLM
(MRA model)MvpForConditionalGeneration
(MVP model)NezhaForPreTraining
(Nezha model)NllbMoeForConditionalGeneration
(NLLB-MOE model)Qwen2AudioForConditionalGeneration
(Qwen2Audio model)RetriBertModel
(RetriBERT model)RobertaForMaskedLM
(RoBERTa model)RobertaPreLayerNormForMaskedLM
(RoBERTa-PreLayerNorm model)RoCBertForPreTraining
(RoCBert model)RwkvForCausalLM
(RWKV model)SplinterForPreTraining
(Splinter model)SqueezeBertForMaskedLM
(SqueezeBERT model)SwitchTransformersForConditionalGeneration
(SwitchTransformers model)T5ForConditionalGeneration
(T5 model)TapasForMaskedLM
(TAPAS model)TransfoXLLMHeadModel
(Transformer-XL model)TvltForPreTraining
(TVLT model)UniSpeechForPreTraining
(UniSpeech model)UniSpeechSatForPreTraining
(UniSpeechSat model)VideoLlavaForConditionalGeneration
(VideoLlava model)VideoMAEForPreTraining
(VideoMAE model)VipLlavaForConditionalGeneration
(VipLlava model)VisualBertForPreTraining
(VisualBERT model)ViTMAEForPreTraining
(ViTMAE model)Wav2Vec2ForPreTraining
(Wav2Vec2 model)Wav2Vec2ConformerForPreTraining
(Wav2Vec2-Conformer model)XLMWithLMHeadModel
(XLM model)XLMRobertaForMaskedLM
(XLM-RoBERTa model)XLMRobertaXLForMaskedLM
(XLM-RoBERTa-XL model)XLNetLMHeadModel
(XLNet model)XmodForMaskedLM
(X-MOD model)The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train()
Examples:
>>> from transformers import AutoConfig, AutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForPreTraining.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
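The kwargs handling described above (keys matching configuration attributes override the config first; leftover keys are forwarded to the model's `__init__`) can be sketched in plain Python. This is a simplified illustration of the documented behavior, not the library's actual code; `DummyConfig` is a hypothetical stand-in for a `PretrainedConfig`:

```python
def split_kwargs(config, **kwargs):
    """Apply kwargs that match config attributes; return the rest for the model."""
    model_kwargs = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            # A matching key overrides the configuration attribute.
            setattr(config, key, value)
        else:
            # Remaining keys would be passed to the model's __init__.
            model_kwargs[key] = value
    return config, model_kwargs


class DummyConfig:
    # Hypothetical stand-in for a PretrainedConfig with one known attribute.
    output_attentions = False


config, extra = split_kwargs(DummyConfig(), output_attentions=True, my_custom_arg=3)
print(config.output_attentions)  # True
print(extra)  # {'my_custom_arg': 3}
```

This corresponds to why, in the example above, passing `output_attentions=True` to `from_pretrained()` ends up on `model.config` rather than on the model's `__init__`.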
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: TFAlbertForPreTraining
(ALBERT model)CTRLConfig
configuration class: TFCTRLLMHeadModel
(CTRL model)CamembertConfig
configuration class: TFCamembertForMaskedLM
(CamemBERT model)DistilBertConfig
configuration class: TFDistilBertForMaskedLM
(DistilBERT model)ElectraConfig
configuration class: TFElectraForPreTraining
(ELECTRA model)FlaubertConfig
configuration class: TFFlaubertWithLMHeadModel
(FlauBERT model)FunnelConfig
configuration class: TFFunnelForPreTraining
(Funnel Transformer model)GPT2Config
configuration class: TFGPT2LMHeadModel
(OpenAI GPT-2 model)IdeficsConfig
configuration class: TFIdeficsForVisionText2Text
(IDEFICS model)LayoutLMConfig
configuration class: TFLayoutLMForMaskedLM
(LayoutLM model)LxmertConfig
configuration class: TFLxmertForPreTraining
(LXMERT model)MPNetConfig
configuration class: TFMPNetForMaskedLM
(MPNet model)MobileBertConfig
configuration class: TFMobileBertForPreTraining
(MobileBERT model)RobertaConfig
configuration class: TFRobertaForMaskedLM
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: TFRobertaPreLayerNormForMaskedLM
(RoBERTa-PreLayerNorm model)T5Config
configuration class: TFT5ForConditionalGeneration
(T5 model)TapasConfig
configuration class: TFTapasForMaskedLM
(TAPAS model)TransfoXLConfig
configuration class: TFTransfoXLLMHeadModel
(Transformer-XL model)ViTMAEConfig
configuration class: TFViTMAEForPreTraining
(ViTMAE model)XLMConfig
configuration class: TFXLMWithLMHeadModel
(XLM model)XLMRobertaConfig
configuration class: TFXLMRobertaForMaskedLM
(XLM-RoBERTa model)XLNetConfig
configuration class: TFXLNetLMHeadModel
(XLNet model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFAlbertForPreTraining
(ALBERT model)TFCamembertForMaskedLM
(CamemBERT model)TFCTRLLMHeadModel
(CTRL model)TFDistilBertForMaskedLM
(DistilBERT model)TFElectraForPreTraining
(ELECTRA model)TFFlaubertWithLMHeadModel
(FlauBERT model)TFFunnelForPreTraining
(Funnel Transformer model)TFGPT2LMHeadModel
(GPT-Sw3 model)TFGPT2LMHeadModel
(OpenAI GPT-2 model)TFIdeficsForVisionText2Text
(IDEFICS model)TFLayoutLMForMaskedLM
(LayoutLM model)TFLxmertForPreTraining
(LXMERT model)TFMobileBertForPreTraining
(MobileBERT model)TFMPNetForMaskedLM
(MPNet model)TFRobertaForMaskedLM
(RoBERTa model)TFRobertaPreLayerNormForMaskedLM
(RoBERTa-PreLayerNorm model)TFT5ForConditionalGeneration
(T5 model)TFTapasForMaskedLM
(TAPAS model)TFTransfoXLLMHeadModel
(Transformer-XL model)TFViTMAEForPreTraining
(ViTMAE model)TFXLMWithLMHeadModel
(XLM model)TFXLMRobertaForMaskedLM
(XLM-RoBERTa model)TFXLNetLMHeadModel
(XLNet model)Examples:
>>> from transformers import AutoConfig, TFAutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForPreTraining.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: FlaxAlbertForPreTraining
(ALBERT model)BigBirdConfig
configuration class: FlaxBigBirdForPreTraining
(BigBird model)ElectraConfig
configuration class: FlaxElectraForPreTraining
(ELECTRA model)LongT5Config
configuration class: FlaxLongT5ForConditionalGeneration
(LongT5 model)MBartConfig
configuration class: FlaxMBartForConditionalGeneration
(mBART model)MT5Config
configuration class: FlaxMT5ForConditionalGeneration
(MT5 model)RoFormerConfig
configuration class: FlaxRoFormerForMaskedLM
(RoFormer model)RobertaConfig
configuration class: FlaxRobertaForMaskedLM
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: FlaxRobertaPreLayerNormForMaskedLM
(RoBERTa-PreLayerNorm model)T5Config
configuration class: FlaxT5ForConditionalGeneration
(T5 model)Wav2Vec2Config
configuration class: FlaxWav2Vec2ForPreTraining
(Wav2Vec2 model)XLMRobertaConfig
configuration class: FlaxXLMRobertaForMaskedLM
(XLM-RoBERTa model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
FlaxAlbertForPreTraining
(ALBERT model)FlaxBigBirdForPreTraining
(BigBird model)FlaxElectraForPreTraining
(ELECTRA model)FlaxLongT5ForConditionalGeneration
(LongT5 model)FlaxMBartForConditionalGeneration
(mBART model)FlaxMT5ForConditionalGeneration
(MT5 model)FlaxRobertaForMaskedLM
(RoBERTa model)FlaxRobertaPreLayerNormForMaskedLM
(RoBERTa-PreLayerNorm model)FlaxRoFormerForMaskedLM
(RoFormer model)FlaxT5ForConditionalGeneration
(T5 model)FlaxWav2Vec2ForPreTraining
(Wav2Vec2 model)FlaxXLMRobertaForMaskedLM
(XLM-RoBERTa model)Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForPreTraining.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForPreTraining.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
The following auto classes are available for the natural language processing tasks below.
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AriaTextConfig
configuration class: AriaTextForCausalLM
(AriaText model)BambaConfig
configuration class: BambaForCausalLM
(Bamba model)BertGenerationConfig
configuration class: BertGenerationDecoder
(Bert Generation model)BigBirdConfig
configuration class: BigBirdForCausalLM
(BigBird model)BigBirdPegasusConfig
configuration class: BigBirdPegasusForCausalLM
(BigBird-Pegasus model)BlenderbotConfig
configuration class: BlenderbotForCausalLM
(Blenderbot model)BlenderbotSmallConfig
configuration class: BlenderbotSmallForCausalLM
(BlenderbotSmall model)BloomConfig
configuration class: BloomForCausalLM
(BLOOM model)CTRLConfig
configuration class: CTRLLMHeadModel
(CTRL model)CamembertConfig
configuration class: CamembertForCausalLM
(CamemBERT model)CodeGenConfig
configuration class: CodeGenForCausalLM
(CodeGen model)Cohere2Config
configuration class: Cohere2ForCausalLM
(Cohere2 model)CpmAntConfig
configuration class: CpmAntForCausalLM
(CPM-Ant model)Data2VecTextConfig
configuration class: Data2VecTextForCausalLM
(Data2VecText model)DiffLlamaConfig
configuration class: DiffLlamaForCausalLM
(DiffLlama model)ElectraConfig
configuration class: ElectraForCausalLM
(ELECTRA model)ErnieConfig
configuration class: ErnieForCausalLM
(ERNIE model)FalconConfig
configuration class: FalconForCausalLM
(Falcon model)FalconMambaConfig
configuration class: FalconMambaForCausalLM
(FalconMamba model)FuyuConfig
configuration class: FuyuForCausalLM
(Fuyu model)GPT2Config
configuration class: GPT2LMHeadModel
(OpenAI GPT-2 model)GPTBigCodeConfig
configuration class: GPTBigCodeForCausalLM
(GPTBigCode model)GPTJConfig
configuration class: GPTJForCausalLM
(GPT-J model)GPTNeoConfig
configuration class: GPTNeoForCausalLM
(GPT Neo model)GPTNeoXConfig
configuration class: GPTNeoXForCausalLM
(GPT NeoX model)GitConfig
configuration class: GitForCausalLM
(GIT model)GlmConfig
configuration class: GlmForCausalLM
(GLM model)GraniteConfig
configuration class: GraniteForCausalLM
(Granite model)GraniteMoeConfig
configuration class: GraniteMoeForCausalLM
(GraniteMoe model)JambaConfig
configuration class: JambaForCausalLM
(Jamba model)JetMoeConfig
configuration class: JetMoeForCausalLM
(JetMoe model)MBartConfig
configuration class: MBartForCausalLM
(mBART model)MegaConfig
configuration class: MegaForCausalLM
(MEGA model)MegatronBertConfig
configuration class: MegatronBertForCausalLM
(Megatron-BERT model)MixtralConfig
configuration class: MixtralForCausalLM
(Mixtral model)MllamaConfig
configuration class: MllamaForCausalLM
(Mllama model)MoshiConfig
configuration class: MoshiForCausalLM
(Moshi model)MptConfig
configuration class: MptForCausalLM
(MPT model)MusicgenConfig
configuration class: MusicgenForCausalLM
(MusicGen model)MusicgenMelodyConfig
configuration class: MusicgenMelodyForCausalLM
(MusicGen Melody model)MvpConfig
configuration class: MvpForCausalLM
(MVP model)NemotronConfig
configuration class: NemotronForCausalLM
(Nemotron model)OPTConfig
configuration class: OPTForCausalLM
(OPT model)Olmo2Config
configuration class: Olmo2ForCausalLM
(OLMo2 model)OlmoConfig
configuration class: OlmoForCausalLM
(OLMo model)OlmoeConfig
configuration class: OlmoeForCausalLM
(OLMoE model)OpenLlamaConfig
configuration class: OpenLlamaForCausalLM
(OpenLlama model)PLBartConfig
configuration class: PLBartForCausalLM
(PLBart model)PegasusConfig
configuration class: PegasusForCausalLM
(Pegasus model)PersimmonConfig
configuration class: PersimmonForCausalLM
(Persimmon model)Phi3Config
configuration class: Phi3ForCausalLM
(Phi3 model)PhiConfig
configuration class: PhiForCausalLM
(Phi model)PhimoeConfig
configuration class: PhimoeForCausalLM
(Phimoe model)ProphetNetConfig
configuration class: ProphetNetForCausalLM
(ProphetNet model)QDQBertConfig
configuration class: QDQBertLMHeadModel
(QDQBert model)Qwen2Config
configuration class: Qwen2ForCausalLM
(Qwen2 model)Qwen2MoeConfig
configuration class: Qwen2MoeForCausalLM
(Qwen2MoE model)RecurrentGemmaConfig
configuration class: RecurrentGemmaForCausalLM
(RecurrentGemma model)ReformerConfig
configuration class: ReformerModelWithLMHead
(Reformer model)RemBertConfig
configuration class: RemBertForCausalLM
(RemBERT model)RoCBertConfig
configuration class: RoCBertForCausalLM
(RoCBert model)RoFormerConfig
configuration class: RoFormerForCausalLM
(RoFormer model)RobertaConfig
configuration class: RobertaForCausalLM
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: RobertaPreLayerNormForCausalLM
(RoBERTa-PreLayerNorm model)RwkvConfig
configuration class: RwkvForCausalLM
(RWKV model)Speech2Text2Config
configuration class: Speech2Text2ForCausalLM
(Speech2Text2 model)StableLmConfig
configuration class: StableLmForCausalLM
(StableLm model)Starcoder2Config
configuration class: Starcoder2ForCausalLM
(Starcoder2 model)TrOCRConfig
configuration class: TrOCRForCausalLM
(TrOCR model)TransfoXLConfig
configuration class: TransfoXLLMHeadModel
(Transformer-XL model)WhisperForCausalLM
(Whisper model)XGLMConfig
configuration class: XGLMForCausalLM
(XGLM model)XLMConfig
configuration class: XLMWithLMHeadModel
(XLM model)XLMProphetNetConfig
configuration class: XLMProphetNetForCausalLM
(XLM-ProphetNet model)XLMRobertaConfig
configuration class: XLMRobertaForCausalLM
(XLM-RoBERTa model)XLMRobertaXLConfig
configuration class: XLMRobertaXLForCausalLM
(XLM-RoBERTa-XL model)XLNetConfig
configuration class: XLNetLMHeadModel
(XLNet model)XmodConfig
configuration class: XmodForCausalLM
(X-MOD model)ZambaConfig
configuration class: ZambaForCausalLM
(Zamba model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint into a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
AriaTextForCausalLM
(AriaText model)BambaForCausalLM
(Bamba model)BertGenerationDecoder
(Bert Generation model)BigBirdForCausalLM
(BigBird model)BigBirdPegasusForCausalLM
(BigBird-Pegasus model)BlenderbotForCausalLM
(Blenderbot model)BlenderbotSmallForCausalLM
(BlenderbotSmall model)BloomForCausalLM
(BLOOM model)CamembertForCausalLM
(CamemBERT model)CodeGenForCausalLM
(CodeGen model)Cohere2ForCausalLM
(Cohere2 model)CpmAntForCausalLM
(CPM-Ant model)CTRLLMHeadModel
(CTRL model)Data2VecTextForCausalLM
(Data2VecText model)DiffLlamaForCausalLM
(DiffLlama model)ElectraForCausalLM
(ELECTRA model)ErnieForCausalLM
(ERNIE model)FalconForCausalLM
(Falcon model)FalconMambaForCausalLM
(FalconMamba model)FuyuForCausalLM
(Fuyu model)GitForCausalLM
(GIT model)GlmForCausalLM
(GLM model)GPT2LMHeadModel
(GPT-Sw3 model)GPT2LMHeadModel
(OpenAI GPT-2 model)GPTBigCodeForCausalLM
(GPTBigCode model)GPTNeoForCausalLM
(GPT Neo model)GPTNeoXForCausalLM
(GPT NeoX model)GPTJForCausalLM
(GPT-J model)GraniteForCausalLM
(Granite model)GraniteMoeForCausalLM
(GraniteMoe model)JambaForCausalLM
(Jamba model)JetMoeForCausalLM
(JetMoe model)MBartForCausalLM
(mBART model)MegaForCausalLM
(MEGA model)MegatronBertForCausalLM
(Megatron-BERT model)MixtralForCausalLM
(Mixtral model)MllamaForCausalLM
(Mllama model)MoshiForCausalLM
(Moshi model)MptForCausalLM
(MPT model)MusicgenForCausalLM
(MusicGen model)MusicgenMelodyForCausalLM
(MusicGen Melody model)MvpForCausalLM
(MVP model)NemotronForCausalLM
(Nemotron model)OlmoForCausalLM
(OLMo model)Olmo2ForCausalLM
(OLMo2 model)OlmoeForCausalLM
(OLMoE model)OpenLlamaForCausalLM
(OpenLlama model)OPTForCausalLM
(OPT model)PegasusForCausalLM
(Pegasus model)PersimmonForCausalLM
(Persimmon model)PhiForCausalLM
(Phi model)Phi3ForCausalLM
(Phi3 model)PhimoeForCausalLM
(Phimoe model)PLBartForCausalLM
(PLBart model)ProphetNetForCausalLM
(ProphetNet model)QDQBertLMHeadModel
(QDQBert model)Qwen2ForCausalLM
(Qwen2 model)Qwen2MoeForCausalLM
(Qwen2MoE model)RecurrentGemmaForCausalLM
(RecurrentGemma model)ReformerModelWithLMHead
(Reformer model)RemBertForCausalLM
(RemBERT model)RobertaForCausalLM
(RoBERTa model)RobertaPreLayerNormForCausalLM
(RoBERTa-PreLayerNorm model)RoCBertForCausalLM
(RoCBert model)RoFormerForCausalLM
(RoFormer model)RwkvForCausalLM
(RWKV model)Speech2Text2ForCausalLM
(Speech2Text2 model)StableLmForCausalLM
(StableLm model)Starcoder2ForCausalLM
(Starcoder2 model)TransfoXLLMHeadModel
(Transformer-XL model)TrOCRForCausalLM
(TrOCR model)WhisperForCausalLM
(Whisper model)XGLMForCausalLM
(XGLM model)XLMWithLMHeadModel
(XLM model)XLMProphetNetForCausalLM
(XLM-ProphetNet model)XLMRobertaForCausalLM
(XLM-RoBERTa model)XLMRobertaXLForCausalLM
(XLM-RoBERTa-XL model)XLNetLMHeadModel
(XLNet model)XmodForCausalLM
(X-MOD model)ZambaForCausalLM
(Zamba model)The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train()
Examples:
>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForCausalLM.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model)
- CamembertConfig configuration class: TFCamembertForCausalLM (CamemBERT model)
- GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model)
- GPTJConfig configuration class: TFGPTJForCausalLM (GPT-J model)
- OPTConfig configuration class: TFOPTForCausalLM (OPT model)
- RemBertConfig configuration class: TFRemBertForCausalLM (RemBERT model)
- RoFormerConfig configuration class: TFRoFormerForCausalLM (RoFormer model)
- RobertaConfig configuration class: TFRobertaForCausalLM (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
- TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model)
- XGLMConfig configuration class: TFXGLMForCausalLM (XGLM model)
- XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
- XLMRobertaConfig configuration class: TFXLMRobertaForCausalLM (XLM-RoBERTa model)
- XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
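For instance, a configuration alone is enough to build a randomly initialized model. A minimal sketch, assuming transformers with TensorFlow support is installed (the tiny GPT2Config values are arbitrary, chosen only to keep the model small):

```python
from transformers import GPT2Config, TFAutoModelForCausalLM

# Build a configuration by hand; no weights are downloaded or loaded.
config = GPT2Config(n_layer=2, n_head=2, n_embd=64, vocab_size=128)

# from_config selects TFGPT2LMHeadModel from the configuration class and
# returns a randomly initialized model.
model = TFAutoModelForCausalLM.from_config(config)
print(type(model).__name__)  # TFGPT2LMHeadModel
```

Because only the configuration is used, the resulting weights are random, which makes this pattern useful for tests or for training from scratch.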
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded, for example when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (for example, output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- TFCamembertForCausalLM (CamemBERT model)
- TFCTRLLMHeadModel (CTRL model)
- TFGPT2LMHeadModel (GPT-Sw3 model)
- TFGPT2LMHeadModel (OpenAI GPT-2 model)
- TFGPTJForCausalLM (GPT-J model)
- TFOPTForCausalLM (OPT model)
- TFRemBertForCausalLM (RemBERT model)
- TFRobertaForCausalLM (RoBERTa model)
- TFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
- TFRoFormerForCausalLM (RoFormer model)
- TFTransfoXLLMHeadModel (Transformer-XL model)
- TFXGLMForCausalLM (XGLM model)
- TFXLMWithLMHeadModel (XLM model)
- TFXLMRobertaForCausalLM (XLM-RoBERTa model)
- TFXLNetLMHeadModel (XLNet model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForCausalLM.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- BigBirdConfig configuration class: FlaxBigBirdForCausalLM (BigBird model)
- BloomConfig configuration class: FlaxBloomForCausalLM (BLOOM model)
- ElectraConfig configuration class: FlaxElectraForCausalLM (ELECTRA model)
- GPT2Config configuration class: FlaxGPT2LMHeadModel (OpenAI GPT-2 model)
- GPTJConfig configuration class: FlaxGPTJForCausalLM (GPT-J model)
- GPTNeoConfig configuration class: FlaxGPTNeoForCausalLM (GPT Neo model)
- LlamaConfig configuration class: FlaxLlamaForCausalLM (LLaMA model)
- OPTConfig configuration class: FlaxOPTForCausalLM (OPT model)
- RobertaConfig configuration class: FlaxRobertaForCausalLM (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
- XGLMConfig configuration class: FlaxXGLMForCausalLM (XGLM model)
- XLMRobertaConfig configuration class: FlaxXLMRobertaForCausalLM (XLM-RoBERTa model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model using the provided conversion scripts and loading the converted model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded, for example when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (for example, output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- FlaxBigBirdForCausalLM (BigBird model)
- FlaxBloomForCausalLM (BLOOM model)
- FlaxElectraForCausalLM (ELECTRA model)
- FlaxGPT2LMHeadModel (GPT-Sw3 model)
- FlaxGPT2LMHeadModel (OpenAI GPT-2 model)
- FlaxGPTNeoForCausalLM (GPT Neo model)
- FlaxGPTJForCausalLM (GPT-J model)
- FlaxLlamaForCausalLM (LLaMA model)
- FlaxOPTForCausalLM (OPT model)
- FlaxRobertaForCausalLM (RoBERTa model)
- FlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model)
- FlaxXGLMForCausalLM (XGLM model)
- FlaxXLMRobertaForCausalLM (XLM-RoBERTa model)

Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForCausalLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForCausalLM.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertForMaskedLM (ALBERT model)
- BigBirdConfig configuration class: BigBirdForMaskedLM (BigBird model)
- CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
- Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model)
- DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
- ElectraConfig configuration class: ElectraForMaskedLM (ELECTRA model)
- ErnieConfig configuration class: ErnieForMaskedLM (ERNIE model)
- FNetConfig configuration class: FNetForMaskedLM (FNet model)
- FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
- FunnelConfig configuration class: FunnelForMaskedLM (Funnel Transformer model)
- IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
- LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
- LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
- LukeConfig configuration class: LukeForMaskedLM (LUKE model)
- MBartConfig configuration class: MBartForConditionalGeneration (mBART model)
- MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
- MegaConfig configuration class: MegaForMaskedLM (MEGA model)
- MegatronBertConfig configuration class: MegatronBertForMaskedLM (Megatron-BERT model)
- MobileBertConfig configuration class: MobileBertForMaskedLM (MobileBERT model)
- ModernBertConfig configuration class: ModernBertForMaskedLM (ModernBERT model)
- MraConfig configuration class: MraForMaskedLM (MRA model)
- MvpConfig configuration class: MvpForConditionalGeneration (MVP model)
- NezhaConfig configuration class: NezhaForMaskedLM (Nezha model)
- NystromformerConfig configuration class: NystromformerForMaskedLM (Nyströmformer model)
- PerceiverConfig configuration class: PerceiverForMaskedLM (Perceiver model)
- QDQBertConfig configuration class: QDQBertForMaskedLM (QDQBert model)
- ReformerConfig configuration class: ReformerForMaskedLM (Reformer model)
- RemBertConfig configuration class: RemBertForMaskedLM (RemBERT model)
- RoCBertConfig configuration class: RoCBertForMaskedLM (RoCBert model)
- RoFormerConfig configuration class: RoFormerForMaskedLM (RoFormer model)
- RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
- SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
- TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
- Wav2Vec2Config configuration class: Wav2Vec2ForMaskedLM (Wav2Vec2 model)
- XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
- XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
- XmodConfig configuration class: XmodForMaskedLM (X-MOD model)
- YosoConfig configuration class: YosoForMaskedLM (YOSO model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
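As a minimal sketch, a configuration built by hand is enough to get a randomly initialized masked-LM model (the tiny AlbertConfig values are arbitrary, chosen only to keep the model small):

```python
from transformers import AlbertConfig, AutoModelForMaskedLM

# Hand-built configuration; weights are randomly initialized, not downloaded.
config = AlbertConfig(num_hidden_layers=2, hidden_size=64, num_attention_heads=2,
                      intermediate_size=128, embedding_size=32, vocab_size=1000)

# from_config selects AlbertForMaskedLM from the configuration class.
model = AutoModelForMaskedLM.from_config(config)
print(type(model).__name__)  # AlbertForMaskedLM
```

To load pretrained weights instead, use from_pretrained() as shown in the examples below.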
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded, for example when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from a saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (for example, output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- AlbertForMaskedLM (ALBERT model)
- BigBirdForMaskedLM (BigBird model)
- CamembertForMaskedLM (CamemBERT model)
- Data2VecTextForMaskedLM (Data2VecText model)
- DistilBertForMaskedLM (DistilBERT model)
- ElectraForMaskedLM (ELECTRA model)
- ErnieForMaskedLM (ERNIE model)
- FlaubertWithLMHeadModel (FlauBERT model)
- FNetForMaskedLM (FNet model)
- FunnelForMaskedLM (Funnel Transformer model)
- IBertForMaskedLM (I-BERT model)
- LayoutLMForMaskedLM (LayoutLM model)
- LongformerForMaskedLM (Longformer model)
- LukeForMaskedLM (LUKE model)
- MBartForConditionalGeneration (mBART model)
- MegaForMaskedLM (MEGA model)
- MegatronBertForMaskedLM (Megatron-BERT model)
- MobileBertForMaskedLM (MobileBERT model)
- ModernBertForMaskedLM (ModernBERT model)
- MPNetForMaskedLM (MPNet model)
- MraForMaskedLM (MRA model)
- MvpForConditionalGeneration (MVP model)
- NezhaForMaskedLM (Nezha model)
- NystromformerForMaskedLM (Nyströmformer model)
- PerceiverForMaskedLM (Perceiver model)
- QDQBertForMaskedLM (QDQBert model)
- ReformerForMaskedLM (Reformer model)
- RemBertForMaskedLM (RemBERT model)
- RobertaForMaskedLM (RoBERTa model)
- RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
- RoCBertForMaskedLM (RoCBert model)
- RoFormerForMaskedLM (RoFormer model)
- SqueezeBertForMaskedLM (SqueezeBERT model)
- TapasForMaskedLM (TAPAS model)
- Wav2Vec2ForMaskedLM (Wav2Vec2 model)
- XLMWithLMHeadModel (XLM model)
- XLMRobertaForMaskedLM (XLM-RoBERTa model)
- XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model)
- XmodForMaskedLM (X-MOD model)
- YosoForMaskedLM (YOSO model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMaskedLM.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: TFAlbertForMaskedLM (ALBERT model)
- CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model)
- DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model)
- ElectraConfig configuration class: TFElectraForMaskedLM (ELECTRA model)
- FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model)
- FunnelConfig configuration class: TFFunnelForMaskedLM (Funnel Transformer model)
- LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model)
- LongformerConfig configuration class: TFLongformerForMaskedLM (Longformer model)
- MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model)
- MobileBertConfig configuration class: TFMobileBertForMaskedLM (MobileBERT model)
- RemBertConfig configuration class: TFRemBertForMaskedLM (RemBERT model)
- RoFormerConfig configuration class: TFRoFormerForMaskedLM (RoFormer model)
- RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
- TapasConfig configuration class: TFTapasForMaskedLM (TAPAS model)
- XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model)
- XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
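As a minimal sketch, assuming transformers with TensorFlow support is installed (the tiny AlbertConfig values are arbitrary, chosen only to keep the model small):

```python
from transformers import AlbertConfig, TFAutoModelForMaskedLM

# Hand-built configuration; weights are randomly initialized, not downloaded.
config = AlbertConfig(num_hidden_layers=2, hidden_size=64, num_attention_heads=2,
                      intermediate_size=128, embedding_size=32, vocab_size=1000)

# from_config selects TFAlbertForMaskedLM from the configuration class.
model = TFAutoModelForMaskedLM.from_config(config)
print(type(model).__name__)  # TFAlbertForMaskedLM
```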
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded, for example when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (for example, output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- TFAlbertForMaskedLM (ALBERT model)
- TFCamembertForMaskedLM (CamemBERT model)
- TFDistilBertForMaskedLM (DistilBERT model)
- TFElectraForMaskedLM (ELECTRA model)
- TFFlaubertWithLMHeadModel (FlauBERT model)
- TFFunnelForMaskedLM (Funnel Transformer model)
- TFLayoutLMForMaskedLM (LayoutLM model)
- TFLongformerForMaskedLM (Longformer model)
- TFMobileBertForMaskedLM (MobileBERT model)
- TFMPNetForMaskedLM (MPNet model)
- TFRemBertForMaskedLM (RemBERT model)
- TFRobertaForMaskedLM (RoBERTa model)
- TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
- TFRoFormerForMaskedLM (RoFormer model)
- TFTapasForMaskedLM (TAPAS model)
- TFXLMWithLMHeadModel (XLM model)
- TFXLMRobertaForMaskedLM (XLM-RoBERTa model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMaskedLM.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: FlaxAlbertForMaskedLM (ALBERT model)
- BigBirdConfig configuration class: FlaxBigBirdForMaskedLM (BigBird model)
- DistilBertConfig configuration class: FlaxDistilBertForMaskedLM (DistilBERT model)
- ElectraConfig configuration class: FlaxElectraForMaskedLM (ELECTRA model)
- MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
- RoFormerConfig configuration class: FlaxRoFormerForMaskedLM (RoFormer model)
- RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
- XLMRobertaConfig configuration class: FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- FlaxAlbertForMaskedLM (ALBERT model)
- FlaxBigBirdForMaskedLM (BigBird model)
- FlaxDistilBertForMaskedLM (DistilBERT model)
- FlaxElectraForMaskedLM (ELECTRA model)
- FlaxMBartForConditionalGeneration (mBART model)
- FlaxRobertaForMaskedLM (RoBERTa model)
- FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model)
- FlaxRoFormerForMaskedLM (RoFormer model)
- FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForMaskedLM.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
- BigBirdPegasusConfig configuration class: BigBirdPegasusForConditionalGeneration (BigBird-Pegasus model)
- BlenderbotConfig configuration class: BlenderbotForConditionalGeneration (Blenderbot model)
- BlenderbotSmallConfig configuration class: BlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model)
- GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
- LEDConfig configuration class: LEDForConditionalGeneration (LED model)
- LongT5Config configuration class: LongT5ForConditionalGeneration (LongT5 model)
- M2M100Config configuration class: M2M100ForConditionalGeneration (M2M100 model)
- MBartConfig configuration class: MBartForConditionalGeneration (mBART model)
- MT5Config configuration class: MT5ForConditionalGeneration (MT5 model)
- MvpConfig configuration class: MvpForConditionalGeneration (MVP model)
- NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model)
- PLBartConfig configuration class: PLBartForConditionalGeneration (PLBart model)
- PegasusConfig configuration class: PegasusForConditionalGeneration (Pegasus model)
- PegasusXConfig configuration class: PegasusXForConditionalGeneration (PEGASUS-X model)
- ProphetNetConfig configuration class: ProphetNetForConditionalGeneration (ProphetNet model)
- Qwen2AudioConfig configuration class: Qwen2AudioForConditionalGeneration (Qwen2Audio model)
- SeamlessM4TConfig configuration class: SeamlessM4TForTextToText (SeamlessM4T model)
- SeamlessM4Tv2Config configuration class: SeamlessM4Tv2ForTextToText (SeamlessM4Tv2 model)
- SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model)
- T5Config configuration class: T5ForConditionalGeneration (T5 model)
- UMT5Config configuration class: UMT5ForConditionalGeneration (UMT5 model)
- XLMProphetNetConfig configuration class: XLMProphetNetForConditionalGeneration (XLM-ProphetNet model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- BigBirdPegasusForConditionalGeneration (BigBird-Pegasus model)
- BlenderbotForConditionalGeneration (Blenderbot model)
- BlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- FSMTForConditionalGeneration (FairSeq Machine-Translation model)
- GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model)
- LEDForConditionalGeneration (LED model)
- LongT5ForConditionalGeneration (LongT5 model)
- M2M100ForConditionalGeneration (M2M100 model)
- MBartForConditionalGeneration (mBART model)
- MT5ForConditionalGeneration (MT5 model)
- MvpForConditionalGeneration (MVP model)
- NllbMoeForConditionalGeneration (NLLB-MOE model)
- PegasusForConditionalGeneration (Pegasus model)
- PegasusXForConditionalGeneration (PEGASUS-X model)
- PLBartForConditionalGeneration (PLBart model)
- ProphetNetForConditionalGeneration (ProphetNet model)
- Qwen2AudioForConditionalGeneration (Qwen2Audio model)
- SeamlessM4TForTextToText (SeamlessM4T model)
- SeamlessM4Tv2ForTextToText (SeamlessM4Tv2 model)
- SwitchTransformersForConditionalGeneration (SwitchTransformers model)
- T5ForConditionalGeneration (T5 model)
- UMT5ForConditionalGeneration (UMT5 model)
- XLMProphetNetForConditionalGeneration (XLM-ProphetNet model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")
>>> # Update configuration during loading
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/t5_tf_model_config.json")
>>> model = AutoModelForSeq2SeqLM.from_pretrained(
... "./tf_model/t5_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
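The evaluation-mode default mentioned above can be illustrated with a minimal stand-in for a model module. `TinyModule` is a hypothetical class for illustration only; real models inherit `eval()`/`train()` from their framework's module base class:

```python
# Minimal stand-in showing the training-mode flag that from_pretrained()
# sets to evaluation mode by default. TinyModule is hypothetical.

class TinyModule:
    def __init__(self):
        self.training = True  # freshly built modules start in training mode

    def eval(self):
        self.training = False  # e.g. dropout layers become no-ops
        return self

    def train(self, mode=True):
        self.training = mode
        return self

model = TinyModule().eval()  # what from_pretrained() effectively does
assert model.training is False
model.train()                # switch back before fine-tuning
```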
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
- BlenderbotConfig configuration class: TFBlenderbotForConditionalGeneration (Blenderbot model)
- BlenderbotSmallConfig configuration class: TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- LEDConfig configuration class: TFLEDForConditionalGeneration (LED model)
- MBartConfig configuration class: TFMBartForConditionalGeneration (mBART model)
- MT5Config configuration class: TFMT5ForConditionalGeneration (MT5 model)
- PegasusConfig configuration class: TFPegasusForConditionalGeneration (Pegasus model)
- T5Config configuration class: TFT5ForConditionalGeneration (T5 model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- TFBlenderbotForConditionalGeneration (Blenderbot model)
- TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- TFLEDForConditionalGeneration (LED model)
- TFMBartForConditionalGeneration (mBART model)
- TFMT5ForConditionalGeneration (MT5 model)
- TFPegasusForConditionalGeneration (Pegasus model)
- TFT5ForConditionalGeneration (T5 model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")
>>> # Update configuration during loading
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json")
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(
... "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config
... )
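The kwargs behavior documented above (keys matching configuration attributes override the config; the rest go to the model's __init__) can be sketched as follows. This is an illustrative simplification with hypothetical names (`SketchConfig`, `split_kwargs`), not the library's actual code:

```python
# Split **kwargs into configuration overrides and model-init arguments,
# mirroring the documented from_pretrained() behavior when no explicit
# config is passed. All names here are illustrative.

class SketchConfig:
    def __init__(self):
        self.output_attentions = False
        self.num_labels = 2

def split_kwargs(config, **kwargs):
    remaining = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)  # override the config attribute
        else:
            remaining[key] = value       # forwarded to the model __init__
    return config, remaining

config, model_kwargs = split_kwargs(
    SketchConfig(), output_attentions=True, custom_flag=1
)
```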
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
- BlenderbotConfig configuration class: FlaxBlenderbotForConditionalGeneration (Blenderbot model)
- BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- LongT5Config configuration class: FlaxLongT5ForConditionalGeneration (LongT5 model)
- MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model)
- MT5Config configuration class: FlaxMT5ForConditionalGeneration (MT5 model)
- PegasusConfig configuration class: FlaxPegasusForConditionalGeneration (Pegasus model)
- T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- FlaxBlenderbotForConditionalGeneration (Blenderbot model)
- FlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- FlaxLongT5ForConditionalGeneration (LongT5 model)
- FlaxMBartForConditionalGeneration (mBART model)
- FlaxMT5ForConditionalGeneration (MT5 model)
- FlaxPegasusForConditionalGeneration (Pegasus model)
- FlaxT5ForConditionalGeneration (T5 model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json")
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained(
... "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config
... )
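The fallback used when a config lacks a model_type, i.e. pattern matching on pretrained_model_name_or_path, can be sketched as below. The substring keys and their ordering here are illustrative assumptions; the real key set and precedence live inside the library:

```python
# Fallback class selection by pattern matching on the model name or path,
# used only when model_type is missing from the config. Illustrative only:
# the substring table is a hypothetical subset, not the library's real one.

NAME_PATTERNS = [  # checked in order; the first substring match wins,
    ("longt5", "FlaxLongT5ForConditionalGeneration"),
    ("mt5", "FlaxMT5ForConditionalGeneration"),   # "mt5" must precede "t5"
    ("t5", "FlaxT5ForConditionalGeneration"),
    ("pegasus", "FlaxPegasusForConditionalGeneration"),
]

def pick_by_name(name_or_path):
    lowered = name_or_path.lower()
    for pattern, class_name in NAME_PATTERNS:
        if pattern in lowered:
            return class_name
    raise ValueError(f"could not infer a model class from {name_or_path!r}")

picked = pick_by_name("google-t5/t5-base")
```

Note how more specific patterns must be checked before their substrings ("mt5" before "t5"), which is why pattern matching is only a fallback rather than the primary dispatch mechanism.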
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
- AlbertConfig configuration class: AlbertForSequenceClassification (ALBERT model)
- BigBirdConfig configuration class: BigBirdForSequenceClassification (BigBird model)
- BigBirdPegasusConfig configuration class: BigBirdPegasusForSequenceClassification (BigBird-Pegasus model)
- BloomConfig configuration class: BloomForSequenceClassification (BLOOM model)
- CTRLConfig configuration class: CTRLForSequenceClassification (CTRL model)
- CamembertConfig configuration class: CamembertForSequenceClassification (CamemBERT model)
- CanineConfig configuration class: CanineForSequenceClassification (CANINE model)
- Data2VecTextConfig configuration class: Data2VecTextForSequenceClassification (Data2VecText model)
- DiffLlamaConfig configuration class: DiffLlamaForSequenceClassification (DiffLlama model)
- DistilBertConfig configuration class: DistilBertForSequenceClassification (DistilBERT model)
- ElectraConfig configuration class: ElectraForSequenceClassification (ELECTRA model)
- ErnieConfig configuration class: ErnieForSequenceClassification (ERNIE model)
- ErnieMConfig configuration class: ErnieMForSequenceClassification (ErnieM model)
- FNetConfig configuration class: FNetForSequenceClassification (FNet model)
- FalconConfig configuration class: FalconForSequenceClassification (Falcon model)
- FlaubertConfig configuration class: FlaubertForSequenceClassification (FlauBERT model)
- FunnelConfig configuration class: FunnelForSequenceClassification (Funnel Transformer model)
- GPT2Config configuration class: GPT2ForSequenceClassification (OpenAI GPT-2 model)
- GPTBigCodeConfig configuration class: GPTBigCodeForSequenceClassification (GPTBigCode model)
- GPTJConfig configuration class: GPTJForSequenceClassification (GPT-J model)
- GPTNeoConfig configuration class: GPTNeoForSequenceClassification (GPT Neo model)
- GPTNeoXConfig configuration class: GPTNeoXForSequenceClassification (GPT NeoX model)
- GlmConfig configuration class: GlmForSequenceClassification (GLM model)
- IBertConfig configuration class: IBertForSequenceClassification (I-BERT model)
- JambaConfig configuration class: JambaForSequenceClassification (Jamba model)
- JetMoeConfig configuration class: JetMoeForSequenceClassification (JetMoe model)
- LEDConfig configuration class: LEDForSequenceClassification (LED model)
- LayoutLMConfig configuration class: LayoutLMForSequenceClassification (LayoutLM model)
- LayoutLMv2Config configuration class: LayoutLMv2ForSequenceClassification (LayoutLMv2 model)
- LayoutLMv3Config configuration class: LayoutLMv3ForSequenceClassification (LayoutLMv3 model)
- LiltConfig configuration class: LiltForSequenceClassification (LiLT model)
- LongformerConfig configuration class: LongformerForSequenceClassification (Longformer model)
- LukeConfig configuration class: LukeForSequenceClassification (LUKE model)
- MBartConfig configuration class: MBartForSequenceClassification (mBART model)
- MPNetConfig configuration class: MPNetForSequenceClassification (MPNet model)
- MT5Config configuration class: MT5ForSequenceClassification (MT5 model)
- MarkupLMConfig configuration class: MarkupLMForSequenceClassification (MarkupLM model)
- MegaConfig configuration class: MegaForSequenceClassification (MEGA model)
- MegatronBertConfig configuration class: MegatronBertForSequenceClassification (Megatron-BERT model)
- MixtralConfig configuration class: MixtralForSequenceClassification (Mixtral model)
- MobileBertConfig configuration class: MobileBertForSequenceClassification (MobileBERT model)
- ModernBertConfig configuration class: ModernBertForSequenceClassification (ModernBERT model)
- MptConfig configuration class: MptForSequenceClassification (MPT model)
- MraConfig configuration class: MraForSequenceClassification (MRA model)
- MvpConfig configuration class: MvpForSequenceClassification (MVP model)
- NemotronConfig configuration class: NemotronForSequenceClassification (Nemotron model)
- NezhaConfig configuration class: NezhaForSequenceClassification (Nezha model)
- NystromformerConfig configuration class: NystromformerForSequenceClassification (Nyströmformer model)
- OPTConfig configuration class: OPTForSequenceClassification (OPT model)
- OpenLlamaConfig configuration class: OpenLlamaForSequenceClassification (OpenLlama model)
- PLBartConfig configuration class: PLBartForSequenceClassification (PLBart model)
- PerceiverConfig configuration class: PerceiverForSequenceClassification (Perceiver model)
- PersimmonConfig configuration class: PersimmonForSequenceClassification (Persimmon model)
- Phi3Config configuration class: Phi3ForSequenceClassification (Phi3 model)
- PhiConfig configuration class: PhiForSequenceClassification (Phi model)
- PhimoeConfig configuration class: PhimoeForSequenceClassification (Phimoe model)
- QDQBertConfig configuration class: QDQBertForSequenceClassification (QDQBert model)
- Qwen2Config configuration class: Qwen2ForSequenceClassification (Qwen2 model)
- Qwen2MoeConfig configuration class: Qwen2MoeForSequenceClassification (Qwen2MoE model)
- ReformerConfig configuration class: ReformerForSequenceClassification (Reformer model)
- RemBertConfig configuration class: RemBertForSequenceClassification (RemBERT model)
- RoCBertConfig configuration class: RoCBertForSequenceClassification (RoCBert model)
- RoFormerConfig configuration class: RoFormerForSequenceClassification (RoFormer model)
- RobertaConfig configuration class: RobertaForSequenceClassification (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
- SqueezeBertConfig configuration class: SqueezeBertForSequenceClassification (SqueezeBERT model)
- StableLmConfig configuration class: StableLmForSequenceClassification (StableLm model)
- Starcoder2Config configuration class: Starcoder2ForSequenceClassification (Starcoder2 model)
- T5Config configuration class: T5ForSequenceClassification (T5 model)
- TapasConfig configuration class: TapasForSequenceClassification (TAPAS model)
- TransfoXLConfig configuration class: TransfoXLForSequenceClassification (Transformer-XL model)
- UMT5Config configuration class: UMT5ForSequenceClassification (UMT5 model)
- XLMConfig configuration class: XLMForSequenceClassification (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForSequenceClassification (XLM-RoBERTa model)
- XLMRobertaXLConfig configuration class: XLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model)
- XLNetConfig configuration class: XLNetForSequenceClassification (XLNet model)
- XmodConfig configuration class: XmodForSequenceClassification (X-MOD model)
- YosoConfig configuration class: YosoForSequenceClassification (YOSO model)
- ZambaConfig configuration class: ZambaForSequenceClassification (Zamba model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
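For instance, from_config() can be sketched with a locally built configuration; the tiny sizes below are illustrative only, and no weights are downloaded (the model is randomly initialized):

```python
from transformers import AutoModelForSequenceClassification, BertConfig

# Hypothetical tiny config built locally; from_config never loads weights.
config = BertConfig(
    hidden_size=32,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=3,
)

# from_config dispatches on the config class:
# BertConfig -> BertForSequenceClassification
model = AutoModelForSequenceClassification.from_config(config)
print(type(model).__name__)     # BertForSequenceClassification
print(model.config.num_labels)  # 3
```

Because dispatch happens on the configuration class rather than a checkpoint name, this path is useful for training a model of a known architecture from scratch.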
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint to a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
AlbertForSequenceClassification (ALBERT model)
BigBirdForSequenceClassification (BigBird model)
BigBirdPegasusForSequenceClassification (BigBird-Pegasus model)
BloomForSequenceClassification (BLOOM model)
CamembertForSequenceClassification (CamemBERT model)
CanineForSequenceClassification (CANINE model)
CTRLForSequenceClassification (CTRL model)
Data2VecTextForSequenceClassification (Data2VecText model)
DiffLlamaForSequenceClassification (DiffLlama model)
DistilBertForSequenceClassification (DistilBERT model)
ElectraForSequenceClassification (ELECTRA model)
ErnieForSequenceClassification (ERNIE model)
ErnieMForSequenceClassification (ErnieM model)
FalconForSequenceClassification (Falcon model)
FlaubertForSequenceClassification (FlauBERT model)
FNetForSequenceClassification (FNet model)
FunnelForSequenceClassification (Funnel Transformer model)
GlmForSequenceClassification (GLM model)
GPT2ForSequenceClassification (GPT-Sw3 model)
GPT2ForSequenceClassification (OpenAI GPT-2 model)
GPTBigCodeForSequenceClassification (GPTBigCode model)
GPTNeoForSequenceClassification (GPT Neo model)
GPTNeoXForSequenceClassification (GPT NeoX model)
GPTJForSequenceClassification (GPT-J model)
IBertForSequenceClassification (I-BERT model)
JambaForSequenceClassification (Jamba model)
JetMoeForSequenceClassification (JetMoe model)
LayoutLMForSequenceClassification (LayoutLM model)
LayoutLMv2ForSequenceClassification (LayoutLMv2 model)
LayoutLMv3ForSequenceClassification (LayoutLMv3 model)
LEDForSequenceClassification (LED model)
LiltForSequenceClassification (LiLT model)
LongformerForSequenceClassification (Longformer model)
LukeForSequenceClassification (LUKE model)
MarkupLMForSequenceClassification (MarkupLM model)
MBartForSequenceClassification (mBART model)
MegaForSequenceClassification (MEGA model)
MegatronBertForSequenceClassification (Megatron-BERT model)
MixtralForSequenceClassification (Mixtral model)
MobileBertForSequenceClassification (MobileBERT model)
ModernBertForSequenceClassification (ModernBERT model)
MPNetForSequenceClassification (MPNet model)
MptForSequenceClassification (MPT model)
MraForSequenceClassification (MRA model)
MT5ForSequenceClassification (MT5 model)
MvpForSequenceClassification (MVP model)
NemotronForSequenceClassification (Nemotron model)
NezhaForSequenceClassification (Nezha model)
NystromformerForSequenceClassification (Nyströmformer model)
OpenLlamaForSequenceClassification (OpenLlama model)
OPTForSequenceClassification (OPT model)
PerceiverForSequenceClassification (Perceiver model)
PersimmonForSequenceClassification (Persimmon model)
PhiForSequenceClassification (Phi model)
Phi3ForSequenceClassification (Phi3 model)
PhimoeForSequenceClassification (Phimoe model)
PLBartForSequenceClassification (PLBart model)
QDQBertForSequenceClassification (QDQBert model)
Qwen2ForSequenceClassification (Qwen2 model)
Qwen2MoeForSequenceClassification (Qwen2MoE model)
ReformerForSequenceClassification (Reformer model)
RemBertForSequenceClassification (RemBERT model)
RobertaForSequenceClassification (RoBERTa model)
RobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
RoCBertForSequenceClassification (RoCBert model)
RoFormerForSequenceClassification (RoFormer model)
SqueezeBertForSequenceClassification (SqueezeBERT model)
StableLmForSequenceClassification (StableLm model)
Starcoder2ForSequenceClassification (Starcoder2 model)
T5ForSequenceClassification (T5 model)
TapasForSequenceClassification (TAPAS model)
TransfoXLForSequenceClassification (Transformer-XL model)
UMT5ForSequenceClassification (UMT5 model)
XLMForSequenceClassification (XLM model)
XLMRobertaForSequenceClassification (XLM-RoBERTa model)
XLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model)
XLNetForSequenceClassification (XLNet model)
XmodForSequenceClassification (X-MOD model)
YosoForSequenceClassification (YOSO model)
ZambaForSequenceClassification (Zamba model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSequenceClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig configuration class: TFAlbertForSequenceClassification (ALBERT model)
CTRLConfig configuration class: TFCTRLForSequenceClassification (CTRL model)
CamembertConfig configuration class: TFCamembertForSequenceClassification (CamemBERT model)
DistilBertConfig configuration class: TFDistilBertForSequenceClassification (DistilBERT model)
ElectraConfig configuration class: TFElectraForSequenceClassification (ELECTRA model)
FlaubertConfig configuration class: TFFlaubertForSequenceClassification (FlauBERT model)
FunnelConfig configuration class: TFFunnelForSequenceClassification (Funnel Transformer model)
GPT2Config configuration class: TFGPT2ForSequenceClassification (OpenAI GPT-2 model)
GPTJConfig configuration class: TFGPTJForSequenceClassification (GPT-J model)
LayoutLMConfig configuration class: TFLayoutLMForSequenceClassification (LayoutLM model)
LayoutLMv3Config configuration class: TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model)
LongformerConfig configuration class: TFLongformerForSequenceClassification (Longformer model)
MPNetConfig configuration class: TFMPNetForSequenceClassification (MPNet model)
MobileBertConfig configuration class: TFMobileBertForSequenceClassification (MobileBERT model)
RemBertConfig configuration class: TFRemBertForSequenceClassification (RemBERT model)
RoFormerConfig configuration class: TFRoFormerForSequenceClassification (RoFormer model)
RobertaConfig configuration class: TFRobertaForSequenceClassification (RoBERTa model)
RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
TapasConfig configuration class: TFTapasForSequenceClassification (TAPAS model)
TransfoXLConfig configuration class: TFTransfoXLForSequenceClassification (Transformer-XL model)
XLMConfig configuration class: TFXLMForSequenceClassification (XLM model)
XLMRobertaConfig configuration class: TFXLMRobertaForSequenceClassification (XLM-RoBERTa model)
XLNetConfig configuration class: TFXLNetForSequenceClassification (XLNet model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. Otherwise, the default is the manual "eager" implementation.
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model to a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFAlbertForSequenceClassification (ALBERT model)
TFCamembertForSequenceClassification (CamemBERT model)
TFCTRLForSequenceClassification (CTRL model)
TFDistilBertForSequenceClassification (DistilBERT model)
TFElectraForSequenceClassification (ELECTRA model)
TFFlaubertForSequenceClassification (FlauBERT model)
TFFunnelForSequenceClassification (Funnel Transformer model)
TFGPT2ForSequenceClassification (GPT-Sw3 model)
TFGPT2ForSequenceClassification (OpenAI GPT-2 model)
TFGPTJForSequenceClassification (GPT-J model)
TFLayoutLMForSequenceClassification (LayoutLM model)
TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model)
TFLongformerForSequenceClassification (Longformer model)
TFMobileBertForSequenceClassification (MobileBERT model)
TFMPNetForSequenceClassification (MPNet model)
TFRemBertForSequenceClassification (RemBERT model)
TFRobertaForSequenceClassification (RoBERTa model)
TFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
TFRoFormerForSequenceClassification (RoFormer model)
TFTapasForSequenceClassification (TAPAS model)
TFTransfoXLForSequenceClassification (Transformer-XL model)
TFXLMForSequenceClassification (XLM model)
TFXLMRobertaForSequenceClassification (XLM-RoBERTa model)
TFXLNetForSequenceClassification (XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSequenceClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig configuration class: FlaxAlbertForSequenceClassification (ALBERT model)
BigBirdConfig configuration class: FlaxBigBirdForSequenceClassification (BigBird model)
DistilBertConfig configuration class: FlaxDistilBertForSequenceClassification (DistilBERT model)
ElectraConfig configuration class: FlaxElectraForSequenceClassification (ELECTRA model)
MBartConfig configuration class: FlaxMBartForSequenceClassification (mBART model)
RoFormerConfig configuration class: FlaxRoFormerForSequenceClassification (RoFormer model)
RobertaConfig configuration class: FlaxRobertaForSequenceClassification (RoBERTa model)
RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
XLMRobertaConfig configuration class: FlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. Otherwise, the default is the manual "eager" implementation.
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model to a Flax model
using the provided conversion scripts and loading the Flax model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
FlaxAlbertForSequenceClassification (ALBERT model)
FlaxBigBirdForSequenceClassification (BigBird model)
FlaxDistilBertForSequenceClassification (DistilBERT model)
FlaxElectraForSequenceClassification (ELECTRA model)
FlaxMBartForSequenceClassification (mBART model)
FlaxRobertaForSequenceClassification (RoBERTa model)
FlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model)
FlaxRoFormerForSequenceClassification (RoFormer model)
FlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig configuration class: AlbertForMultipleChoice (ALBERT model)
BigBirdConfig configuration class: BigBirdForMultipleChoice (BigBird model)
CamembertConfig configuration class: CamembertForMultipleChoice (CamemBERT model)
CanineConfig configuration class: CanineForMultipleChoice (CANINE model)
Data2VecTextConfig configuration class: Data2VecTextForMultipleChoice (Data2VecText model)
DistilBertConfig configuration class: DistilBertForMultipleChoice (DistilBERT model)
ElectraConfig configuration class: ElectraForMultipleChoice (ELECTRA model)
ErnieConfig configuration class: ErnieForMultipleChoice (ERNIE model)
ErnieMConfig configuration class: ErnieMForMultipleChoice (ErnieM model)
FNetConfig configuration class: FNetForMultipleChoice (FNet model)
FlaubertConfig configuration class: FlaubertForMultipleChoice (FlauBERT model)
FunnelConfig configuration class: FunnelForMultipleChoice (Funnel Transformer model)
IBertConfig configuration class: IBertForMultipleChoice (I-BERT model)
LongformerConfig configuration class: LongformerForMultipleChoice (Longformer model)
LukeConfig configuration class: LukeForMultipleChoice (LUKE model)
MPNetConfig configuration class: MPNetForMultipleChoice (MPNet model)
MegaConfig configuration class: MegaForMultipleChoice (MEGA model)
MegatronBertConfig configuration class: MegatronBertForMultipleChoice (Megatron-BERT model)
MobileBertConfig configuration class: MobileBertForMultipleChoice (MobileBERT model)
MraConfig configuration class: MraForMultipleChoice (MRA model)
NezhaConfig configuration class: NezhaForMultipleChoice (Nezha model)
NystromformerConfig configuration class: NystromformerForMultipleChoice (Nyströmformer model)
QDQBertConfig configuration class: QDQBertForMultipleChoice (QDQBert model)
RemBertConfig configuration class: RemBertForMultipleChoice (RemBERT model)
RoCBertConfig configuration class: RoCBertForMultipleChoice (RoCBert model)
RoFormerConfig configuration class: RoFormerForMultipleChoice (RoFormer model)
RobertaConfig configuration class: RobertaForMultipleChoice (RoBERTa model)
RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
SqueezeBertConfig configuration class: SqueezeBertForMultipleChoice (SqueezeBERT model)
XLMConfig configuration class: XLMForMultipleChoice (XLM model)
XLMRobertaConfig configuration class: XLMRobertaForMultipleChoice (XLM-RoBERTa model)
XLMRobertaXLConfig configuration class: XLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model)
XLNetConfig configuration class: XLNetForMultipleChoice (XLNet model)
XmodConfig configuration class: XmodForMultipleChoice (X-MOD model)
YosoConfig configuration class: YosoForMultipleChoice (YOSO model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. Otherwise, the default is the manual "eager" implementation.
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
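As a sketch of what the multiple-choice head computes, the following builds a randomly initialized model from a hypothetical tiny local config (no download) and runs a forward pass. Multiple-choice inputs are shaped (batch_size, num_choices, seq_len), and the head returns one logit per choice:

```python
import torch
from transformers import AutoModelForMultipleChoice, BertConfig

# Tiny illustrative config; from_config gives randomly initialized weights.
config = BertConfig(
    hidden_size=32,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=64,
    vocab_size=100,
)
model = AutoModelForMultipleChoice.from_config(config)
model.eval()

# (batch_size=2, num_choices=4, seq_len=8); the model flattens the choice
# dimension, encodes each choice, and scores them against one another.
input_ids = torch.randint(0, 100, (2, 4, 8))
attention_mask = torch.ones_like(input_ids)
with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
print(logits.shape)  # torch.Size([2, 4])
```

Applying a softmax over the last dimension of the logits yields a probability per candidate answer, which is how multiple-choice tasks such as SWAG are scored.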
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint to a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
AlbertForMultipleChoice (ALBERT model)
BigBirdForMultipleChoice (BigBird model)
CamembertForMultipleChoice (CamemBERT model)
CanineForMultipleChoice (CANINE model)
Data2VecTextForMultipleChoice (Data2VecText model)
DistilBertForMultipleChoice (DistilBERT model)
ElectraForMultipleChoice (ELECTRA model)
ErnieForMultipleChoice (ERNIE model)
ErnieMForMultipleChoice (ErnieM model)
FlaubertForMultipleChoice (FlauBERT model)
FNetForMultipleChoice (FNet model)
FunnelForMultipleChoice (Funnel Transformer model)
IBertForMultipleChoice (I-BERT model)
LongformerForMultipleChoice (Longformer model)
LukeForMultipleChoice (LUKE model)
MegaForMultipleChoice (MEGA model)
MegatronBertForMultipleChoice (Megatron-BERT model)
MobileBertForMultipleChoice (MobileBERT model)
MPNetForMultipleChoice (MPNet model)
MraForMultipleChoice (MRA model)
NezhaForMultipleChoice (Nezha model)
NystromformerForMultipleChoice (Nyströmformer model)
QDQBertForMultipleChoice (QDQBert model)
RemBertForMultipleChoice (RemBERT model)
RobertaForMultipleChoice (RoBERTa model)
RobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
RoCBertForMultipleChoice (RoCBert model)
RoFormerForMultipleChoice (RoFormer model)
SqueezeBertForMultipleChoice (SqueezeBERT model)
XLMForMultipleChoice (XLM model)
XLMRobertaForMultipleChoice (XLM-RoBERTa model)
XLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model)
XLNetForMultipleChoice (XLNet model)
XmodForMultipleChoice (X-MOD model)
YosoForMultipleChoice (YOSO model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMultipleChoice.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:

- AlbertConfig configuration class: TFAlbertForMultipleChoice (ALBERT model)
- CamembertConfig configuration class: TFCamembertForMultipleChoice (CamemBERT model)
- DistilBertConfig configuration class: TFDistilBertForMultipleChoice (DistilBERT model)
- ElectraConfig configuration class: TFElectraForMultipleChoice (ELECTRA model)
- FlaubertConfig configuration class: TFFlaubertForMultipleChoice (FlauBERT model)
- FunnelConfig configuration class: TFFunnelForMultipleChoice (Funnel Transformer model)
- LongformerConfig configuration class: TFLongformerForMultipleChoice (Longformer model)
- MPNetConfig configuration class: TFMPNetForMultipleChoice (MPNet model)
- MobileBertConfig configuration class: TFMobileBertForMultipleChoice (MobileBERT model)
- RemBertConfig configuration class: TFRemBertForMultipleChoice (RemBERT model)
- RoFormerConfig configuration class: TFRoFormerForMultipleChoice (RoFormer model)
- RobertaConfig configuration class: TFRobertaForMultipleChoice (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
- XLMConfig configuration class: TFXLMForMultipleChoice (XLM model)
- XLMRobertaConfig configuration class: TFXLMRobertaForMultipleChoice (XLM-RoBERTa model)
- XLNetConfig configuration class: TFXLNetForMultipleChoice (XLNet model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual
implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The
default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
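Because from_config builds the model from the configuration alone, the weights are randomly initialized and nothing is downloaded. A minimal sketch, assuming transformers and TensorFlow are installed; the tiny DistilBERT dimensions below are illustrative, not a recommended setup:

```python
from transformers import DistilBertConfig, TFAutoModelForMultipleChoice

# A tiny, randomly initialized configuration -- nothing is downloaded.
config = DistilBertConfig(dim=32, n_layers=1, n_heads=2, hidden_dim=64)

# Dispatch happens on the config class: DistilBertConfig selects
# TFDistilBertForMultipleChoice.
model = TFAutoModelForMultipleChoice.from_config(config)
```

To load trained weights instead, use from_pretrained() as shown in the examples below.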
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g.,
./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this
case, from_pt should be set to True and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can
be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a
configuration JSON file named config.json is found in the directory.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision can be any
identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying
model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration
attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that
do not correspond to any configuration attribute will be passed to the underlying model’s __init__
function.

Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path:
- TFAlbertForMultipleChoice (ALBERT model)
- TFCamembertForMultipleChoice (CamemBERT model)
- TFDistilBertForMultipleChoice (DistilBERT model)
- TFElectraForMultipleChoice (ELECTRA model)
- TFFlaubertForMultipleChoice (FlauBERT model)
- TFFunnelForMultipleChoice (Funnel Transformer model)
- TFLongformerForMultipleChoice (Longformer model)
- TFMobileBertForMultipleChoice (MobileBERT model)
- TFMPNetForMultipleChoice (MPNet model)
- TFRemBertForMultipleChoice (RemBERT model)
- TFRobertaForMultipleChoice (RoBERTa model)
- TFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
- TFRoFormerForMultipleChoice (RoFormer model)
- TFXLMForMultipleChoice (XLM model)
- TFXLMRobertaForMultipleChoice (XLM-RoBERTa model)
- TFXLNetForMultipleChoice (XLNet model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMultipleChoice.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:

- AlbertConfig configuration class: FlaxAlbertForMultipleChoice (ALBERT model)
- BigBirdConfig configuration class: FlaxBigBirdForMultipleChoice (BigBird model)
- DistilBertConfig configuration class: FlaxDistilBertForMultipleChoice (DistilBERT model)
- ElectraConfig configuration class: FlaxElectraForMultipleChoice (ELECTRA model)
- RoFormerConfig configuration class: FlaxRoFormerForMultipleChoice (RoFormer model)
- RobertaConfig configuration class: FlaxRobertaForMultipleChoice (RoBERTa model)
- RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
- XLMRobertaConfig configuration class: FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual
implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The
default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g.,
./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this
case, from_pt should be set to True and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a Flax model
using the provided conversion scripts and loading the Flax model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can
be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a
configuration JSON file named config.json is found in the directory.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision can be any
identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying
model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration
attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that
do not correspond to any configuration attribute will be passed to the underlying model’s __init__
function.

Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path:
- FlaxAlbertForMultipleChoice (ALBERT model)
- FlaxBigBirdForMultipleChoice (BigBird model)
- FlaxDistilBertForMultipleChoice (DistilBERT model)
- FlaxElectraForMultipleChoice (ELECTRA model)
- FlaxRobertaForMultipleChoice (RoBERTa model)
- FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model)
- FlaxRoFormerForMultipleChoice (RoFormer model)
- FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model)

Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:

- ErnieConfig configuration class: ErnieForNextSentencePrediction (ERNIE model)
- FNetConfig configuration class: FNetForNextSentencePrediction (FNet model)
- MegatronBertConfig configuration class: MegatronBertForNextSentencePrediction (Megatron-BERT model)
- MobileBertConfig configuration class: MobileBertForNextSentencePrediction (MobileBERT model)
- NezhaConfig configuration class: NezhaForNextSentencePrediction (Nezha model)
- QDQBertConfig configuration class: QDQBertForNextSentencePrediction (QDQBert model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual
implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The
default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
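Since from_config builds the model from the configuration alone (random weights, no download), it is convenient for creating small scratch models. A minimal sketch, assuming transformers and PyTorch are installed; the tiny ERNIE dimensions below are illustrative, not a recommended setup:

```python
from transformers import ErnieConfig, AutoModelForNextSentencePrediction

# A tiny, randomly initialized configuration -- nothing is downloaded.
config = ErnieConfig(
    hidden_size=32, num_hidden_layers=1, num_attention_heads=2, intermediate_size=64
)

# Dispatch happens on the config class: ErnieConfig selects
# ErnieForNextSentencePrediction.
model = AutoModelForNextSentencePrediction.from_config(config)
```

To load trained weights instead, use from_pretrained() as shown in the examples below.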
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g.,
./my_model_directory/.
- A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In
this case, from_tf should be set to True and a configuration object should be provided as
config argument. This loading path is slower than converting the TensorFlow checkpoint into a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can
be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a
configuration JSON file named config.json is found in the directory.

This option can be used if you want to create a model from a pretrained configuration but load your own
weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a
simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision can be any
identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying
model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration
attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that
do not correspond to any configuration attribute will be passed to the underlying model’s __init__
function.

Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path:
- ErnieForNextSentencePrediction (ERNIE model)
- FNetForNextSentencePrediction (FNet model)
- MegatronBertForNextSentencePrediction (Megatron-BERT model)
- MobileBertForNextSentencePrediction (MobileBERT model)
- NezhaForNextSentencePrediction (Nezha model)
- QDQBertForNextSentencePrediction (QDQBert model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForNextSentencePrediction.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:

- MobileBertConfig configuration class: TFMobileBertForNextSentencePrediction (MobileBERT model)

attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual
implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The
default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
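Only MobileBERT is mapped for this TensorFlow auto class, so a from_config sketch necessarily uses MobileBertConfig. A minimal example, assuming transformers and TensorFlow are installed (weights are randomly initialized, nothing is downloaded):

```python
from transformers import MobileBertConfig, TFAutoModelForNextSentencePrediction

# Default MobileBERT configuration; weights are random, nothing is downloaded.
config = MobileBertConfig()

# Dispatch happens on the config class: MobileBertConfig selects
# TFMobileBertForNextSentencePrediction.
model = TFAutoModelForNextSentencePrediction.from_config(config)
```

To load trained weights instead, use from_pretrained() as shown in the examples below.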
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g.,
./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this
case, from_pt should be set to True and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can
be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a
configuration JSON file named config.json is found in the directory.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision can be any
identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying
model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration
attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that
do not correspond to any configuration attribute will be passed to the underlying model’s __init__
function.

Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path:
- TFMobileBertForNextSentencePrediction (MobileBERT model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual
implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The
default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g.,
./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this
case, from_pt should be set to True and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a Flax model
using the provided conversion scripts and loading the Flax model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can
be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a
configuration JSON file named config.json is found in the directory.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128',
'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision can be any
identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying
model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration
attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that
do not correspond to any configuration attribute will be passed to the underlying model’s __init__
function.

Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: AlbertForTokenClassification
(ALBERT model)BigBirdConfig
configuration class: BigBirdForTokenClassification
(BigBird model)BloomConfig
configuration class: BloomForTokenClassification
(BLOOM model)BrosConfig
configuration class: BrosForTokenClassification
(BROS model)CamembertConfig
configuration class: CamembertForTokenClassification
(CamemBERT model)CanineConfig
configuration class: CanineForTokenClassification
(CANINE model)Data2VecTextConfig
configuration class: Data2VecTextForTokenClassification
(Data2VecText model)DiffLlamaConfig
configuration class: DiffLlamaForTokenClassification
(DiffLlama model)DistilBertConfig
configuration class: DistilBertForTokenClassification
(DistilBERT model)ElectraConfig
configuration class: ElectraForTokenClassification
(ELECTRA model)ErnieConfig
configuration class: ErnieForTokenClassification
(ERNIE model)ErnieMConfig
configuration class: ErnieMForTokenClassification
(ErnieM model)FNetConfig
configuration class: FNetForTokenClassification
(FNet model)FalconConfig
configuration class: FalconForTokenClassification
(Falcon model)FlaubertConfig
configuration class: FlaubertForTokenClassification
(FlauBERT model)FunnelConfig
configuration class: FunnelForTokenClassification
(Funnel Transformer model)GPT2Config
configuration class: GPT2ForTokenClassification
(OpenAI GPT-2 model)GPTBigCodeConfig
configuration class: GPTBigCodeForTokenClassification
(GPTBigCode model)GPTNeoConfig
configuration class: GPTNeoForTokenClassification
(GPT Neo model)GPTNeoXConfig
configuration class: GPTNeoXForTokenClassification
(GPT NeoX model)GlmConfig
configuration class: GlmForTokenClassification
(GLM model)IBertConfig
configuration class: IBertForTokenClassification
(I-BERT model)LayoutLMConfig
configuration class: LayoutLMForTokenClassification
(LayoutLM model)LayoutLMv2Config
configuration class: LayoutLMv2ForTokenClassification
(LayoutLMv2 model)LayoutLMv3Config
configuration class: LayoutLMv3ForTokenClassification
(LayoutLMv3 model)LiltConfig
configuration class: LiltForTokenClassification
(LiLT model)LlamaConfig
configuration class: LlamaForTokenClassification
(LLaMA model)LongformerConfig
configuration class: LongformerForTokenClassification
(Longformer model)LukeConfig
configuration class: LukeForTokenClassification
(LUKE model)MPNetConfig
configuration class: MPNetForTokenClassification
(MPNet model)MT5Config
configuration class: MT5ForTokenClassification
(MT5 model)MarkupLMConfig
configuration class: MarkupLMForTokenClassification
(MarkupLM model)MegaConfig
configuration class: MegaForTokenClassification
(MEGA model)MegatronBertConfig
configuration class: MegatronBertForTokenClassification
(Megatron-BERT model)MixtralConfig
configuration class: MixtralForTokenClassification
(Mixtral model)MobileBertConfig
configuration class: MobileBertForTokenClassification
(MobileBERT model)ModernBertConfig
configuration class: ModernBertForTokenClassification
(ModernBERT model)MptConfig
configuration class: MptForTokenClassification
(MPT model)MraConfig
configuration class: MraForTokenClassification
(MRA model)NemotronConfig
configuration class: NemotronForTokenClassification
(Nemotron model)NezhaConfig
configuration class: NezhaForTokenClassification
(Nezha model)NystromformerConfig
configuration class: NystromformerForTokenClassification
(Nyströmformer model)PersimmonConfig
configuration class: PersimmonForTokenClassification
(Persimmon model)Phi3Config
configuration class: Phi3ForTokenClassification
(Phi3 model)PhiConfig
configuration class: PhiForTokenClassification
(Phi model)QDQBertConfig
configuration class: QDQBertForTokenClassification
(QDQBert model)Qwen2Config
configuration class: Qwen2ForTokenClassification
(Qwen2 model)Qwen2MoeConfig
configuration class: Qwen2MoeForTokenClassification
(Qwen2MoE model)RemBertConfig
configuration class: RemBertForTokenClassification
(RemBERT model)RoCBertConfig
configuration class: RoCBertForTokenClassification
(RoCBert model)RoFormerConfig
configuration class: RoFormerForTokenClassification
(RoFormer model)RobertaConfig
configuration class: RobertaForTokenClassification
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: RobertaPreLayerNormForTokenClassification
(RoBERTa-PreLayerNorm model)SqueezeBertConfig
configuration class: SqueezeBertForTokenClassification
(SqueezeBERT model)StableLmConfig
configuration class: StableLmForTokenClassification
(StableLm model)Starcoder2Config
configuration class: Starcoder2ForTokenClassification
(Starcoder2 model)T5Config
configuration class: T5ForTokenClassification
(T5 model)UMT5Config
configuration class: UMT5ForTokenClassification
(UMT5 model)XLMConfig
configuration class: XLMForTokenClassification
(XLM model)XLMRobertaConfig
configuration class: XLMRobertaForTokenClassification
(XLM-RoBERTa model)XLMRobertaXLConfig
configuration class: XLMRobertaXLForTokenClassification
(XLM-RoBERTa-XL model)XLNetConfig
configuration class: XLNetForTokenClassification
(XLNet model)XmodConfig
configuration class: XmodForTokenClassification
(X-MOD model)YosoConfig
configuration class: YosoForTokenClassification
(YOSO model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
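As a minimal runnable sketch of from_config() (the configuration here is built locally, so nothing is downloaded and the model weights are randomly initialized):

```python
from transformers import AutoModelForTokenClassification, BertConfig

# Build a BERT configuration locally; from_config() selects the concrete class
# (here BertForTokenClassification) from the config's model_type.
config = BertConfig(num_labels=5)
model = AutoModelForTokenClassification.from_config(config)

print(type(model).__name__)  # BertForTokenClassification
print(model.config.num_labels)  # 5
```

Because only the configuration is used, this is suitable for creating a freshly initialized model to train from scratch; use from_pretrained() when you want pretrained weights.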
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint in a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the
underlying model’s __init__ method (we assume all relevant updates to the configuration have
already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
AlbertForTokenClassification
(ALBERT model)BigBirdForTokenClassification
(BigBird model)BloomForTokenClassification
(BLOOM model)BrosForTokenClassification
(BROS model)CamembertForTokenClassification
(CamemBERT model)CanineForTokenClassification
(CANINE model)Data2VecTextForTokenClassification
(Data2VecText model)DiffLlamaForTokenClassification
(DiffLlama model)DistilBertForTokenClassification
(DistilBERT model)ElectraForTokenClassification
(ELECTRA model)ErnieForTokenClassification
(ERNIE model)ErnieMForTokenClassification
(ErnieM model)FalconForTokenClassification
(Falcon model)FlaubertForTokenClassification
(FlauBERT model)FNetForTokenClassification
(FNet model)FunnelForTokenClassification
(Funnel Transformer model)GlmForTokenClassification
(GLM model)GPT2ForTokenClassification
(GPT-Sw3 model)GPT2ForTokenClassification
(OpenAI GPT-2 model)GPTBigCodeForTokenClassification
(GPTBigCode model)GPTNeoForTokenClassification
(GPT Neo model)GPTNeoXForTokenClassification
(GPT NeoX model)IBertForTokenClassification
(I-BERT model)LayoutLMForTokenClassification
(LayoutLM model)LayoutLMv2ForTokenClassification
(LayoutLMv2 model)LayoutLMv3ForTokenClassification
(LayoutLMv3 model)LiltForTokenClassification
(LiLT model)LlamaForTokenClassification
(LLaMA model)LongformerForTokenClassification
(Longformer model)LukeForTokenClassification
(LUKE model)MarkupLMForTokenClassification
(MarkupLM model)MegaForTokenClassification
(MEGA model)MegatronBertForTokenClassification
(Megatron-BERT model)MixtralForTokenClassification
(Mixtral model)MobileBertForTokenClassification
(MobileBERT model)ModernBertForTokenClassification
(ModernBERT model)MPNetForTokenClassification
(MPNet model)MptForTokenClassification
(MPT model)MraForTokenClassification
(MRA model)MT5ForTokenClassification
(MT5 model)NemotronForTokenClassification
(Nemotron model)NezhaForTokenClassification
(Nezha model)NystromformerForTokenClassification
(Nyströmformer model)PersimmonForTokenClassification
(Persimmon model)PhiForTokenClassification
(Phi model)Phi3ForTokenClassification
(Phi3 model)QDQBertForTokenClassification
(QDQBert model)Qwen2ForTokenClassification
(Qwen2 model)Qwen2MoeForTokenClassification
(Qwen2MoE model)RemBertForTokenClassification
(RemBERT model)RobertaForTokenClassification
(RoBERTa model)RobertaPreLayerNormForTokenClassification
(RoBERTa-PreLayerNorm model)RoCBertForTokenClassification
(RoCBert model)RoFormerForTokenClassification
(RoFormer model)SqueezeBertForTokenClassification
(SqueezeBERT model)StableLmForTokenClassification
(StableLm model)Starcoder2ForTokenClassification
(Starcoder2 model)T5ForTokenClassification
(T5 model)UMT5ForTokenClassification
(UMT5 model)XLMForTokenClassification
(XLM model)XLMRobertaForTokenClassification
(XLM-RoBERTa model)XLMRobertaXLForTokenClassification
(XLM-RoBERTa-XL model)XLNetForTokenClassification
(XLNet model)XmodForTokenClassification
(X-MOD model)YosoForTokenClassification
(YOSO model)
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForTokenClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
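Once loaded, the model can be run directly on tokenized text. A brief inference sketch (assumes network access to huggingface.co; note that google-bert/bert-base-cased ships no trained token-classification head, so the head is randomly initialized and the predicted label ids here are only illustrative of the API):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")

inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    # logits has shape (batch_size, sequence_length, num_labels)
    logits = model(**inputs).logits

# One prediction per input token (including special tokens like [CLS] and [SEP]).
predicted_ids = logits.argmax(dim=-1)
```

For meaningful predictions, load a checkpoint actually fine-tuned for token classification instead of a base model.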
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: TFAlbertForTokenClassification
(ALBERT model)CamembertConfig
configuration class: TFCamembertForTokenClassification
(CamemBERT model)DistilBertConfig
configuration class: TFDistilBertForTokenClassification
(DistilBERT model)ElectraConfig
configuration class: TFElectraForTokenClassification
(ELECTRA model)FlaubertConfig
configuration class: TFFlaubertForTokenClassification
(FlauBERT model)FunnelConfig
configuration class: TFFunnelForTokenClassification
(Funnel Transformer model)LayoutLMConfig
configuration class: TFLayoutLMForTokenClassification
(LayoutLM model)LayoutLMv3Config
configuration class: TFLayoutLMv3ForTokenClassification
(LayoutLMv3 model)LongformerConfig
configuration class: TFLongformerForTokenClassification
(Longformer model)MPNetConfig
configuration class: TFMPNetForTokenClassification
(MPNet model)MobileBertConfig
configuration class: TFMobileBertForTokenClassification
(MobileBERT model)RemBertConfig
configuration class: TFRemBertForTokenClassification
(RemBERT model)RoFormerConfig
configuration class: TFRoFormerForTokenClassification
(RoFormer model)RobertaConfig
configuration class: TFRobertaForTokenClassification
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: TFRobertaPreLayerNormForTokenClassification
(RoBERTa-PreLayerNorm model)XLMConfig
configuration class: TFXLMForTokenClassification
(XLM model)XLMRobertaConfig
configuration class: TFXLMRobertaForTokenClassification
(XLM-RoBERTa model)XLNetConfig
configuration class: TFXLNetForTokenClassification
(XLNet model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model in a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the
underlying model’s __init__ method (we assume all relevant updates to the configuration have
already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFAlbertForTokenClassification
(ALBERT model)TFCamembertForTokenClassification
(CamemBERT model)TFDistilBertForTokenClassification
(DistilBERT model)TFElectraForTokenClassification
(ELECTRA model)TFFlaubertForTokenClassification
(FlauBERT model)TFFunnelForTokenClassification
(Funnel Transformer model)TFLayoutLMForTokenClassification
(LayoutLM model)TFLayoutLMv3ForTokenClassification
(LayoutLMv3 model)TFLongformerForTokenClassification
(Longformer model)TFMobileBertForTokenClassification
(MobileBERT model)TFMPNetForTokenClassification
(MPNet model)TFRemBertForTokenClassification
(RemBERT model)TFRobertaForTokenClassification
(RoBERTa model)TFRobertaPreLayerNormForTokenClassification
(RoBERTa-PreLayerNorm model)TFRoFormerForTokenClassification
(RoFormer model)TFXLMForTokenClassification
(XLM model)TFXLMRobertaForTokenClassification
(XLM-RoBERTa model)TFXLNetForTokenClassification
(XLNet model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForTokenClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: FlaxAlbertForTokenClassification
(ALBERT model)BigBirdConfig
configuration class: FlaxBigBirdForTokenClassification
(BigBird model)DistilBertConfig
configuration class: FlaxDistilBertForTokenClassification
(DistilBERT model)ElectraConfig
configuration class: FlaxElectraForTokenClassification
(ELECTRA model)RoFormerConfig
configuration class: FlaxRoFormerForTokenClassification
(RoFormer model)RobertaConfig
configuration class: FlaxRobertaForTokenClassification
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: FlaxRobertaPreLayerNormForTokenClassification
(RoBERTa-PreLayerNorm model)XLMRobertaConfig
configuration class: FlaxXLMRobertaForTokenClassification
(XLM-RoBERTa model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model in a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
output_attentions=True). Behaves differently depending on whether a config is provided or
automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the
underlying model’s __init__ method (we assume all relevant updates to the configuration have
already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
FlaxAlbertForTokenClassification
(ALBERT model)FlaxBigBirdForTokenClassification
(BigBird model)FlaxDistilBertForTokenClassification
(DistilBERT model)FlaxElectraForTokenClassification
(ELECTRA model)FlaxRobertaForTokenClassification
(RoBERTa model)FlaxRobertaPreLayerNormForTokenClassification
(RoBERTa-PreLayerNorm model)FlaxRoFormerForTokenClassification
(RoFormer model)FlaxXLMRobertaForTokenClassification
(XLM-RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForTokenClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForTokenClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: AlbertForQuestionAnswering
(ALBERT model)BigBirdConfig
configuration class: BigBirdForQuestionAnswering
(BigBird model)BigBirdPegasusConfig
configuration class: BigBirdPegasusForQuestionAnswering
(BigBird-Pegasus model)BloomConfig
configuration class: BloomForQuestionAnswering
(BLOOM model)CamembertConfig
configuration class: CamembertForQuestionAnswering
(CamemBERT model)CanineConfig
configuration class: CanineForQuestionAnswering
(CANINE model)Data2VecTextConfig
configuration class: Data2VecTextForQuestionAnswering
(Data2VecText model)DiffLlamaConfig
configuration class: DiffLlamaForQuestionAnswering
(DiffLlama model)DistilBertConfig
configuration class: DistilBertForQuestionAnswering
(DistilBERT model)ElectraConfig
configuration class: ElectraForQuestionAnswering
(ELECTRA model)ErnieConfig
configuration class: ErnieForQuestionAnswering
(ERNIE model)ErnieMConfig
configuration class: ErnieMForQuestionAnswering
(ErnieM model)FNetConfig
configuration class: FNetForQuestionAnswering
(FNet model)FalconConfig
configuration class: FalconForQuestionAnswering
(Falcon model)FlaubertConfig
configuration class: FlaubertForQuestionAnsweringSimple
(FlauBERT model)FunnelConfig
configuration class: FunnelForQuestionAnswering
(Funnel Transformer model)GPT2Config
configuration class: GPT2ForQuestionAnswering
(OpenAI GPT-2 model)GPTJConfig
configuration class: GPTJForQuestionAnswering
(GPT-J model)GPTNeoConfig
configuration class: GPTNeoForQuestionAnswering
(GPT Neo model)GPTNeoXConfig
configuration class: GPTNeoXForQuestionAnswering
(GPT NeoX model)IBertConfig
configuration class: IBertForQuestionAnswering
(I-BERT model)LEDConfig
configuration class: LEDForQuestionAnswering
(LED model)LayoutLMv2Config
configuration class: LayoutLMv2ForQuestionAnswering
(LayoutLMv2 model)LayoutLMv3Config
configuration class: LayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)LiltConfig
configuration class: LiltForQuestionAnswering
(LiLT model)LlamaConfig
configuration class: LlamaForQuestionAnswering
(LLaMA model)LongformerConfig
configuration class: LongformerForQuestionAnswering
(Longformer model)LukeConfig
configuration class: LukeForQuestionAnswering
(LUKE model)LxmertConfig
configuration class: LxmertForQuestionAnswering
(LXMERT model)MBartConfig
configuration class: MBartForQuestionAnswering
(mBART model)MPNetConfig
configuration class: MPNetForQuestionAnswering
(MPNet model)MT5Config
configuration class: MT5ForQuestionAnswering
(MT5 model)MarkupLMConfig
configuration class: MarkupLMForQuestionAnswering
(MarkupLM model)MegaConfig
configuration class: MegaForQuestionAnswering
(MEGA model)MegatronBertConfig
configuration class: MegatronBertForQuestionAnswering
(Megatron-BERT model)MistralConfig
configuration class: MistralForQuestionAnswering
(Mistral model)MixtralConfig
configuration class: MixtralForQuestionAnswering
(Mixtral model)MobileBertConfig
configuration class: MobileBertForQuestionAnswering
(MobileBERT model)MptConfig
configuration class: MptForQuestionAnswering
(MPT model)MraConfig
configuration class: MraForQuestionAnswering
(MRA model)MvpConfig
configuration class: MvpForQuestionAnswering
(MVP model)NemotronConfig
configuration class: NemotronForQuestionAnswering
(Nemotron model)NezhaConfig
configuration class: NezhaForQuestionAnswering
(Nezha model)NystromformerConfig
configuration class: NystromformerForQuestionAnswering
(Nyströmformer model)OPTConfig
configuration class: OPTForQuestionAnswering
(OPT model)QDQBertConfig
configuration class: QDQBertForQuestionAnswering
(QDQBert model)Qwen2Config
configuration class: Qwen2ForQuestionAnswering
(Qwen2 model)Qwen2MoeConfig
configuration class: Qwen2MoeForQuestionAnswering
(Qwen2MoE model)ReformerConfig
configuration class: ReformerForQuestionAnswering
(Reformer model)RemBertConfig
configuration class: RemBertForQuestionAnswering
(RemBERT model)RoCBertConfig
configuration class: RoCBertForQuestionAnswering
(RoCBert model)RoFormerConfig
configuration class: RoFormerForQuestionAnswering
(RoFormer model)RobertaConfig
configuration class: RobertaForQuestionAnswering
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: RobertaPreLayerNormForQuestionAnswering
(RoBERTa-PreLayerNorm model)SplinterConfig
configuration class: SplinterForQuestionAnswering
(Splinter model)SqueezeBertConfig
configuration class: SqueezeBertForQuestionAnswering
(SqueezeBERT model)T5Config
configuration class: T5ForQuestionAnswering
(T5 model)UMT5Config
configuration class: UMT5ForQuestionAnswering
(UMT5 model)XLMConfig
configuration class: XLMForQuestionAnsweringSimple
(XLM model)XLMRobertaConfig
configuration class: XLMRobertaForQuestionAnswering
(XLM-RoBERTa model)XLMRobertaXLConfig
configuration class: XLMRobertaXLForQuestionAnswering
(XLM-RoBERTa-XL model)XLNetConfig
configuration class: XLNetForQuestionAnsweringSimple
(XLNet model)XmodConfig
configuration class: XmodForQuestionAnswering
(X-MOD model)YosoConfig
configuration class: YosoForQuestionAnswering
(YOSO model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
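As a minimal sketch of the `from_config()` path described above: the config is built locally here, so nothing is downloaded and the resulting weights are randomly initialized; the tiny layer sizes are illustrative only, not a real checkpoint.

```python
from transformers import AutoModelForQuestionAnswering, BertConfig

# Build a deliberately tiny BERT config locally -- the sizes are
# illustrative, not taken from any pretrained checkpoint.
config = BertConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
)

# from_config selects the class from config.model_type ("bert"), so this
# returns a randomly initialized BertForQuestionAnswering. No weights are
# downloaded or loaded; use from_pretrained() for that.
model = AutoModelForQuestionAnswering.from_config(config)
print(type(model).__name__)  # BertForQuestionAnswering
```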
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint into a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
AlbertForQuestionAnswering
(ALBERT model)BigBirdForQuestionAnswering
(BigBird model)BigBirdPegasusForQuestionAnswering
(BigBird-Pegasus model)BloomForQuestionAnswering
(BLOOM model)CamembertForQuestionAnswering
(CamemBERT model)CanineForQuestionAnswering
(CANINE model)Data2VecTextForQuestionAnswering
(Data2VecText model)DiffLlamaForQuestionAnswering
(DiffLlama model)DistilBertForQuestionAnswering
(DistilBERT model)ElectraForQuestionAnswering
(ELECTRA model)ErnieForQuestionAnswering
(ERNIE model)ErnieMForQuestionAnswering
(ErnieM model)FalconForQuestionAnswering
(Falcon model)FlaubertForQuestionAnsweringSimple
(FlauBERT model)FNetForQuestionAnswering
(FNet model)FunnelForQuestionAnswering
(Funnel Transformer model)GPT2ForQuestionAnswering
(OpenAI GPT-2 model)GPTNeoForQuestionAnswering
(GPT Neo model)GPTNeoXForQuestionAnswering
(GPT NeoX model)GPTJForQuestionAnswering
(GPT-J model)IBertForQuestionAnswering
(I-BERT model)LayoutLMv2ForQuestionAnswering
(LayoutLMv2 model)LayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)LEDForQuestionAnswering
(LED model)LiltForQuestionAnswering
(LiLT model)LlamaForQuestionAnswering
(LLaMA model)LongformerForQuestionAnswering
(Longformer model)LukeForQuestionAnswering
(LUKE model)LxmertForQuestionAnswering
(LXMERT model)MarkupLMForQuestionAnswering
(MarkupLM model)MBartForQuestionAnswering
(mBART model)MegaForQuestionAnswering
(MEGA model)MegatronBertForQuestionAnswering
(Megatron-BERT model)MistralForQuestionAnswering
(Mistral model)MixtralForQuestionAnswering
(Mixtral model)MobileBertForQuestionAnswering
(MobileBERT model)MPNetForQuestionAnswering
(MPNet model)MptForQuestionAnswering
(MPT model)MraForQuestionAnswering
(MRA model)MT5ForQuestionAnswering
(MT5 model)MvpForQuestionAnswering
(MVP model)NemotronForQuestionAnswering
(Nemotron model)NezhaForQuestionAnswering
(Nezha model)NystromformerForQuestionAnswering
(Nyströmformer model)OPTForQuestionAnswering
(OPT model)QDQBertForQuestionAnswering
(QDQBert model)Qwen2ForQuestionAnswering
(Qwen2 model)Qwen2MoeForQuestionAnswering
(Qwen2MoE model)ReformerForQuestionAnswering
(Reformer model)RemBertForQuestionAnswering
(RemBERT model)RobertaForQuestionAnswering
(RoBERTa model)RobertaPreLayerNormForQuestionAnswering
(RoBERTa-PreLayerNorm model)RoCBertForQuestionAnswering
(RoCBert model)RoFormerForQuestionAnswering
(RoFormer model)SplinterForQuestionAnswering
(Splinter model)SqueezeBertForQuestionAnswering
(SqueezeBERT model)T5ForQuestionAnswering
(T5 model)UMT5ForQuestionAnswering
(UMT5 model)XLMForQuestionAnsweringSimple
(XLM model)XLMRobertaForQuestionAnswering
(XLM-RoBERTa model)XLMRobertaXLForQuestionAnswering
(XLM-RoBERTa-XL model)XLNetForQuestionAnsweringSimple
(XLNet model)XmodForQuestionAnswering
(X-MOD model)YosoForQuestionAnswering
(YOSO model)The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForQuestionAnswering.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: TFAlbertForQuestionAnswering
(ALBERT model)CamembertConfig
configuration class: TFCamembertForQuestionAnswering
(CamemBERT model)DistilBertConfig
configuration class: TFDistilBertForQuestionAnswering
(DistilBERT model)ElectraConfig
configuration class: TFElectraForQuestionAnswering
(ELECTRA model)FlaubertConfig
configuration class: TFFlaubertForQuestionAnsweringSimple
(FlauBERT model)FunnelConfig
configuration class: TFFunnelForQuestionAnswering
(Funnel Transformer model)GPTJConfig
configuration class: TFGPTJForQuestionAnswering
(GPT-J model)LayoutLMv3Config
configuration class: TFLayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)LongformerConfig
configuration class: TFLongformerForQuestionAnswering
(Longformer model)MPNetConfig
configuration class: TFMPNetForQuestionAnswering
(MPNet model)MobileBertConfig
configuration class: TFMobileBertForQuestionAnswering
(MobileBERT model)RemBertConfig
configuration class: TFRemBertForQuestionAnswering
(RemBERT model)RoFormerConfig
configuration class: TFRoFormerForQuestionAnswering
(RoFormer model)RobertaConfig
configuration class: TFRobertaForQuestionAnswering
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: TFRobertaPreLayerNormForQuestionAnswering
(RoBERTa-PreLayerNorm model)XLMConfig
configuration class: TFXLMForQuestionAnsweringSimple
(XLM model)XLMRobertaConfig
configuration class: TFXLMRobertaForQuestionAnswering
(XLM-RoBERTa model)XLNetConfig
configuration class: TFXLNetForQuestionAnsweringSimple
(XLNet model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFAlbertForQuestionAnswering
(ALBERT model)TFCamembertForQuestionAnswering
(CamemBERT model)TFDistilBertForQuestionAnswering
(DistilBERT model)TFElectraForQuestionAnswering
(ELECTRA model)TFFlaubertForQuestionAnsweringSimple
(FlauBERT model)TFFunnelForQuestionAnswering
(Funnel Transformer model)TFGPTJForQuestionAnswering
(GPT-J model)TFLayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)TFLongformerForQuestionAnswering
(Longformer model)TFMobileBertForQuestionAnswering
(MobileBERT model)TFMPNetForQuestionAnswering
(MPNet model)TFRemBertForQuestionAnswering
(RemBERT model)TFRobertaForQuestionAnswering
(RoBERTa model)TFRobertaPreLayerNormForQuestionAnswering
(RoBERTa-PreLayerNorm model)TFRoFormerForQuestionAnswering
(RoFormer model)TFXLMForQuestionAnsweringSimple
(XLM model)TFXLMRobertaForQuestionAnswering
(XLM-RoBERTa model)TFXLNetForQuestionAnsweringSimple
(XLNet model)Examples:
>>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForQuestionAnswering.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
AlbertConfig
configuration class: FlaxAlbertForQuestionAnswering
(ALBERT model)BigBirdConfig
configuration class: FlaxBigBirdForQuestionAnswering
(BigBird model)DistilBertConfig
configuration class: FlaxDistilBertForQuestionAnswering
(DistilBERT model)ElectraConfig
configuration class: FlaxElectraForQuestionAnswering
(ELECTRA model)MBartConfig
configuration class: FlaxMBartForQuestionAnswering
(mBART model)RoFormerConfig
configuration class: FlaxRoFormerForQuestionAnswering
(RoFormer model)RobertaConfig
configuration class: FlaxRobertaForQuestionAnswering
(RoBERTa model)RobertaPreLayerNormConfig
configuration class: FlaxRobertaPreLayerNormForQuestionAnswering
(RoBERTa-PreLayerNorm model)XLMRobertaConfig
configuration class: FlaxXLMRobertaForQuestionAnswering
(XLM-RoBERTa model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a Flax model
using the provided conversion scripts and loading the Flax model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
FlaxAlbertForQuestionAnswering
(ALBERT model)FlaxBigBirdForQuestionAnswering
(BigBird model)FlaxDistilBertForQuestionAnswering
(DistilBERT model)FlaxElectraForQuestionAnswering
(ELECTRA model)FlaxMBartForQuestionAnswering
(mBART model)FlaxRobertaForQuestionAnswering
(RoBERTa model)FlaxRobertaPreLayerNormForQuestionAnswering
(RoBERTa-PreLayerNorm model)FlaxRoFormerForQuestionAnswering
(RoFormer model)FlaxXLMRobertaForQuestionAnswering
(XLM-RoBERTa model)Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
The following auto classes are available for the computer vision tasks below.
This is a generic model class that will be instantiated as one of the model classes of the library (with a depth estimation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
DPTConfig
configuration class: DPTForDepthEstimation
(DPT model)DepthAnythingConfig
configuration class: DepthAnythingForDepthEstimation
(Depth Anything model)GLPNConfig
configuration class: GLPNForDepthEstimation
(GLPN model)ZoeDepthConfig
configuration class: ZoeDepthForDepthEstimation
(ZoeDepth model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a depth estimation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
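The same `from_config()` route applies here; a sketch using the default `GLPNConfig`, one of the mapped architectures. The config is constructed locally, so no download occurs and the weights are randomly initialized.

```python
from transformers import AutoModelForDepthEstimation, GLPNConfig

# A locally constructed GLPN config; from_config maps it (via
# config.model_type) to GLPNForDepthEstimation with random weights.
config = GLPNConfig()
model = AutoModelForDepthEstimation.from_config(config)
print(type(model).__name__)  # GLPNForDepthEstimation
```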
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint into a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a depth estimation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
DepthAnythingForDepthEstimation
(Depth Anything model)DPTForDepthEstimation
(DPT model)GLPNForDepthEstimation
(GLPN model)ZoeDepthForDepthEstimation
(ZoeDepth model)The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForDepthEstimation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large")
>>> # Update configuration during loading
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large", output_attentions=True)
>>> model.config.output_attentions
True
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
BeitConfig
configuration class: BeitForImageClassification
(BEiT model)BitConfig
configuration class: BitForImageClassification
(BiT model)ConvNextConfig
configuration class: ConvNextForImageClassification
(ConvNeXT model)ConvNextV2Config
configuration class: ConvNextV2ForImageClassification
(ConvNeXTV2 model)CvtConfig
configuration class: CvtForImageClassification
(CvT model)
Data2VecVisionConfig configuration class: Data2VecVisionForImageClassification (Data2VecVision model)
DeiTConfig configuration class: DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
DinatConfig configuration class: DinatForImageClassification (DiNAT model)
Dinov2Config configuration class: Dinov2ForImageClassification (DINOv2 model)
Dinov2WithRegistersConfig configuration class: Dinov2WithRegistersForImageClassification (DINOv2 with Registers model)
EfficientFormerConfig configuration class: EfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model)
EfficientNetConfig configuration class: EfficientNetForImageClassification (EfficientNet model)
FocalNetConfig configuration class: FocalNetForImageClassification (FocalNet model)
HieraConfig configuration class: HieraForImageClassification (Hiera model)
IJepaConfig configuration class: IJepaForImageClassification (I-JEPA model)
ImageGPTConfig configuration class: ImageGPTForImageClassification (ImageGPT model)
LevitConfig configuration class: LevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model)
MobileNetV1Config configuration class: MobileNetV1ForImageClassification (MobileNetV1 model)
MobileNetV2Config configuration class: MobileNetV2ForImageClassification (MobileNetV2 model)
MobileViTConfig configuration class: MobileViTForImageClassification (MobileViT model)
MobileViTV2Config configuration class: MobileViTV2ForImageClassification (MobileViTV2 model)
NatConfig configuration class: NatForImageClassification (NAT model)
PerceiverConfig configuration class: PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model)
PoolFormerConfig configuration class: PoolFormerForImageClassification (PoolFormer model)
PvtConfig configuration class: PvtForImageClassification (PVT model)
PvtV2Config configuration class: PvtV2ForImageClassification (PVTv2 model)
RegNetConfig configuration class: RegNetForImageClassification (RegNet model)
ResNetConfig configuration class: ResNetForImageClassification (ResNet model)
SegformerConfig configuration class: SegformerForImageClassification (SegFormer model)
SiglipConfig configuration class: SiglipForImageClassification (SigLIP model)
SwiftFormerConfig configuration class: SwiftFormerForImageClassification (SwiftFormer model)
TextNetConfig configuration class: TextNetForImageClassification (TextNet model)
TimmWrapperConfig configuration class: TimmWrapperForImageClassification (TimmWrapperModel model)
VanConfig configuration class: VanForImageClassification (VAN model)
ViTHybridConfig configuration class: ViTHybridForImageClassification (ViT Hybrid model)
ViTMSNConfig configuration class: ViTMSNForImageClassification (ViTMSN model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
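As a minimal sketch of the distinction (assuming transformers with PyTorch is installed), from_config builds the architecture selected by the configuration class, with freshly initialized weights and no download:

```python
from transformers import ResNetConfig, AutoModelForImageClassification

# Build a configuration locally; any config class from the mapping above
# selects the matching architecture.
config = ResNetConfig()

# from_config instantiates the architecture with randomly initialized
# weights -- no pretrained weights are loaded.
model = AutoModelForImageClassification.from_config(config)
print(type(model).__name__)  # ResNetForImageClassification
```

Use from_pretrained() instead when you also want the trained weights.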
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
BeitForImageClassification (BEiT model)
BitForImageClassification (BiT model)
ConvNextForImageClassification (ConvNeXT model)
ConvNextV2ForImageClassification (ConvNeXTV2 model)
CvtForImageClassification (CvT model)
Data2VecVisionForImageClassification (Data2VecVision model)
DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
DinatForImageClassification (DiNAT model)
Dinov2ForImageClassification (DINOv2 model)
Dinov2WithRegistersForImageClassification (DINOv2 with Registers model)
EfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model)
EfficientNetForImageClassification (EfficientNet model)
FocalNetForImageClassification (FocalNet model)
HieraForImageClassification (Hiera model)
IJepaForImageClassification (I-JEPA model)
ImageGPTForImageClassification (ImageGPT model)
LevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model)
MobileNetV1ForImageClassification (MobileNetV1 model)
MobileNetV2ForImageClassification (MobileNetV2 model)
MobileViTForImageClassification (MobileViT model)
MobileViTV2ForImageClassification (MobileViTV2 model)
NatForImageClassification (NAT model)
PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model)
PoolFormerForImageClassification (PoolFormer model)
PvtForImageClassification (PVT model)
PvtV2ForImageClassification (PVTv2 model)
RegNetForImageClassification (RegNet model)
ResNetForImageClassification (ResNet model)
SegformerForImageClassification (SegFormer model)
SiglipForImageClassification (SigLIP model)
SwiftFormerForImageClassification (SwiftFormer model)
TextNetForImageClassification (TextNet model)
TimmWrapperForImageClassification (TimmWrapperModel model)
VanForImageClassification (VAN model)
ViTHybridForImageClassification (ViT Hybrid model)
ViTMSNForImageClassification (ViTMSN model)
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
>>> # Update configuration during loading
>>> model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForImageClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
ConvNextConfig configuration class: TFConvNextForImageClassification (ConvNeXT model)
ConvNextV2Config configuration class: TFConvNextV2ForImageClassification (ConvNeXTV2 model)
CvtConfig configuration class: TFCvtForImageClassification (CvT model)
Data2VecVisionConfig configuration class: TFData2VecVisionForImageClassification (Data2VecVision model)
DeiTConfig configuration class: TFDeiTForImageClassification or TFDeiTForImageClassificationWithTeacher (DeiT model)
EfficientFormerConfig configuration class: TFEfficientFormerForImageClassification or TFEfficientFormerForImageClassificationWithTeacher (EfficientFormer model)
MobileViTConfig configuration class: TFMobileViTForImageClassification (MobileViT model)
RegNetConfig configuration class: TFRegNetForImageClassification (RegNet model)
ResNetConfig configuration class: TFResNetForImageClassification (ResNet model)
SegformerConfig configuration class: TFSegformerForImageClassification (SegFormer model)
SwiftFormerConfig configuration class: TFSwiftFormerForImageClassification (SwiftFormer model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
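A minimal sketch (assuming transformers with TensorFlow is installed): the configuration class alone selects the TF architecture, and the weights start out randomly initialized:

```python
from transformers import ResNetConfig, TFAutoModelForImageClassification

# from_config builds the TF architecture from the config class, with
# randomly initialized weights -- no checkpoint is downloaded.
config = ResNetConfig()
model = TFAutoModelForImageClassification.from_config(config)
print(type(model).__name__)  # TFResNetForImageClassification
```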
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFConvNextForImageClassification (ConvNeXT model)
TFConvNextV2ForImageClassification (ConvNeXTV2 model)
TFCvtForImageClassification (CvT model)
TFData2VecVisionForImageClassification (Data2VecVision model)
TFDeiTForImageClassification or TFDeiTForImageClassificationWithTeacher (DeiT model)
TFEfficientFormerForImageClassification or TFEfficientFormerForImageClassificationWithTeacher (EfficientFormer model)
TFMobileViTForImageClassification (MobileViT model)
TFRegNetForImageClassification (RegNet model)
TFResNetForImageClassification (ResNet model)
TFSegformerForImageClassification (SegFormer model)
TFSwiftFormerForImageClassification (SwiftFormer model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
>>> # Update configuration during loading
>>> model = TFAutoModelForImageClassification.from_pretrained("microsoft/resnet-50", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForImageClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
BeitConfig configuration class: FlaxBeitForImageClassification (BEiT model)
Dinov2Config configuration class: FlaxDinov2ForImageClassification (DINOv2 model)
RegNetConfig configuration class: FlaxRegNetForImageClassification (RegNet model)
ResNetConfig configuration class: FlaxResNetForImageClassification (ResNet model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
FlaxBeitForImageClassification (BEiT model)
FlaxDinov2ForImageClassification (DINOv2 model)
FlaxRegNetForImageClassification (RegNet model)
FlaxResNetForImageClassification (ResNet model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForImageClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForImageClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForImageClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a video classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
VideoMAEConfig configuration class: VideoMAEForVideoClassification (VideoMAE model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a video classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
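A minimal sketch (assuming transformers with PyTorch is installed). The num_labels value below is an arbitrary example, passed through the config like any other attribute:

```python
from transformers import VideoMAEConfig, AutoModelForVideoClassification

# num_labels=2 is a hypothetical example value; it is stored on the config
# and sizes the classification head.
config = VideoMAEConfig(num_labels=2)

# from_config creates the architecture with random weights; no download.
model = AutoModelForVideoClassification.from_config(config)
print(type(model).__name__)  # VideoMAEForVideoClassification
```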
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a video classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
VideoMAEForVideoClassification (VideoMAE model)
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForVideoClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
>>> # Update configuration during loading
>>> model = AutoModelForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForVideoClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
DeiTConfig configuration class: DeiTForMaskedImageModeling (DeiT model)
FocalNetConfig configuration class: FocalNetForMaskedImageModeling (FocalNet model)
attn_implementation (str, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
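A minimal sketch (assuming transformers with PyTorch is installed) showing how the configuration class picks the masked-image-modeling architecture:

```python
from transformers import DeiTConfig, AutoModelForMaskedImageModeling

# The DeiT config maps to DeiTForMaskedImageModeling; weights are
# randomly initialized, since from_config never loads a checkpoint.
config = DeiTConfig()
model = AutoModelForMaskedImageModeling.from_config(config)
print(type(model).__name__)  # DeiTForMaskedImageModeling
```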
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
DeiTForMaskedImageModeling (DeiT model)
FocalNetForMaskedImageModeling (FocalNet model)
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedImageModeling.from_pretrained("facebook/deit-base-patch16-224")
>>> # Update configuration during loading
>>> model = AutoModelForMaskedImageModeling.from_pretrained("facebook/deit-base-patch16-224", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMaskedImageModeling.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
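The kwargs-splitting behavior described above (keys matching configuration attributes override the config, leftovers go to the model's __init__) can be sketched in plain Python. The class and helper names below are hypothetical stand-ins, not the transformers implementation:

```python
# Illustrative sketch of how from_pretrained partitions **kwargs when no
# config is passed: keys matching configuration attributes update the config,
# the rest are forwarded to the model's __init__. Hypothetical classes only.
class ToyConfig:
    def __init__(self):
        self.output_attentions = False
        self.hidden_size = 16

def split_kwargs(config, **kwargs):
    config_kwargs, model_kwargs = {}, {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)   # override the config attribute
            config_kwargs[key] = value
        else:
            model_kwargs[key] = value     # left for the model's __init__
    return config_kwargs, model_kwargs

config = ToyConfig()
used, leftover = split_kwargs(config, output_attentions=True, device_map="auto")
print(config.output_attentions)  # True
print(leftover)                  # {'device_map': 'auto'}
```

This mirrors why `model.config.output_attentions` is `True` in the example above: the key matched a configuration attribute, so it updated the config rather than reaching the model's constructor.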
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- DeiTConfig configuration class: TFDeiTForMaskedImageModeling (DeiT model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFDeiTForMaskedImageModeling (DeiT model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForMaskedImageModeling
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
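Class selection as described above (the config's model_type when available, otherwise pattern matching on the checkpoint name or path) can be mimicked with a plain mapping. The registry below is a hypothetical sketch, not the real mapping transformers maintains internally:

```python
# Sketch of auto-class resolution: prefer the config's model_type, otherwise
# fall back to substring matching on the checkpoint name/path. The mapping
# and class-name strings are illustrative only.
MODEL_MAPPING = {
    "deit": "TFDeiTForMaskedImageModeling",
}

def resolve_class(name_or_path, model_type=None):
    if model_type is not None:              # config was provided or loadable
        return MODEL_MAPPING[model_type]
    for key, cls in MODEL_MAPPING.items():  # pattern-match on the path
        if key in name_or_path.lower():
            return cls
    raise ValueError(f"Could not infer model type from {name_or_path!r}")

print(resolve_class("facebook/deit-base"))  # TFDeiTForMaskedImageModeling
```

The real library checks the config first precisely because path matching is a heuristic; a checkpoint name need not contain the architecture name at all.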
This is a generic model class that will be instantiated as one of the model classes of the library (with an object detection head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- ConditionalDetrConfig configuration class: ConditionalDetrForObjectDetection (Conditional DETR model)
- DeformableDetrConfig configuration class: DeformableDetrForObjectDetection (Deformable DETR model)
- DetaConfig configuration class: DetaForObjectDetection (DETA model)
- DetrConfig configuration class: DetrForObjectDetection (DETR model)
- RTDetrConfig configuration class: RTDetrForObjectDetection (RT-DETR model)
- TableTransformerConfig configuration class: TableTransformerForObjectDetection (Table Transformer model)
- YolosConfig configuration class: YolosForObjectDetection (YOLOS model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with an object detection head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with an object detection head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
ConditionalDetrForObjectDetection (Conditional DETR model)
DeformableDetrForObjectDetection (Deformable DETR model)
DetaForObjectDetection (DETA model)
DetrForObjectDetection (DETR model)
RTDetrForObjectDetection (RT-DETR model)
TableTransformerForObjectDetection (Table Transformer model)
YolosForObjectDetection (YOLOS model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForObjectDetection
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForObjectDetection.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForObjectDetection.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForObjectDetection.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
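The register() mechanism described at the top of this page (adding a custom NewModelConfig/NewModel pair to the auto classes) boils down to keeping two lookups in sync, one from model_type string to config class and one from config class to model class. The snippet below is a simplified stand-in for AutoConfig.register/AutoModel.register, not the library's code:

```python
# Minimal sketch of the auto-class registration pattern. Illustrative only;
# the real AutoConfig/AutoModel registries perform more validation.
CONFIG_REGISTRY = {}   # model_type string -> config class
MODEL_REGISTRY = {}    # config class -> model class

class NewModelConfig:
    model_type = "new-model"

class NewModel:
    config_class = NewModelConfig
    def __init__(self, config):
        self.config = config

def register(model_type, config_cls, model_cls):
    # Mirror the consistency checks the docs ask for: model_type must match
    # the registration key, and the model's config_class must match the
    # config class being registered.
    assert config_cls.model_type == model_type
    assert model_cls.config_class is config_cls
    CONFIG_REGISTRY[model_type] = config_cls
    MODEL_REGISTRY[config_cls] = model_cls

register("new-model", NewModelConfig, NewModel)
model = MODEL_REGISTRY[CONFIG_REGISTRY["new-model"]](NewModelConfig())
```

This is why the documentation insists that model_type and config_class agree with the registration arguments: both dictionaries are keyed on them, and a mismatch would make later lookups fail.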
This is a generic model class that will be instantiated as one of the model classes of the library (with an image segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- DetrConfig configuration class: DetrForSegmentation (DETR model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with an image segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with an image segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
DetrForSegmentation (DETR model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForImageSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageSegmentation.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForImageSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForImageSegmentation.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
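The eval-by-default behavior repeated above can be pictured with a toy module. Real models use torch.nn.Module.eval()/train(), which toggle a training flag that layers like dropout consult, much like this simplified stand-in (not the actual torch implementation):

```python
# Toy illustration of the training-mode flag behind model.eval()/model.train():
# dropout is only active in training mode. Not the real torch.nn.Module.
import random

class ToyModule:
    def __init__(self, p_drop=0.5):
        self.training = False   # from_pretrained returns models in eval mode
        self.p_drop = p_drop

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

    def dropout(self, x):
        if not self.training:   # deactivated in eval mode
            return x
        # inverted-dropout scaling while training
        return 0.0 if random.random() < self.p_drop else x / (1 - self.p_drop)

m = ToyModule()
print(m.dropout(1.0))  # 1.0, dropout is a no-op in eval mode
m.train()              # must be called before fine-tuning
```

This is why you should call model.train() before fine-tuning a model returned by from_pretrained(): otherwise regularization layers stay disabled.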
This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
BeitConfig
configuration class: BeitForSemanticSegmentation
(BEiT model)DPTConfig
configuration class: DPTForSemanticSegmentation
(DPT model)Data2VecVisionConfig
configuration class: Data2VecVisionForSemanticSegmentation
(Data2VecVision model)MobileNetV2Config
configuration class: MobileNetV2ForSemanticSegmentation
(MobileNetV2 model)MobileViTConfig
configuration class: MobileViTForSemanticSegmentation
(MobileViT model)MobileViTV2Config
configuration class: MobileViTV2ForSemanticSegmentation
(MobileViTV2 model)SegformerConfig
configuration class: SegformerForSemanticSegmentation
(SegFormer model)UperNetConfig
configuration class: UperNetForSemanticSegmentation
(UPerNet model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation. Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
BeitForSemanticSegmentation (BEiT model)
Data2VecVisionForSemanticSegmentation (Data2VecVision model)
DPTForSemanticSegmentation (DPT model)
MobileNetV2ForSemanticSegmentation (MobileNetV2 model)
MobileViTForSemanticSegmentation (MobileViT model)
MobileViTV2ForSemanticSegmentation (MobileViTV2 model)
SegformerForSemanticSegmentation (SegFormer model)
UperNetForSemanticSegmentation (UPerNet model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSemanticSegmentation.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
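The repeated note that from_config() only affects the model's configuration and never loads weights can be made concrete with a sketch. ToyConfig and ToyModel are hypothetical stand-ins, not transformers classes:

```python
# Sketch of the from_config vs from_pretrained distinction: from_config only
# uses the configuration to build the architecture (weights are freshly
# initialized), while from_pretrained also fills in stored weights.
class ToyConfig:
    hidden_size = 4

class ToyModel:
    def __init__(self, config):
        # fresh-initialization stand-in: zeros sized from the config
        self.weights = [0.0] * config.hidden_size

    @classmethod
    def from_config(cls, config):
        return cls(config)                    # architecture only, no weights

    @classmethod
    def from_pretrained(cls, stored_weights, config):
        model = cls(config)
        model.weights = list(stored_weights)  # load the saved weights
        return model

fresh = ToyModel.from_config(ToyConfig())
loaded = ToyModel.from_pretrained([0.1, 0.2, 0.3, 0.4], ToyConfig())
print(fresh.weights)   # [0.0, 0.0, 0.0, 0.0]
print(loaded.weights)  # [0.1, 0.2, 0.3, 0.4]
```

In other words, from_config() gives you the right architecture with untrained parameters; only from_pretrained() restores the checkpoint's values.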
This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- Data2VecVisionConfig configuration class: TFData2VecVisionForSemanticSegmentation (Data2VecVision model)
- MobileViTConfig configuration class: TFMobileViTForSemanticSegmentation (MobileViT model)
- SegformerConfig configuration class: TFSegformerForSemanticSegmentation (SegFormer model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git. kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFData2VecVisionForSemanticSegmentation (Data2VecVision model)
TFMobileViTForSemanticSegmentation (MobileViT model)
TFSegformerForSemanticSegmentation (SegFormer model)
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
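The attn_implementation default described in the parameters (SDPA when available on torch>=2.1.1, otherwise the manual eager implementation) amounts to a version gate. The helper below is a made-up illustration of that rule, not a transformers function:

```python
# Sketch of the documented default for attn_implementation: prefer "sdpa"
# when the installed torch is at least 2.1.1 and SDPA is available, otherwise
# fall back to the manual "eager" implementation. Hypothetical helper.
def pick_attn_implementation(torch_version, sdpa_available, requested=None):
    if requested is not None:          # an explicit user choice always wins
        return requested
    major, minor, patch = (int(p) for p in torch_version.split("."))
    if sdpa_available and (major, minor, patch) >= (2, 1, 1):
        return "sdpa"
    return "eager"

print(pick_attn_implementation("2.2.0", sdpa_available=True))   # sdpa
print(pick_attn_implementation("2.0.1", sdpa_available=True))   # eager
print(pick_attn_implementation("2.2.0", True, "flash_attention_2"))  # flash_attention_2
```

Passing attn_implementation="flash_attention_2" still requires the flash-attention package to be installed; the gate above only models the default selection.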
This is a generic model class that will be instantiated as one of the model classes of the library (with an instance segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- MaskFormerConfig configuration class: MaskFormerForInstanceSegmentation (MaskFormer model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.
Instantiates one of the model classes of the library (with an instance segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") —
The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a instance segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
MaskFormerForInstanceSegmentation (MaskFormer model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
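The name-based fallback mentioned above (used only when the config carries no usable model_type) amounts to substring matching on the name or path. A rough sketch of that idea, with illustrative patterns rather than the exact ones transformers uses:

```python
# Fallback selection by pattern matching on pretrained_model_name_or_path.
# The pattern table below is illustrative only.

PATTERNS = {
    "maskformer": "MaskFormerForInstanceSegmentation",
}

def guess_model_class(pretrained_model_name_or_path: str) -> str:
    name = pretrained_model_name_or_path.lower()
    for pattern, model_class in PATTERNS.items():
        if pattern in name:
            return model_class
    raise ValueError(
        f"Could not infer a model class from {pretrained_model_name_or_path!r}"
    )

print(guess_model_class("facebook/maskformer-swin-base-coco"))
# MaskFormerForInstanceSegmentation
```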
Examples:
>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForInstanceSegmentation.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForInstanceSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForInstanceSegmentation.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a universal image segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  DetrConfig configuration class: DetrForSegmentation (DETR model)
  Mask2FormerConfig configuration class: Mask2FormerForUniversalSegmentation (Mask2Former model)
  MaskFormerConfig configuration class: MaskFormerForInstanceSegmentation (MaskFormer model)
  OneFormerConfig configuration class: OneFormerForUniversalSegmentation (OneFormer model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a universal image segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a universal image segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
DetrForSegmentation (DETR model)
Mask2FormerForUniversalSegmentation (Mask2Former model)
MaskFormerForInstanceSegmentation (MaskFormer model)
OneFormerForUniversalSegmentation (OneFormer model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForUniversalSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForUniversalSegmentation.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForUniversalSegmentation.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForUniversalSegmentation.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  AlignConfig configuration class: AlignModel (ALIGN model)
  CLIPSegConfig configuration class: CLIPSegModel (CLIPSeg model)
  ChineseCLIPConfig configuration class: ChineseCLIPModel (Chinese-CLIP model)
  SiglipConfig configuration class: SiglipModel (SigLIP model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
AlignModel (ALIGN model)
ChineseCLIPModel (Chinese-CLIP model)
CLIPSegModel (CLIPSeg model)
SiglipModel (SigLIP model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForZeroShotImageClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, TFAutoModelForZeroShotImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForZeroShotImageClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForZeroShotImageClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot object detection head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  GroundingDinoConfig configuration class: GroundingDinoForObjectDetection (Grounding DINO model)
  OmDetTurboConfig configuration class: OmDetTurboForObjectDetection (OmDet-Turbo model)
  OwlViTConfig configuration class: OwlViTForObjectDetection (OWL-ViT model)
  Owlv2Config configuration class: Owlv2ForObjectDetection (OWLv2 model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a zero-shot object detection head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a zero-shot object detection head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
GroundingDinoForObjectDetection (Grounding DINO model)
OmDetTurboForObjectDetection (OmDet-Turbo model)
Owlv2ForObjectDetection (OWLv2 model)
OwlViTForObjectDetection (OWL-ViT model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
The following auto classes are available for the audio tasks below.
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  ASTConfig configuration class: ASTForAudioClassification (Audio Spectrogram Transformer model)
  Data2VecAudioConfig configuration class: Data2VecAudioForSequenceClassification (Data2VecAudio model)
  HubertConfig configuration class: HubertForSequenceClassification (Hubert model)
  SEWConfig configuration class: SEWForSequenceClassification (SEW model)
  SEWDConfig configuration class: SEWDForSequenceClassification (SEW-D model)
  UniSpeechConfig configuration class: UniSpeechForSequenceClassification (UniSpeech model)
  UniSpeechSatConfig configuration class: UniSpeechSatForSequenceClassification (UniSpeechSat model)
  Wav2Vec2BertConfig configuration class: Wav2Vec2BertForSequenceClassification (Wav2Vec2-BERT model)
  Wav2Vec2Config configuration class: Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
  Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model)
  WavLMConfig configuration class: WavLMForSequenceClassification (WavLM model)
attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an audio classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
  If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- ASTForAudioClassification (Audio Spectrogram Transformer model)
- Data2VecAudioForSequenceClassification (Data2VecAudio model)
- HubertForSequenceClassification (Hubert model)
- SEWForSequenceClassification (SEW model)
- SEWDForSequenceClassification (SEW-D model)
- UniSpeechForSequenceClassification (UniSpeech model)
- UniSpeechSatForSequenceClassification (UniSpeechSat model)
- Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
- Wav2Vec2BertForSequenceClassification (Wav2Vec2-BERT model)
- Wav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model)
- WavLMForSequenceClassification (WavLM model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForAudioClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
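The doctest above reuses a BERT checkpoint only to illustrate the call pattern. With a checkpoint that actually carries an audio classification head, from_pretrained resolves the concrete class from the checkpoint's configuration. A minimal sketch, assuming the public superb/wav2vec2-base-superb-ks keyword-spotting checkpoint and an installed torch/transformers:

```python
from transformers import AutoModelForAudioClassification

# Assumed public checkpoint fine-tuned for keyword spotting; its config.json
# declares a Wav2Vec2 architecture, so the auto class resolves to
# Wav2Vec2ForSequenceClassification.
model = AutoModelForAudioClassification.from_pretrained(
    "superb/wav2vec2-base-superb-ks"
)
print(type(model).__name__)
```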
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

- Wav2Vec2Config configuration class: TFWav2Vec2ForSequenceClassification (Wav2Vec2 model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an audio classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:

- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.

model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- TFWav2Vec2ForSequenceClassification (Wav2Vec2 model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForAudioClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForAudioClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForAudioClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio frame (token) classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

- Data2VecAudioConfig configuration class: Data2VecAudioForAudioFrameClassification (Data2VecAudio model)
- UniSpeechSatConfig configuration class: UniSpeechSatForAudioFrameClassification (UniSpeechSat model)
- Wav2Vec2BertConfig configuration class: Wav2Vec2BertForAudioFrameClassification (Wav2Vec2-BERT model)
- Wav2Vec2Config configuration class: Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model)
- Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model)
- WavLMConfig configuration class: WavLMForAudioFrameClassification (WavLM model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with an audio frame (token) classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:

- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an audio frame (token) classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- Data2VecAudioForAudioFrameClassification (Data2VecAudio model)
- UniSpeechSatForAudioFrameClassification (UniSpeechSat model)
- Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model)
- Wav2Vec2BertForAudioFrameClassification (Wav2Vec2-BERT model)
- Wav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model)
- WavLMForAudioFrameClassification (WavLM model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioFrameClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioFrameClassification.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForAudioFrameClassification.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForAudioFrameClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

- Data2VecAudioConfig configuration class: Data2VecAudioForCTC (Data2VecAudio model)
- HubertConfig configuration class: HubertForCTC (Hubert model)
- MCTCTConfig configuration class: MCTCTForCTC (M-CTC-T model)
- SEWConfig configuration class: SEWForCTC (SEW model)
- SEWDConfig configuration class: SEWDForCTC (SEW-D model)
- UniSpeechConfig configuration class: UniSpeechForCTC (UniSpeech model)
- UniSpeechSatConfig configuration class: UniSpeechSatForCTC (UniSpeechSat model)
- Wav2Vec2BertConfig configuration class: Wav2Vec2BertForCTC (Wav2Vec2-BERT model)
- Wav2Vec2Config configuration class: Wav2Vec2ForCTC (Wav2Vec2 model)
- Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model)
- WavLMConfig configuration class: WavLMForCTC (WavLM model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
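As a concrete illustration of the note above (a sketch, assuming the public facebook/wav2vec2-base-960h checkpoint and an installed torch/transformers): from_config only needs the configuration, so the resulting model has the mapped architecture but freshly initialized weights.

```python
from transformers import AutoConfig, AutoModelForCTC

# Only config.json is fetched here -- no model weights are downloaded.
config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")

# Builds Wav2Vec2ForCTC (the class mapped to Wav2Vec2Config) with randomly
# initialized weights; use from_pretrained() instead to load trained weights.
model = AutoModelForCTC.from_config(config)
print(type(model).__name__)
```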
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:

- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- Data2VecAudioForCTC (Data2VecAudio model)
- HubertForCTC (Hubert model)
- MCTCTForCTC (M-CTC-T model)
- SEWForCTC (SEW model)
- SEWDForCTC (SEW-D model)
- UniSpeechForCTC (UniSpeech model)
- UniSpeechSatForCTC (UniSpeechSat model)
- Wav2Vec2ForCTC (Wav2Vec2 model)
- Wav2Vec2BertForCTC (Wav2Vec2-BERT model)
- Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model)
- WavLMForCTC (WavLM model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForCTC
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCTC.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForCTC.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForCTC.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

- Pop2PianoConfig configuration class: Pop2PianoForConditionalGeneration (Pop2Piano model)
- SeamlessM4TConfig configuration class: SeamlessM4TForSpeechToText (SeamlessM4T model)
- SeamlessM4Tv2Config configuration class: SeamlessM4Tv2ForSpeechToText (SeamlessM4Tv2 model)
- Speech2TextConfig configuration class: Speech2TextForConditionalGeneration (Speech2Text model)
- SpeechEncoderDecoderConfig configuration class: SpeechEncoderDecoderModel (Speech Encoder decoder model)
- SpeechT5Config configuration class: SpeechT5ForSpeechToText (SpeechT5 model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:

- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
- Pop2PianoForConditionalGeneration (Pop2Piano model)
- SeamlessM4TForSpeechToText (SeamlessM4T model)
- SeamlessM4Tv2ForSpeechToText (SeamlessM4Tv2 model)
- SpeechEncoderDecoderModel (Speech Encoder decoder model)
- Speech2TextForConditionalGeneration (Speech2Text model)
- SpeechT5ForSpeechToText (SpeechT5 model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
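The output_loading_info flag described above returns loading diagnostics alongside the model. A short sketch, assuming the public facebook/s2t-small-librispeech-asr checkpoint (a Speech2Text model, one of the classes listed above) and an installed torch/transformers:

```python
from transformers import AutoModelForSpeechSeq2Seq

# With output_loading_info=True, from_pretrained returns a (model, info) pair;
# info is a dictionary reporting missing keys, unexpected keys, and error
# messages collected while loading the weights.
model, info = AutoModelForSpeechSeq2Seq.from_pretrained(
    "facebook/s2t-small-librispeech-asr", output_loading_info=True
)
print(type(model).__name__)
print(sorted(info.keys()))
```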
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

- Speech2TextConfig configuration class: TFSpeech2TextForConditionalGeneration (Speech2Text model)

attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager" implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:

- A string, the model id of a pretrained model hosted on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.

model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFSpeech2TextForConditionalGeneration
(Speech2Text model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
SpeechEncoderDecoderConfig
configuration class: FlaxSpeechEncoderDecoderModel
(Speech Encoder decoder model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation.

Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
FlaxSpeechEncoderDecoderModel
(Speech Encoder decoder model)

Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSpeechSeq2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an x-vector head for audio retrieval) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Data2VecAudioConfig
configuration class: Data2VecAudioForXVector
(Data2VecAudio model)UniSpeechSatConfig
configuration class: UniSpeechSatForXVector
(UniSpeechSat model)Wav2Vec2BertConfig
configuration class: Wav2Vec2BertForXVector
(Wav2Vec2-BERT model)Wav2Vec2Config
configuration class: Wav2Vec2ForXVector
(Wav2Vec2 model)Wav2Vec2ConformerConfig
configuration class: Wav2Vec2ConformerForXVector
(Wav2Vec2-Conformer model)WavLMConfig
configuration class: WavLMForXVector
(WavLM model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation.

Instantiates one of the model classes of the library (with an x-vector head for audio retrieval) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
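As a sketch, from_config() can be exercised entirely offline: building a model from a freshly constructed configuration yields randomly initialized weights, and the auto class selects the architecture from the configuration type. The tiny Wav2Vec2Config below is an illustrative assumption, not a pretrained configuration:

```python
from transformers import Wav2Vec2Config, AutoModelForAudioXVector

# A deliberately small config: nothing is downloaded, and the model is
# randomly initialized from the configuration alone.
config = Wav2Vec2Config(
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
)
model = AutoModelForAudioXVector.from_config(config)

# The auto class resolved the concrete architecture from the config type.
print(type(model).__name__)  # Wav2Vec2ForXVector
```

Any of the configuration classes listed above (Data2VecAudioConfig, WavLMConfig, etc.) can be substituted, and the corresponding x-vector model class is instantiated.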
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint into a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an x-vector head for audio retrieval) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Data2VecAudioForXVector
(Data2VecAudio model)UniSpeechSatForXVector
(UniSpeechSat model)Wav2Vec2ForXVector
(Wav2Vec2 model)Wav2Vec2BertForXVector
(Wav2Vec2-BERT model)Wav2Vec2ConformerForXVector
(Wav2Vec2-Conformer model)WavLMForXVector
(WavLM model)

The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioXVector
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioXVector.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForAudioXVector.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForAudioXVector.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
The following auto classes are available for the multimodal tasks below.
This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
TapasConfig
configuration class: TapasForQuestionAnswering
(TAPAS model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation.

Instantiates one of the model classes of the library (with a table question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
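The same pattern holds here; a minimal sketch, assuming a deliberately small (non-pretrained) TapasConfig built locally:

```python
from transformers import TapasConfig, AutoModelForTableQuestionAnswering

# Build the single supported architecture (TAPAS) from a local config;
# the weights are randomly initialized, nothing is downloaded.
config = TapasConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
)
model = AutoModelForTableQuestionAnswering.from_config(config)
print(type(model).__name__)  # TapasForQuestionAnswering
```

To obtain usable weights, load a fine-tuned checkpoint with from_pretrained() instead, as shown in the examples below.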
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint into a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TapasForQuestionAnswering
(TAPAS model)

The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
>>> # Update configuration during loading
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/tapas_tf_model_config.json")
>>> model = AutoModelForTableQuestionAnswering.from_pretrained(
... "./tf_model/tapas_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
TapasConfig
configuration class: TFTapasForQuestionAnswering
(TAPAS model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation.

Instantiates one of the model classes of the library (with a table question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFTapasForQuestionAnswering
(TAPAS model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
>>> # Update configuration during loading
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/tapas_pt_model_config.json")
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained(
... "./pt_model/tapas_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
LayoutLMConfig
configuration class: LayoutLMForQuestionAnswering
(LayoutLM model)LayoutLMv2Config
configuration class: LayoutLMv2ForQuestionAnswering
(LayoutLMv2 model)LayoutLMv3Config
configuration class: LayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation.

Instantiates one of the model classes of the library (with a document question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> model = AutoModelForDocumentQuestionAnswering.from_config(config)
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../tf_model/model.ckpt.index
). In
this case, from_tf
should be set to True
and a configuration object should be provided as
config
argument. This loading path is slower than converting the TensorFlow checkpoint into a
PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code lives in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
LayoutLMForQuestionAnswering
(LayoutLM model)LayoutLMv2ForQuestionAnswering
(LayoutLMv2 model)LayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)

The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> # Update configuration during loading
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/layoutlm_tf_model_config.json")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(
... "./tf_model/layoutlm_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
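The eval()/train() note above matters because modules such as dropout behave differently in the two modes; a generic PyTorch sketch (plain nn.Dropout here, not a Transformers model):

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(4)

drop.eval()                       # evaluation mode: dropout is a no-op
assert torch.equal(drop(x), x)

drop.train()                      # training mode: elements are randomly
y = drop(x)                       # zeroed, survivors scaled by 1/(1-p)
assert set(y.tolist()) <= {0.0, 2.0}
```

The same toggle applies to any model returned by the auto classes, since they are nn.Module subclasses.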
This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
LayoutLMConfig
configuration class: TFLayoutLMForQuestionAnswering
(LayoutLM model)LayoutLMv3Config
configuration class: TFLayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)str
, optional) —
The attention implementation to use in the model (if relevant). Can be any of "eager"
(manual implementation of the attention), "sdpa"
(using F.scaled_dot_product_attention
), or "flash_attention_2"
(using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. The default is otherwise the manual "eager"
implementation.

Instantiates one of the model classes of the library (with a document question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> model = TFAutoModelForDocumentQuestionAnswering.from_config(config)
( *model_args **kwargs )
Parameters
str
or os.PathLike
) —
Can be either:
./my_model_directory/
../pt_model/pytorch_model.bin
). In this
case, from_pt
should be set to True
and a configuration object should be provided as config
argument. This loading path is slower than converting the PyTorch model into a TensorFlow model
using the provided conversion scripts and loading the TensorFlow model afterwards.__init__()
method. pretrained_model_name_or_path
and a
configuration JSON file named config.json is found in the directory.str
or os.PathLike
, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the
standard cache should not be used. bool
, optional, defaults to False
) —
Load the model weights from a PyTorch checkpoint save file (see docstring of
pretrained_model_name_or_path
argument). bool
, optional, defaults to False
) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. Dict[str, str]
, optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}
. The proxies are used on each request. bool
, optional, defaults to False
) —
Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. bool
, optional, defaults to False
) —
Whether or not to only look at local files (e.g., not try downloading the model). str
, optional, defaults to "main"
) —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so revision
can be any
identifier allowed by git. bool
, optional, defaults to False
) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option
should only be set to True
for repositories you trust and in which you have read the code, as it will
execute code present on the Hub on your local machine. str
, optional, defaults to "main"
) —
The specific revision to use for the code on the Hub, if the code leaves in a different repository than
the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based
system for storing models and other artifacts on huggingface.co, so revision
can be any identifier
allowed by git. output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
TFLayoutLMForQuestionAnswering
(LayoutLM model)TFLayoutLMv3ForQuestionAnswering
(LayoutLMv3 model)Examples:
>>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> # Update configuration during loading
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/layoutlm_pt_model_config.json")
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained(
... "./pt_model/layoutlm_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a visual question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  - ViltConfig configuration class: ViltForQuestionAnswering (ViLT model)
- attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. Otherwise, the default is the manual "eager" implementation.

Instantiates one of the model classes of the library (with a visual question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
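As a minimal illustration of the note above, a model built with from_config() is randomly initialized from the configuration alone, and the configuration class determines which model class is returned. The tiny layer sizes below are hypothetical, chosen only to keep instantiation cheap:

```python
from transformers import AutoModelForVisualQuestionAnswering, ViltConfig

# A deliberately tiny ViLT config (hypothetical sizes, no download).
config = ViltConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
)

# from_config selects the model class from the config class, so this
# returns a randomly initialized ViltForQuestionAnswering; no weights
# are downloaded or loaded.
model = AutoModelForVisualQuestionAnswering.from_config(config)
print(type(model).__name__)  # ViltForQuestionAnswering
```

To obtain pretrained weights instead, use from_pretrained() as shown in the surrounding examples.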
( *model_args **kwargs )
Parameters

- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.

Instantiate one of the model classes of the library (with a visual question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- ViltForQuestionAnswering (ViLT model)

The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
>>> # Update configuration during loading
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/vilt_tf_model_config.json")
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained(
... "./tf_model/vilt_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  - GitConfig configuration class: GitForCausalLM (GIT model)
  - Idefics2Config configuration class: Idefics2ForConditionalGeneration (Idefics2 model)
  - Idefics3Config configuration class: Idefics3ForConditionalGeneration (Idefics3 model)
  - InstructBlipConfig configuration class: InstructBlipForConditionalGeneration (InstructBLIP model)
  - InstructBlipVideoConfig configuration class: InstructBlipVideoForConditionalGeneration (InstructBlipVideo model)
  - Kosmos2Config configuration class: Kosmos2ForConditionalGeneration (KOSMOS-2 model)
  - LlavaConfig configuration class: LlavaForConditionalGeneration (LLaVa model)
  - LlavaNextConfig configuration class: LlavaNextForConditionalGeneration (LLaVA-NeXT model)
  - LlavaNextVideoConfig configuration class: LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model)
  - LlavaOnevisionConfig configuration class: LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model)
  - MllamaConfig configuration class: MllamaForConditionalGeneration (Mllama model)
  - Pix2StructConfig configuration class: Pix2StructForConditionalGeneration (Pix2Struct model)
  - Qwen2VLConfig configuration class: Qwen2VLForConditionalGeneration (Qwen2VL model)
  - VideoLlavaConfig configuration class: VideoLlavaForConditionalGeneration (VideoLlava model)
  - VipLlavaConfig configuration class: VipLlavaForConditionalGeneration (VipLlava model)
  - VisionEncoderDecoderConfig configuration class: VisionEncoderDecoderModel (Vision Encoder decoder model)
- attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. Otherwise, the default is the manual "eager" implementation.

Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters

- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.

Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- GitForCausalLM (GIT model)
- Idefics2ForConditionalGeneration (Idefics2 model)
- Idefics3ForConditionalGeneration (Idefics3 model)
- InstructBlipForConditionalGeneration (InstructBLIP model)
- InstructBlipVideoForConditionalGeneration (InstructBlipVideo model)
- Kosmos2ForConditionalGeneration (KOSMOS-2 model)
- LlavaForConditionalGeneration (LLaVa model)
- LlavaNextForConditionalGeneration (LLaVA-NeXT model)
- LlavaNextVideoForConditionalGeneration (LLaVa-NeXT-Video model)
- LlavaOnevisionForConditionalGeneration (LLaVA-Onevision model)
- MllamaForConditionalGeneration (Mllama model)
- Pix2StructForConditionalGeneration (Pix2Struct model)
- Qwen2VLForConditionalGeneration (Qwen2VL model)
- VideoLlavaForConditionalGeneration (VideoLlava model)
- VipLlavaForConditionalGeneration (VipLlava model)
- VisionEncoderDecoderModel (Vision Encoder decoder model)

The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForVision2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVision2Seq.from_pretrained("microsoft/git-base")
>>> # Update configuration during loading
>>> model = AutoModelForVision2Seq.from_pretrained("microsoft/git-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/git_tf_model_config.json")
>>> model = AutoModelForVision2Seq.from_pretrained(
...     "./tf_model/git_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  - VisionEncoderDecoderConfig configuration class: TFVisionEncoderDecoderModel (Vision Encoder decoder model)
- attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. Otherwise, the default is the manual "eager" implementation.

Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters

- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.

Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- TFVisionEncoderDecoderModel (Vision Encoder decoder model)

Examples:
>>> from transformers import AutoConfig, TFAutoModelForVision2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForVision2Seq.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForVision2Seq.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForVision2Seq.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters

- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
  - VisionEncoderDecoderConfig configuration class: FlaxVisionEncoderDecoderModel (Vision Encoder decoder model)
- attn_implementation (str, optional) — The attention implementation to use in the model (if relevant). Can be any of "eager" (manual implementation of the attention), "sdpa" (using F.scaled_dot_product_attention), or "flash_attention_2" (using Dao-AILab/flash-attention). By default, if available, SDPA will be used for torch>=2.1.1. Otherwise, the default is the manual "eager" implementation.

Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( *model_args **kwargs )
Parameters

- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.

Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:

- FlaxVisionEncoderDecoderModel (Vision Encoder decoder model)

Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForVision2Seq.from_pretrained("google-bert/bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForVision2Seq.from_pretrained("google-bert/bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForVision2Seq.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )