The CogVLM model was proposed in CogVLM: Visual Expert for Pretrained Language Models by Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, Jie Tang. CogVLM adds separate QKV and MLP weights to a frozen large language model, enabling a strong multimodal foundation model that performs well on various multimodal benchmarks.
The abstract from the paper is the following:
We introduce CogVLM, a powerful open-source visual language foundation model. Different from the popular shallow alignment method which maps image features into the input space of language model, CogVLM bridges the gap between the frozen pretrained language model and image encoder by a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables deep fusion of vision language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flicker30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and ranks the 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., surpassing or matching PaLI-X 55B.
This model was contributed by nielsr. The original code can be found here.
class transformers.CogvlmConfig( vision_config = None, vocab_size = 32000, hidden_size = 4096, intermediate_size = 11008, num_hidden_layers = 32, num_attention_heads = 32, hidden_act = 'silu', max_position_embeddings = 2048, initializer_range = 0.02, rms_norm_eps = 1e-05, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, tie_word_embeddings = False, use_cache = True, **kwargs )
Parameters

vision_config (dict, optional) — Dictionary of configuration options used to initialize CogvlmVisionConfig.
vocab_size (int, optional, defaults to 32000) — Vocabulary size of the CogVLM model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling CogvlmModel.
hidden_size (int, optional, defaults to 4096) — Dimension of the hidden representations.
intermediate_size (int, optional, defaults to 11008) — Dimension of the MLP representations.
num_hidden_layers (int, optional, defaults to 32) — Number of hidden layers in the Transformer decoder.
num_attention_heads (int, optional, defaults to 32) — Number of attention heads for each attention layer in the Transformer decoder.
hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the decoder.
max_position_embeddings (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the rms normalization layers.
pad_token_id (int, optional, defaults to 0) — Padding token id.
bos_token_id (int, optional, defaults to 1) — Beginning of stream token id.
eos_token_id (int, optional, defaults to 2) — End of stream token id.
tie_word_embeddings (bool, optional, defaults to False) — Whether to tie weight embeddings.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.

CogvlmConfig is the configuration class to store the configuration of a CogvlmForCausalLM. It is used to instantiate a CogVLM model according to the specified arguments, defining the vision model and language model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the CogVLM THUDM/cogvlm-chat-hf architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import CogvlmConfig, CogvlmForCausalLM
>>> # Initializing a CogvlmConfig with THUDM/cogvlm-chat-hf style configuration
>>> configuration = CogvlmConfig()
>>> # Initializing a CogvlmForCausalLM (with random weights) from the THUDM/cogvlm-chat-hf style configuration
>>> model = CogvlmForCausalLM(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
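The vision_config argument accepts a dictionary of CogvlmVisionConfig options (documented below), so a customized vision encoder can be specified when building the full configuration. A minimal sketch, assuming the dictionary produced by CogvlmVisionConfig.to_dict() is accepted as described in the Parameters above:

>>> from transformers import CogvlmConfig, CogvlmVisionConfig

>>> # Build a vision configuration and pass it (as a dict) to the full CogVLM configuration
>>> vision_configuration = CogvlmVisionConfig(image_size=490, patch_size=14)
>>> configuration = CogvlmConfig(vision_config=vision_configuration.to_dict(), vocab_size=32000)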
class transformers.CogvlmVisionConfig( image_size = 490, num_channels = 3, patch_size = 14, hidden_size = 1792, intermediate_size = 15360, num_hidden_layers = 63, num_attention_heads = 16, hidden_act = 'gelu', layer_norm_eps = 1e-06, initializer_range = 1e-10, dropout_prob = 0.0, **kwargs )
Parameters

image_size (int, optional, defaults to 490) — The size (resolution) of each image.
num_channels (int, optional, defaults to 3) — The number of channels in each image.
patch_size (int, optional, defaults to 14) — The size (resolution) of each patch.
hidden_size (int, optional, defaults to 1792) — Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 15360) — Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 63) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
layer_norm_eps (float, optional, defaults to 1e-06) — The epsilon used for layernorm layers.
initializer_range (float, optional, defaults to 1e-10) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

This is the configuration class to store the configuration of a CogvlmVisionModel. It is used to instantiate a CogVLM vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CogVLM THUDM/cogvlm-chat-hf architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import CogvlmVisionConfig, CogvlmVisionModel
>>> # Initializing a CogvlmVisionConfig with THUDM/cogvlm-chat-hf style configuration
>>> configuration = CogvlmVisionConfig()
>>> # Initializing a CogvlmVisionModel (with random weights) from the THUDM/cogvlm-chat-hf style configuration
>>> model = CogvlmVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
class transformers.CogvlmProcessor( image_processor, tokenizer, image_size: int, patch_size: int )
Parameters

image_processor (CLIPImageProcessor) — An instance of CLIPImageProcessor. The image processor is a required input.
tokenizer (AutoTokenizer) — An instance of LlamaTokenizer. The tokenizer is a required input.
image_size (int) — The image size used by the model.
patch_size (int) — The patch size used by the model.

Constructs a CogVLM processor which wraps a CLIP image processor and a LLaMa tokenizer into a single processor.

CogvlmProcessor offers all the functionalities of CLIPImageProcessor and LlamaTokenizer. See the docstring of __call__() and decode() for more information.
This method forwards all its arguments to PreTrainedTokenizer’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to PreTrainedTokenizer’s decode(). Please refer to the docstring of this method for more information.
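A short usage sketch of the processor, reusing the checkpoint and prompt format from the model examples below:

>>> import requests
>>> from PIL import Image
>>> from transformers import CogvlmProcessor

>>> processor = CogvlmProcessor.from_pretrained("THUDM/cogvlm-chat-hf")

>>> # prepare an image and a text prompt for the model
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, text="Question: Describe this image Answer:", return_tensors="pt")

>>> # the wrapped tokenizer can decode token ids back to text
>>> text = processor.batch_decode(inputs["input_ids"], skip_special_tokens=True)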
class transformers.CogvlmModel( config )
Parameters

config (CogvlmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
CogVLM model without any head on top, just outputting raw hidden states.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward( input_ids: LongTensor = None, pixel_values: List = None, token_type_ids: Optional = None, attention_mask: Optional = None, position_ids: Optional = None, past_key_values: Optional = None, inputs_embeds: Optional = None, use_cache: Optional = None, output_attentions: Optional = None, output_hidden_states: Optional = None, return_dict: Optional = None ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters

input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored, the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CogvlmConfig) and inputs.

last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used, only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.

past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head), and optionally, if config.is_encoder_decoder=True, 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and, optionally, if config.is_encoder_decoder=True, in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The CogvlmModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import CogvlmProcessor, CogvlmModel
>>> import torch
>>> import requests
>>> from PIL import Image
>>> processor = CogvlmProcessor.from_pretrained("THUDM/cogvlm-chat-hf")
>>> model = CogvlmModel.from_pretrained("THUDM/cogvlm-chat-hf")
>>> # load image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> query = "Describe this image"
>>> prompt = f"Question: {query} Answer:"
>>> inputs = processor(images=image, text=prompt, return_tensors="pt")
>>> # forward pass
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
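The last dimension of the returned features matches the language model's hidden_size from CogvlmConfig; a quick check on the example above (4096 is the default value, assumed here to apply to this checkpoint):

>>> # the last dimension of the hidden states equals config.hidden_size
>>> hidden_dim = last_hidden_state.shape[-1]  # 4096 with the default configuration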
class transformers.CogvlmForCausalLM( config )
Parameters

config (CogvlmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
CogVLM model with a language modeling head on top (a linear layer on top of the hidden states).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward( input_ids: LongTensor = None, pixel_values: FloatTensor = None, token_type_ids: Optional = None, attention_mask: Optional = None, position_ids: Optional = None, past_key_values: Optional = None, inputs_embeds: Optional = None, use_cache: Optional = None, output_attentions: Optional = None, output_hidden_states: Optional = None, return_dict: Optional = None, labels: Optional = None ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters

input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored, the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CogvlmConfig) and inputs.

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The CogvlmForCausalLM forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import CogvlmProcessor, CogvlmForCausalLM
>>> import torch
>>> import requests
>>> from PIL import Image
>>> processor = CogvlmProcessor.from_pretrained("THUDM/cogvlm-chat-hf")
>>> model = CogvlmForCausalLM.from_pretrained("THUDM/cogvlm-chat-hf")
>>> # load image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> query = "Describe this image"
>>> prompt = f"Question: {query} Answer:"
>>> inputs = processor(images=image, text=prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs)
>>> generated_text = processor.batch_decode(outputs, skip_special_tokens=True)
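As for other decoder-only models in Transformers, the sequences returned by generate() typically include the prompt tokens; a small sketch of stripping them before decoding, assuming the standard behavior of returning prompt plus continuation:

>>> # keep only the newly generated tokens before decoding
>>> new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
>>> generated_text = processor.batch_decode(new_tokens, skip_special_tokens=True)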
generate( inputs: Optional = None, generation_config: Optional = None, logits_processor: Optional = None, stopping_criteria: Optional = None, prefix_allowed_tokens_fn: Optional = None, synced_gpus: Optional = None, assistant_model: Optional = None, streamer: Optional = None, negative_prompt_ids: Optional = None, negative_prompt_attention_mask: Optional = None, **kwargs ) → ModelOutput or torch.LongTensor
Parameters

inputs (torch.Tensor of varying shape depending on the modality, optional) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values.
generation_config (GenerationConfig, optional) — **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig's default values, whose documentation should be checked to parameterize generation.
logits_processor (LogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users.
stopping_criteria (StoppingCriteriaList, optional) — Custom stopping criteria that complement the default stopping criteria built from arguments and a generation config. If a stopping criterion is passed that is already created with the arguments or a generation config an error is thrown. If your stopping criteria depend on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate. This feature is intended for advanced users.
prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional) — If provided, this function constrains the beam search to allowed tokens only at each step. If not provided no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID batch_id and the previously generated tokens inputs_ids. This argument is useful for constrained generation conditioned on the prefix, as described in Autoregressive Entity Retrieval.
synced_gpus (bool, optional) — Whether to continue running the while loop until max_length. Unless overridden, this flag will be set to True in a DeepSpeed ZeRO Stage 3 multi-GPU environment to avoid hanging if one GPU finishes generating before the others; otherwise it will be set to False.
assistant_model (PreTrainedModel, optional) — An assistant model that can be used to accelerate generation. The assistant model must have the exact same tokenizer. The acceleration is achieved when forecasting candidate tokens with the assistant model is much faster than running generation with the model you're calling generate from. As such, the assistant model should be much smaller.
streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing.
negative_prompt_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — The negative prompt needed for some processors such as CFG. The batch size must match the input batch size. This is an experimental feature, subject to breaking API changes in future versions.
negative_prompt_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Attention mask for negative_prompt_ids.
kwargs (Dict[str, Any], optional) — Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.

Returns
ModelOutput or torch.LongTensor
A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a torch.LongTensor.

If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible ModelOutput types are:

If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible ModelOutput types are:
Generates sequences of token ids for models with a language modeling head.
Most generation-controlling parameters are set in generation_config
which, if not passed, will be set to the
model’s default generation configuration. You can override any generation_config
by passing the corresponding
parameters to generate(), e.g. .generate(inputs, num_beams=4, do_sample=True)
.
For an overview of generation strategies and code examples, check out the following guide.
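Building on the CogvlmForCausalLM example above, generation parameters can be overridden directly in the call; a brief sketch (max_new_tokens and num_beams are standard GenerationConfig attributes, shown with illustrative values):

>>> # override generation parameters ad hoc instead of editing the model's generation_config
>>> outputs = model.generate(**inputs, max_new_tokens=50, num_beams=4, do_sample=False)
>>> print(processor.batch_decode(outputs, skip_special_tokens=True))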