The ColPali model was proposed in ColPali: Efficient Document Retrieval with Vision Language Models by Manuel Faysse*, Hugues Sibille*, Tony Wu*, Bilel Omrani, Gautier Viaud, Céline Hudelot, Pierre Colombo (* denotes equal contribution). Work led by ILLUIN Technology.
In our proposed ColPali approach, we leverage VLMs to construct efficient multi-vector embeddings directly from document images (“screenshots”) for document retrieval. We train the model to maximize the similarity between these document embeddings and the corresponding query embeddings, using the late interaction method introduced in ColBERT.
Using ColPali removes the need for potentially complex and brittle layout recognition and OCR pipelines with a single model that can take into account both the textual and visual content (layout, charts, etc.) of a document.
The colpali-engine package can be found here. This model was contributed by @tonywu71 and @yonigozlan.
This example demonstrates how to use ColPali to embed both queries and images, calculate their similarity scores, and identify the most relevant matches. For a specific query, you can retrieve the top-k most similar images by selecting the ones with the highest similarity scores.
import torch
from PIL import Image
from transformers import ColPaliForRetrieval, ColPaliProcessor
model_name = "vidore/colpali-v1.2-hf"
model = ColPaliForRetrieval.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
).eval()

processor = ColPaliProcessor.from_pretrained(model_name)

# Your inputs (replace dummy images with screenshots of your documents)
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "What is the organizational structure for our R&D department?",
    "Can you provide a breakdown of last year’s financial performance?",
]

# Process the inputs
batch_images = processor(images=images).to(model.device)
batch_queries = processor(text=queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images).embeddings
    query_embeddings = model(**batch_queries).embeddings
# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)
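To turn these scores into a ranking, you can take the top-k images per query, for example with torch.topk. A minimal sketch continuing the example above (the value of k is illustrative):

# Rank images per query: scores has shape (n_queries, n_images)
k = 1  # illustrative; use any k <= number of images
top_scores, top_indices = scores.topk(k, dim=1)
for query, indices, best in zip(queries, top_indices.tolist(), top_scores.tolist()):
    print(f"{query} -> image index {indices} (score {best})")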
ColPaliConfig( vlm_config = None, text_config = None, embedding_dim: int = 128, **kwargs )
Parameters
vlm_config (PretrainedConfig, optional) —
Configuration of the VLM backbone model.
text_config (PretrainedConfig, optional) —
Configuration of the text backbone model. Overrides the text_config attribute of the vlm_config if provided.
embedding_dim (int, optional, defaults to 128) —
Dimension of the multi-vector embeddings produced by the model.
Configuration class to store the configuration of a ColPaliForRetrieval. It is used to instantiate an instance of ColPaliForRetrieval according to the specified arguments, defining the model architecture following the methodology from the “ColPali: Efficient Document Retrieval with Vision Language Models” paper.
Creating a configuration with the default settings will result in a configuration where the VLM backbone is set to the default PaliGemma configuration, i.e. the one from vidore/colpali-v1.2.
The ColPali config is very similar to PaligemmaConfig, but with an extra attribute defining the embedding dimension.
Note that, contrary to what the class name suggests (the name actually refers to the ColPali methodology), you can use a VLM backbone other than PaliGemma by passing the corresponding VLM configuration to the class constructor.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
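As a quick illustration (a minimal sketch, not from the original docs), the configuration can be instantiated with its defaults and used to build a randomly initialized model:

from transformers import ColPaliConfig, ColPaliForRetrieval

# Default settings: PaliGemma-style VLM backbone, 128-dimensional multi-vector embeddings
config = ColPaliConfig(embedding_dim=128)

# Builds a ColPali model with randomly initialized weights (no pretrained checkpoint loaded)
model = ColPaliForRetrieval(config)

To use a different VLM backbone, pass its configuration as vlm_config instead.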
ColPaliProcessor( image_processor = None, tokenizer = None, chat_template = None, **kwargs )
Parameters
chat_template (str, optional) —
A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
Constructs a ColPali processor which wraps a PaliGemmaProcessor and adds special methods to process images and queries, as well as to compute the late-interaction retrieval score.
ColPaliProcessor offers all the functionalities of PaliGemmaProcessor. See the __call__()
for more information.
batch_decode — This method forwards all its arguments to GemmaTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.
decode — This method forwards all its arguments to GemmaTokenizerFast’s decode(). Please refer to the docstring of this method for more information.
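For instance, batch_decode can be used to round-trip the tokenized queries from the usage example above back to text (a minimal sketch; skip_special_tokens is a standard tokenizer argument forwarded to the underlying tokenizer):

# Decode the tokenized queries back into strings, dropping special tokens
decoded = processor.batch_decode(batch_queries.input_ids, skip_special_tokens=True)
print(decoded)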
process_images( images: Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]] = None, **kwargs: Unpack[ColPaliProcessorKwargs] ) → BatchFeature
Parameters
images (PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]) —
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a number of channels, H and W are image height and width.
return_tensors (str or TensorType, optional) —
If set, will return tensors of a particular framework. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
'jax': Return JAX jnp.ndarray objects.
Returns
A BatchFeature with the following fields:
attention_mask — returned when return_attention_mask=True or if “attention_mask” is in self.model_input_names and if text is not None.
pixel_values — returned when images is not None.
Prepare for the model one or several image(s). This method is a wrapper around ColPaliProcessor.__call__().
This method forwards the images and kwargs arguments to SiglipImageProcessor’s __call__().
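A minimal sketch of calling process_images directly, reusing the processor and model from the usage example above:

from PIL import Image

# Replace the dummy page with a real document screenshot
page = Image.new("RGB", (32, 32), color="white")

batch_images = processor.process_images(images=[page]).to(model.device)
print(batch_images.pixel_values.shape)  # (batch_size, num_channels, image_size, image_size)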
process_queries( text: Union[str, List[str]], **kwargs: Unpack[ColPaliProcessorKwargs] ) → BatchFeature
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
return_tensors (str or TensorType, optional) —
If set, will return tensors of a particular framework. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
'jax': Return JAX jnp.ndarray objects.
Returns
A BatchFeature with the following fields:
input_ids — list of token ids to be fed to the model.
attention_mask — returned when return_attention_mask=True or if “attention_mask” is in self.model_input_names and if text is not None.
Prepare for the model one or several texts. This method is a wrapper around ColPaliProcessor.__call__().
This method forwards the text and kwargs arguments to LlamaTokenizerFast’s __call__().
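And the matching sketch for process_queries, again reusing the processor and model from the usage example above:

queries = ["What is the organizational structure for our R&D department?"]

batch_queries = processor.process_queries(text=queries).to(model.device)
print(batch_queries.input_ids.shape)  # (batch_size, sequence_length)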
score_retrieval( query_embeddings: Union[torch.Tensor, List[torch.Tensor]], passage_embeddings: Union[torch.Tensor, List[torch.Tensor]], batch_size: int = 128, output_dtype: Optional[torch.dtype] = None, output_device: Union[torch.device, str] = 'cpu' ) → torch.Tensor
Parameters
query_embeddings (Union[torch.Tensor, List[torch.Tensor]]) —
Query embeddings.
passage_embeddings (Union[torch.Tensor, List[torch.Tensor]]) —
Passage embeddings.
batch_size (int, optional, defaults to 128) —
Batch size for computing scores.
output_dtype (torch.dtype, optional, defaults to torch.float32) —
The dtype of the output tensor. If None, the dtype of the input embeddings is used.
output_device (torch.device or str, optional, defaults to "cpu") —
The device of the output tensor.
Returns
torch.Tensor
A tensor of shape (n_queries, n_passages) containing the scores. The score tensor is saved on the “cpu” device.
Compute the late-interaction/MaxSim score (ColBERT-like) for the given multi-vector query embeddings (qs) and passage embeddings (ps). For ColPali, a passage is the image of a document page.
Because the embedding tensors are multi-vector and can thus have different shapes, they should be fed as either:
(1) a list of tensors, where the i-th tensor is of shape (sequence_length_i, embedding_dim), or
(2) a single tensor of shape (n_passages, max_sequence_length, embedding_dim), usually obtained by padding the list of tensors.
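For intuition, here is a hedged sketch of the MaxSim computation for a single query/passage pair; it mirrors what score_retrieval does in spirit, without its batching, dtype, and device handling:

import torch

def maxsim_score(query_emb: torch.Tensor, passage_emb: torch.Tensor) -> torch.Tensor:
    """query_emb: (query_length, embedding_dim); passage_emb: (passage_length, embedding_dim)."""
    # Token-level similarity matrix of shape (query_length, passage_length)
    sim = query_emb @ passage_emb.T
    # For each query token keep its best-matching passage token, then sum over query tokens
    return sim.max(dim=1).values.sum()

In practice, prefer processor.score_retrieval, which batches the computation and handles padded inputs.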
ColPaliForRetrieval.forward( input_ids: torch.LongTensor = None, pixel_values: torch.FloatTensor = None, attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs ) → transformers.models.colpali.modeling_colpali.ColPaliForRetrievalOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) —
The tensors corresponding to the input images. Pixel values can be obtained using AutoImageProcessor. See SiglipImageProcessor.__call__() for details (PaliGemmaProcessor uses SiglipImageProcessor for processing images). If None, ColPali will only process text (query embeddings).
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the VLM backbone model.
Returns
transformers.models.colpali.modeling_colpali.ColPaliForRetrievalOutput or tuple(torch.FloatTensor)
A transformers.models.colpali.modeling_colpali.ColPaliForRetrievalOutput
or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (ColPaliConfig) and inputs.
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Language modeling loss (for next-token prediction).
embeddings (torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
) — The embeddings of the model.
past_key_values (tuple(tuple(torch.FloatTensor))
, optional, returned when use_cache=True
is passed or when config.use_cache=True
) — Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
image_hidden_states (torch.FloatTensor
, optional) — A torch.FloatTensor
of size (batch_size, num_images, sequence_length, hidden_size)
.
image_hidden_states of the model produced by the vision encoder after projecting last hidden state.
The ColPaliForRetrieval forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
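Finally, a short sketch of calling the model directly and inspecting the documented output fields, reusing torch, model, and batch_images from the usage example above (which optional fields are populated depends on the flags you pass):

with torch.no_grad():
    outputs = model(**batch_images, output_hidden_states=True, return_dict=True)

print(outputs.embeddings.shape)    # (batch_size, sequence_length, embedding dimension)
print(len(outputs.hidden_states))  # one tensor per layer, plus the embedding output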