The TextNet model was proposed in FAST: Faster Arbitrarily-Shaped Text Detector with Minimalist Kernel Representation by Zhe Chen, Jiahao Wang, Wenhai Wang, Guo Chen, Enze Xie, Ping Luo, and Tong Lu. TextNet is a vision backbone useful for text detection tasks. It is the result of a neural architecture search (NAS) over backbones, using text detection performance as the reward function, so that it provides powerful features for text detection.
This model was contributed by Raghavan, jadechoghari and nielsr.
TextNet is mainly used as a backbone network in the architecture search for text detection. Each stage of the backbone network is comprised of a stride-2 convolution and searchable blocks. Specifically, the search uses a layer-level candidate set, defined as {conv3×3, conv1×3, conv3×1, identity}. Because the 1×3 and 3×1 convolutions have asymmetric kernels and oriented structure priors, they help capture the features of extreme-aspect-ratio and rotated text lines.
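The candidate set is easy to picture in PyTorch. Below is a minimal illustrative sketch of the four candidate operations, not the transformers implementation, assuming equal input and output channels and shape-preserving padding:

>>> import torch.nn as nn
>>> channels = 64  # illustrative channel count
>>> candidates = {
...     "conv3x3": nn.Conv2d(channels, channels, kernel_size=(3, 3), padding=(1, 1)),
...     "conv1x3": nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1)),
...     "conv3x1": nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0)),
...     "identity": nn.Identity(),
... }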
TextNet is the backbone of FAST, but it can also serve as an efficient image classifier. We add TextNetForImageClassification so that an image classification head can be trained on top of pre-trained TextNet weights.
( stem_kernel_size = 3 stem_stride = 2 stem_num_channels = 3 stem_out_channels = 64 stem_act_func = 'relu' image_size = [640, 640] conv_layer_kernel_sizes = None conv_layer_strides = None hidden_sizes = [64, 64, 128, 256, 512] batch_norm_eps = 1e-05 initializer_range = 0.02 out_features = None out_indices = None **kwargs )
Parameters

stem_kernel_size (int, optional, defaults to 3) — The kernel size for the initial convolution layer.
stem_stride (int, optional, defaults to 2) — The stride for the initial convolution layer.
stem_num_channels (int, optional, defaults to 3) — The number of input channels for the initial convolution layer.
stem_out_channels (int, optional, defaults to 64) — The number of output channels for the initial convolution layer.
stem_act_func (str, optional, defaults to "relu") — The activation function for the initial convolution layer.
image_size (Tuple[int, int], optional, defaults to [640, 640]) — The size (resolution) of each image.
conv_layer_kernel_sizes (List[List[List[int]]], optional) — A list of stage-wise kernel sizes. If None, defaults to: [[[3, 3], [3, 3], [3, 3]], [[3, 3], [1, 3], [3, 3], [3, 1]], [[3, 3], [3, 3], [3, 1], [1, 3]], [[3, 3], [3, 1], [1, 3], [3, 3]]].
conv_layer_strides (List[List[int]], optional) — A list of stage-wise strides. If None, defaults to: [[1, 2, 1], [2, 1, 1, 1], [2, 1, 1, 1], [2, 1, 1, 1]].
hidden_sizes (List[int], optional, defaults to [64, 64, 128, 256, 512]) — Dimensionality (hidden size) at each stage.
batch_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the batch normalization layers.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage.

This is the configuration class to store the configuration of a TextNetModel. It is used to instantiate a TextNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of czczup/textnet-base. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig
for more information.
Examples:
>>> from transformers import TextNetConfig, TextNetBackbone
>>> # Initializing a TextNetConfig
>>> configuration = TextNetConfig()
>>> # Initializing a model (with random weights)
>>> model = TextNetBackbone(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
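The configuration also selects which stages a TextNetBackbone returns; a minimal sketch, assuming the "stem"/"stage1"/… stage naming described under out_features above:

>>> # Request intermediate feature maps from two stages (illustrative choice)
>>> backbone_config = TextNetConfig(out_features=["stage1", "stage4"])
>>> backbone = TextNetBackbone(backbone_config)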
( do_resize: bool = True size: typing.Dict[str, int] = None size_divisor: int = 32 resample: Resampling = <Resampling.BILINEAR: 2> do_center_crop: bool = False crop_size: typing.Dict[str, int] = None do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = [0.485, 0.456, 0.406] image_std: typing.Union[float, typing.List[float], NoneType] = [0.229, 0.224, 0.225] do_convert_rgb: bool = True **kwargs )
Parameters

do_resize (bool, optional, defaults to True) — Whether to resize the image's (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) — Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method.
size_divisor (int, optional, defaults to 32) — Ensures height and width are rounded to a multiple of this value after resizing.
resample (PILImageResampling, optional, defaults to Resampling.BILINEAR) — Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_center_crop (bool, optional, defaults to False) — Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the preprocess method.
crop_size (Dict[str, int], optional, defaults to 224) — Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess method.
do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method.
do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
image_mean (float or List[float], optional, defaults to [0.485, 0.456, 0.406]) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to [0.229, 0.224, 0.225]) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_convert_rgb (bool, optional, defaults to True) — Whether to convert the image to RGB.

Constructs a TextNet image processor.
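For example, the processor can be instantiated with non-default settings; a minimal sketch (the values here are illustrative, not recommended settings):

>>> from transformers import TextNetImageProcessor
>>> image_processor = TextNetImageProcessor(size={"shortest_edge": 640}, size_divisor=32)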
( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None size_divisor: int = None resample: Resampling = None do_center_crop: bool = None crop_size: int = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs )
Parameters

images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio.
size_divisor (int, optional, defaults to 32) — Ensures height and width are rounded to a multiple of this value after resizing.
resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to use for normalization. Only has an effect if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to use for normalization. Only has an effect if do_normalize is set to True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the image to RGB.
return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
- Unset: Return a list of np.ndarray.
- TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
- TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
- TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
- TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
- "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
- "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
- "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
- "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
- "none" or ChannelDimension.NONE: image in (height, width) format.

Preprocess an image or batch of images.
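As a quick illustration, the sketch below runs the processor on a dummy NumPy image and returns a PyTorch batch (the image content is random, purely to show the input/output shapes):

>>> import numpy as np
>>> from transformers import TextNetImageProcessor
>>> image_processor = TextNetImageProcessor.from_pretrained("czczup/textnet-base")
>>> image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # dummy HWC image
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> pixel_values = inputs["pixel_values"]  # batched NCHW tensor after resize/normalize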
( config )
Parameters

config (TextNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare TextNet model outputting raw features without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( pixel_values: Tensor output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention
or tuple(torch.FloatTensor)
Parameters

pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See TextNetImageProcessor.__call__() for details.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (TextNetConfig) and inputs.

- last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The TextNetModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, TextNetModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image", trust_remote_code=True)
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("czczup/textnet-base")
>>> model = TextNetModel.from_pretrained("czczup/textnet-base")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 512, 20, 27]
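To get multi-scale feature maps rather than the pooled output, the backbone class can be used directly; a minimal sketch, assuming the same checkpoint can be loaded into TextNetBackbone:

>>> from transformers import TextNetBackbone
>>> backbone = TextNetBackbone.from_pretrained("czczup/textnet-base")
>>> with torch.no_grad():
...     backbone_outputs = backbone(**inputs)
>>> feature_maps = backbone_outputs.feature_maps  # one tensor per configured output stage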
( config )
Parameters

config (TextNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

TextNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( pixel_values: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters

pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See TextNetImageProcessor.__call__() for details.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)

A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (TextNetConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage.

The TextNetForImageClassification forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> import torch
>>> import requests
>>> from transformers import TextNetForImageClassification, TextNetImageProcessor
>>> from PIL import Image
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> processor = TextNetImageProcessor.from_pretrained("czczup/textnet-base")
>>> model = TextNetForImageClassification.from_pretrained("czczup/textnet-base")
>>> inputs = processor(images=image, return_tensors="pt", size={"height": 640, "width": 640})
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> outputs.logits.shape
torch.Size([1, 2])
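Passing labels alongside the pixel values also returns the loss described above; a minimal sketch with an illustrative label index:

>>> labels = torch.tensor([0])  # any index in [0, config.num_labels - 1]
>>> outputs = model(**inputs, labels=labels)
>>> loss, logits = outputs.loss, outputs.logits  # cross-entropy loss since num_labels > 1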