There are a number of 🤗 pipelines that have been adapted for use with IPUs. The available IPU pipelines are:
- IPUFillMaskPipeline
- IPUText2TextGenerationPipeline
- IPUSummarizationPipeline
- IPUTranslationPipeline
- IPUTokenClassificationPipeline
- IPUZeroShotClassificationPipeline
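The class names above suggest a one-to-one mapping from 🤗 task identifiers to IPU pipeline classes. As a minimal sketch, assuming the usual 🤗 task strings and a `pipeline` entry point exposed by `optimum.graphcore` (both assumptions, not confirmed by this page), the selection logic could look like:

```python
# Hypothetical mapping from 🤗 task identifiers to the IPU pipeline
# classes listed on this page. The task strings are assumptions based
# on the equivalent transformers pipelines.
TASK_TO_IPU_PIPELINE = {
    "fill-mask": "IPUFillMaskPipeline",
    "text2text-generation": "IPUText2TextGenerationPipeline",
    "summarization": "IPUSummarizationPipeline",
    "translation": "IPUTranslationPipeline",
    "token-classification": "IPUTokenClassificationPipeline",
    "zero-shot-classification": "IPUZeroShotClassificationPipeline",
}


def ipu_pipeline(task: str, **kwargs):
    """Build an IPU pipeline for `task`, or return None when
    optimum-graphcore is not installed in the current environment."""
    if task not in TASK_TO_IPU_PIPELINE:
        raise ValueError(f"No IPU pipeline is available for task {task!r}")
    try:
        # Assumed entry point; check the optimum-graphcore docs for the
        # exact import path in your installed version.
        from optimum.graphcore import pipeline
    except ImportError:
        return None
    return pipeline(task, **kwargs)
```

This is illustrative only; on a system with IPU hardware and optimum-graphcore installed, the library's own `pipeline` factory should be used directly.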
IPUFillMaskPipeline: based on the 🤗 FillMaskPipeline pipeline.
( model: Union[PreTrainedModel, TFPreTrainedModel],
  tokenizer: Optional[PreTrainedTokenizer] = None,
  feature_extractor: Optional[SequenceFeatureExtractor] = None,
  image_processor: Optional[BaseImageProcessor] = None,
  modelcard: Optional[ModelCard] = None,
  framework: Optional[str] = None,
  task: str = '',
  args_parser: ArgumentHandler = None,
  device: Union[int, str, torch.device] = None,
  torch_dtype: Union[str, torch.dtype, None] = None,
  binary_output: bool = False,
  **kwargs )
IPUText2TextGenerationPipeline: based on the 🤗 Text2TextGenerationPipeline pipeline.
IPUSummarizationPipeline: based on the 🤗 SummarizationPipeline pipeline.
IPUTranslationPipeline: based on the 🤗 TranslationPipeline pipeline.
IPUTokenClassificationPipeline: based on the 🤗 TokenClassificationPipeline pipeline.
( args_parser = TokenClassificationArgumentHandler(), *args, **kwargs )
IPUZeroShotClassificationPipeline: based on the 🤗 ZeroShotClassificationPipeline pipeline.
( args_parser = ZeroShotClassificationArgumentHandler(), *args, **kwargs )
Parameters

model (PreTrainedModel or TFPreTrainedModel) — The model that will be used by the pipeline to make predictions. This needs to be a model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.

tokenizer (PreTrainedTokenizer) — The tokenizer that will be used by the pipeline to encode data for the model. This object inherits from PreTrainedTokenizer.

modelcard (str or ModelCard, optional) — Model card attributed to the model for this pipeline.

framework (str, optional) — The framework to use, either "pt" for PyTorch or "tf" for TensorFlow. The specified framework must be installed. If no framework is specified, it defaults to the one currently installed. If no framework is specified and both frameworks are installed, it defaults to the framework of the model, or to PyTorch if no model is provided.

task (str, defaults to "") — A task identifier for the pipeline.

num_workers (int, optional, defaults to 8) — When the pipeline uses a DataLoader (when passing a dataset, on GPU for a PyTorch model), the number of workers to use.

batch_size (int, optional, defaults to 1) — When the pipeline uses a DataLoader (when passing a dataset, on GPU for a PyTorch model), the batch size to use. For inference this is not always beneficial; please read Batching with pipelines.

args_parser (~pipelines.ArgumentHandler, optional) — Reference to the object in charge of parsing supplied pipeline parameters.

device (int, optional, defaults to -1) — Device ordinal for CPU/GPU support. Setting this to -1 will leverage the CPU; a positive value will run the model on the associated CUDA device ID. You can pass a native torch.device or a str too.

binary_output (bool, optional, defaults to False) — Flag indicating whether the output of the pipeline should be in a binary format (i.e. pickle) or as raw text.
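The `device` convention described above (-1 for CPU, a CUDA device ID otherwise, with `str` and `torch.device` also accepted) can be sketched as a small resolver. This helper is illustrative only and is not part of the pipeline API:

```python
def resolve_device(device=-1):
    """Illustrative resolver for the documented `device` parameter:
    -1 selects the CPU, a non-negative int selects the matching CUDA
    device, and a string (e.g. "cpu" or "cuda:0") passes through
    unchanged. A real pipeline would also accept a torch.device."""
    if isinstance(device, bool):
        # bool is a subclass of int; reject it explicitly.
        raise TypeError("device must be an int or str, not bool")
    if isinstance(device, str):
        return device
    if isinstance(device, int):
        return "cpu" if device < 0 else f"cuda:{device}"
    raise TypeError(f"Unsupported device specification: {device!r}")
```

For example, `resolve_device(-1)` yields the CPU and `resolve_device(1)` yields the second CUDA device, mirroring the behaviour the parameter description specifies.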