( task, model: Optional = None, vaip_config: Optional = None, model_type: Optional = None, feature_extractor: Union = None, image_processor: Union = None, use_fast: bool = True, token: Union = None, revision: Optional = None, **kwargs ) → Pipeline
Parameters
task (str) — The task defining which pipeline will be returned, for example "image-classification" or "object-detection" as shown in the examples below.
model (Optional[Any], defaults to None) — The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model. If not provided, the default model for the specified task will be loaded.
vaip_config (Optional[str], defaults to None) — Runtime configuration file for inference with the Ryzen IPU. A default config file can be found in the Ryzen AI VOE package, extracted during installation under the name vaip_config.json.
model_type (Optional[str], defaults to None) — Model type for the model.
feature_extractor (Union[str, "PreTrainedFeatureExtractor"], defaults to None) — The feature extractor that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained feature extractor.
image_processor (Union[str, BaseImageProcessor], defaults to None) — The image processor that will be used by the pipeline for image-related tasks.
use_fast (bool, defaults to True) — Whether or not to use a Fast tokenizer if possible.
token (Union[str, bool], defaults to None) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
revision (str, defaults to None) — The specific model version to use, specified as a branch name, tag name, or commit id.
**kwargs — Additional keyword arguments passed to the underlying pipeline class.
Returns
Pipeline
An instance of the specified pipeline for the given task and model.
Utility method to build a pipeline for various RyzenAI tasks.
This function creates a pipeline for the specified task, using the given model or loading the default model for that task. The pipeline includes components such as an image processor and the model itself.
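As a quick illustration of how these arguments fit together, here is a minimal sketch. It assumes the vaip_config.json from the Ryzen AI VOE package is available in the working directory; the model id is taken from the example further below, and the revision value is shown purely as a hypothetical.
from optimum.amd.ryzenai import pipeline

# Build an image-classification pipeline that runs on the Ryzen IPU.
# vaip_config points at the runtime configuration shipped with the Ryzen AI VOE package.
pipe = pipeline(
    "image-classification",
    model="mohitsha/timm-resnet18-onnx-quantized-ryzen",
    vaip_config="vaip_config.json",
    use_fast=True,    # default: use a Fast tokenizer when one is available
    revision="main",  # hypothetical: pin a branch name, tag name, or commit id
)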
( model: Union, tokenizer: Optional = None, feature_extractor: Optional = None, image_processor: Optional = None, processor: Optional = None, modelcard: Optional = None, framework: Optional = None, task: str = '', args_parser: ArgumentHandler = None, device: Union = None, torch_dtype: Union = None, binary_output: bool = False, **kwargs )
Example usage:
import requests
from PIL import Image

from optimum.amd.ryzenai import pipeline

# Load a test image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Build an image-classification pipeline on the Ryzen IPU and run inference.
model_id = "mohitsha/timm-resnet18-onnx-quantized-ryzen"
pipe = pipeline("image-classification", model=model_id, vaip_config="vaip_config.json")
print(pipe(image))
( model: Union, tokenizer: Optional = None, feature_extractor: Optional = None, image_processor: Optional = None, processor: Optional = None, modelcard: Optional = None, framework: Optional = None, task: str = '', args_parser: ArgumentHandler = None, device: Union = None, torch_dtype: Union = None, binary_output: bool = False, **kwargs )
Supported model types
Example usage:
import requests
from PIL import Image

from optimum.amd.ryzenai import pipeline

# Load a test image from the COCO validation set (a local file path works as well).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Build an object-detection pipeline for a quantized YOLOX model and run inference.
model_id = "amd/yolox-s"
detector = pipeline("object-detection", model=model_id, vaip_config="vaip_config.json", model_type="yolox")
outputs = detector(image)
print(outputs)
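A sketch of consuming the predictions follows. It assumes the detector returns the usual transformers-style object-detection output, a list of dictionaries with "score", "label", and "box" keys; the exact output schema of the RyzenAI YOLOX pipeline is an assumption here and may differ.
# Assumption: each prediction looks like
# {"score": float, "label": str, "box": {"xmin": ..., "ymin": ..., "xmax": ..., "ymax": ...}}.
for prediction in outputs:
    if prediction["score"] > 0.5:  # keep reasonably confident detections only
        print(prediction["label"], prediction["box"])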