---
license: apache-2.0
language:
- zh
metrics:
- accuracy
- cer
pipeline_tag: automatic-speech-recognition
tags:
- Paraformer
- FunASR
- ASR
---

## Introduction

This repo is cloned from https://huggingface.co/funasr/Paraformer-large

## Install funasr_onnx

```shell
pip install -U funasr_onnx
# For users in China, you can install with the command:
# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
```

## Download the model

```shell
git clone https://huggingface.co/hoangus0303/paraformer-large-clone-from-funasr
```

## Inference with runtime

### Speech Recognition

#### Paraformer

```python
from funasr_onnx import Paraformer

model_dir = "./paraformer-large"
model = Paraformer(model_dir, batch_size=1, quantize=True)

wav_path = ['./funasr/paraformer-large/asr_example.wav']

result = model(wav_path)
print(result)
```

- `model_dir`: the model directory, which contains `model.onnx`, `config.yaml`, and `am.mvn`
- `batch_size`: `1` (default), the batch size used during inference
- `device_id`: `-1` (default), infer on CPU. To infer on GPU, set it to the GPU id (please make sure you have installed onnxruntime-gpu)
- `quantize`: `False` (default), load `model.onnx` from `model_dir`. If set to `True`, load `model_quant.onnx` from `model_dir`
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU

Input: wav file(s), supported formats: `str`, `np.ndarray`, `List[str]`

Output: `List[str]`: recognition results
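
Since the runtime also accepts an `np.ndarray` as input, a decoded waveform can be passed directly instead of a file path. The sketch below is illustrative, not part of the original example: it assumes `soundfile` is installed for reading the wav and that the audio is already at the 16 kHz sample rate expected by Paraformer-large.

```python
# Minimal sketch: pass a NumPy waveform to the model (assumes soundfile is installed
# and the wav is 16 kHz; both are assumptions, not part of the original example).
import soundfile as sf
from funasr_onnx import Paraformer

model = Paraformer("./paraformer-large", batch_size=1, quantize=True)

# Read the example wav into a float array; sample_rate is returned for reference.
speech, sample_rate = sf.read("./funasr/paraformer-large/asr_example.wav")

result = model(speech)  # np.ndarray input, per the supported formats listed above
print(result)
```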