Whisper large-v3-turbo model for CTranslate2
This repository contains the Whisper large-v3-turbo model converted to the CTranslate2 format, for use with faster-whisper.
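The conversion can be reproduced with CTranslate2's Transformers converter. This is a sketch: the exact quantization flag and copied files used for this repository are assumptions, and the download is several gigabytes.

```shell
# Install the converter and its dependencies
pip install ctranslate2 "transformers[torch]"

# Convert the original checkpoint to CTranslate2 format.
# --quantization and --copy_files here are illustrative choices,
# not necessarily what was used for this repository.
ct2-transformers-converter \
  --model openai/whisper-large-v3-turbo \
  --output_dir faster-whisper-large-v3-turbo-ct2 \
  --copy_files tokenizer.json preprocessor_config.json \
  --quantization float16
```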
Example
```python
from huggingface_hub import snapshot_download
from faster_whisper import WhisperModel

# Download the converted model from the Hugging Face Hub
repo_id = "jootanehorror/faster-whisper-large-v3-turbo-ct2"
local_dir = "faster-whisper-large-v3-turbo-ct2"
snapshot_download(repo_id=repo_id, local_dir=local_dir, repo_type="model")

# Load the model on CPU with int8 quantization
model = WhisperModel(local_dir, device="cpu", compute_type="int8")

segments, info = model.transcribe("sample.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
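The segment timestamps above are plain floats in seconds. A small helper (hypothetical, not part of faster-whisper) can render them as clock times, e.g. for subtitle output:

```python
def format_timestamp(seconds: float) -> str:
    # Hypothetical helper, not part of the faster-whisper API:
    # renders a float number of seconds as HH:MM:SS.mmm.
    hours, rem = divmod(int(seconds), 3600)
    minutes, secs = divmod(rem, 60)
    millis = int((seconds % 1) * 1000)  # truncate sub-second part to milliseconds
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{millis:03d}"


# Example: 3661.5 seconds is 1 hour, 1 minute, 1.5 seconds
print(format_timestamp(3661.5))  # -> 01:01:01.500
```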
More information
For more information about the model, see its official GitHub page.