This is a quantized version of distil-whisper/distil-medium.en, converted with CTranslate2 to 8-bit integer (int8) weights for faster inference with minimal accuracy loss. It is intended for speech-to-text tasks where latency and throughput are critical.
Model tree for Rejekts/fastest-distil-whisper-medium.en
- Base model: distil-whisper/distil-medium.en
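For reference, a quantized checkpoint like this one can be produced from the base model with CTranslate2's Transformers converter. This is a sketch of the conversion step under those assumptions, not necessarily the exact command used for this repository; the output directory name is arbitrary:

```shell
# Sketch: convert the base Whisper checkpoint to CTranslate2 format with
# int8 quantization (requires: pip install ctranslate2 transformers).
ct2-transformers-converter \
  --model distil-whisper/distil-medium.en \
  --output_dir fastest-distil-whisper-medium.en \
  --copy_files tokenizer.json preprocessor_config.json \
  --quantization int8
```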