---
license: apache-2.0
pipeline_tag: automatic-speech-recognition
base_model:
- openai/whisper-large-v3
tags:
- inference_endpoints
- audio
- transcription
---
# Inference Endpoint - Multilingual Audio Transcription with Whisper models
**Deploy OpenAI's Whisper as an Inference Endpoint to transcribe audio files to text in many languages.**
The resulting deployment exposes an HTTP endpoint compatible with the [OpenAI Platform Transcription API](https://platform.openai.com/docs/api-reference/audio/createTranscription),
which you can query using the OpenAI client libraries or directly with `cURL`, for instance.
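For example, here is a minimal sketch using the official `openai` Python client; the base URL, API key, and model name below are placeholders to replace with your own deployment's values:
```python
from openai import OpenAI

# Point the client at the deployed endpoint instead of api.openai.com.
# The base URL and API key are placeholders for your deployment.
client = OpenAI(
    base_url="http://localhost:8000/api/v1",
    api_key="<your-endpoint-token>",
)

with open("/path/to/audio/file", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",
        file=audio_file,
        response_format="text",
    )

print(transcription)
```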
## Available Routes
| path | description |
|:-----------------------------|:--------------------------------------------------|
| /api/v1/audio/transcriptions | Transcription endpoint to interact with the model |
| /docs                        | Interactive API documentation                      |
## Getting started
- **Getting text output from an audio file**
```bash
curl http://localhost:8000/api/v1/audio/transcriptions \
--request POST \
-F file=@</path/to/audio/file> \
-F "response_format=text"
```
- **Getting JSON output from an audio file**
```bash
curl http://localhost:8000/api/v1/audio/transcriptions \
--request POST \
-F file=@</path/to/audio/file> \
-F "response_format=json"
```
- **Getting segmented JSON output from an audio file** (see the parsing sketch below)
```bash
curl http://localhost:8000/api/v1/audio/transcriptions \
--request POST \
-F file=@</path/to/audio/file> \
-F "response_format=verbose_json"
```
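The `verbose_json` format returns per-segment timestamps alongside the full transcript. Below is a minimal sketch of consuming it with the `requests` library; the URL and file path are placeholders, and the `segments` field follows the OpenAI `verbose_json` schema:
```python
import requests

# Placeholder URL and file path; substitute your deployment's values.
url = "http://localhost:8000/api/v1/audio/transcriptions"

with open("/path/to/audio/file", "rb") as audio_file:
    response = requests.post(
        url,
        files={"file": audio_file},
        data={"response_format": "verbose_json"},
    )
response.raise_for_status()

result = response.json()
print(result["text"])
# Each segment carries start/end timestamps in seconds.
for segment in result.get("segments", []):
    print(f"[{segment['start']:7.2f}s -> {segment['end']:7.2f}s] {segment['text']}")
```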
## Specifications
| spec               | value                  | description                                                                                                  |
|:------------------ |:----------------------:|:-------------------------------------------------------------------------------------------------------------|
| Engine             | vLLM (v0.8.3)          | The underlying inference engine leverages [vLLM](https://docs.vllm.ai/en/latest/)                            |
| Hardware           | GPU (Ada Lovelace)     | Requires the target endpoint to run on NVIDIA GPUs with compute capability 8.9 or higher (Ada Lovelace)      |
| Compute data type  | `bfloat16`             | Computations (matmuls, norms, etc.) are done in `bfloat16` precision                                         |
| KV cache data type | `float8` (e4m3)        | The key-value cache is stored on the GPU in `float8` (`float8_e4m3`) precision to reduce memory usage        |
| PyTorch Compile    | ✅                     | Enables `torch.compile` to further optimize the model's execution                                            |
| CUDA Graphs        | ✅                     | Enables [CUDA Graphs](https://developer.nvidia.com/blog/cuda-graphs/) to reduce the overhead of launching GPU computations |
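Given the hardware requirement above, here is a quick sketch for checking whether a GPU meets the compute capability bar, using PyTorch's standard CUDA introspection:
```python
import torch

# Compute capability 8.9 (Ada Lovelace) is the stated minimum for this
# deployment, notably for the float8 KV cache.
REQUIRED_CAPABILITY = (8, 9)

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible to PyTorch")

major, minor = torch.cuda.get_device_capability()
print(f"Detected compute capability: {major}.{minor}")
if (major, minor) >= REQUIRED_CAPABILITY:
    print("GPU meets the requirement for this endpoint")
else:
    print("GPU is below the stated compute capability requirement (8.9)")
```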