Skriller0208 committed on
Commit
55d16cb
·
verified ·
1 Parent(s): c3abaf4

Delete README.md

Files changed (1)
  1. README.md +0 -832
README.md DELETED
@@ -1,832 +0,0 @@
1
- # whisper.cpp
2
-
3
- ![whisper.cpp](https://user-images.githubusercontent.com/1991296/235238348-05d0f6a4-da44-4900-a1de-d0707e75b763.jpeg)
4
-
5
- [![Actions Status](https://github.com/ggerganov/whisper.cpp/workflows/CI/badge.svg)](https://github.com/ggerganov/whisper.cpp/actions)
6
- [![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
7
- [![Conan Center](https://shields.io/conan/v/whisper-cpp)](https://conan.io/center/whisper-cpp)
8
- [![npm](https://img.shields.io/npm/v/whisper.cpp.svg)](https://www.npmjs.com/package/whisper.cpp/)
9
-
10
- Stable: [v1.6.2](https://github.com/ggerganov/whisper.cpp/releases/tag/v1.6.2) / [Roadmap | F.A.Q.](https://github.com/ggerganov/whisper.cpp/discussions/126)
11
-
12
- High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:
13
-
14
- - Plain C/C++ implementation without dependencies
15
- - Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](https://github.com/ggerganov/whisper.cpp#core-ml-support)
16
- - AVX intrinsics support for x86 architectures
17
- - VSX intrinsics support for POWER architectures
18
- - Mixed F16 / F32 precision
19
- - [4-bit and 5-bit integer quantization support](https://github.com/ggerganov/whisper.cpp#quantization)
20
- - Zero memory allocations at runtime
21
- - Support for CPU-only inference
22
- - [Efficient GPU support for NVIDIA](https://github.com/ggerganov/whisper.cpp#nvidia-gpu-support-via-cublas)
23
- - [OpenVINO Support](https://github.com/ggerganov/whisper.cpp#openvino-support)
24
- - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
25
-
26
- Supported platforms:
27
-
28
- - [x] Mac OS (Intel and Arm)
29
- - [x] [iOS](examples/whisper.objc)
30
- - [x] [Android](examples/whisper.android)
31
- - [x] [Java](bindings/java/README.md)
32
- - [x] Linux / [FreeBSD](https://github.com/ggerganov/whisper.cpp/issues/56#issuecomment-1350920264)
33
- - [x] [WebAssembly](examples/whisper.wasm)
34
- - [x] Windows ([MSVC](https://github.com/ggerganov/whisper.cpp/blob/master/.github/workflows/build.yml#L117-L144) and [MinGW](https://github.com/ggerganov/whisper.cpp/issues/168))
35
- - [x] [Raspberry Pi](https://github.com/ggerganov/whisper.cpp/discussions/166)
36
- - [x] [Docker](https://github.com/ggerganov/whisper.cpp/pkgs/container/whisper.cpp)
37
-
38
- The entire high-level implementation of the model is contained in [whisper.h](include/whisper.h) and [whisper.cpp](src/whisper.cpp).
39
- The rest of the code is part of the [`ggml`](https://github.com/ggerganov/ggml) machine learning library.
40
-
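Since the public interface is a plain C-style API, integrating the library into your own program requires only [whisper.h](include/whisper.h) and linking against the library. Below is a minimal sketch of a transcription loop, assuming `pcmf32` already holds 16 kHz mono float samples (the bundled examples show how to decode a WAV file into this format); whisper.h remains the authoritative reference for the exact signatures:

```cpp
#include "whisper.h"

#include <cstdio>
#include <vector>

int main() {
    // load a ggml model (the path is just an example)
    struct whisper_context * ctx = whisper_init_from_file_with_params(
        "models/ggml-base.en.bin", whisper_context_default_params());
    if (!ctx) {
        return 1;
    }

    // 16 kHz, mono, float PCM - decoding the audio file is up to the caller
    std::vector<float> pcmf32;

    struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    wparams.n_threads = 4;

    // run the full encoder/decoder pipeline and print the resulting segments
    if (whisper_full(ctx, wparams, pcmf32.data(), (int) pcmf32.size()) == 0) {
        for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
            printf("%s\n", whisper_full_get_segment_text(ctx, i));
        }
    }

    whisper_free(ctx);
    return 0;
}
```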
41
- Having such a lightweight implementation of the model makes it easy to integrate into different platforms and applications.
42
- As an example, here is a video of running the model on an iPhone 13 device - fully offline, on-device: [whisper.objc](examples/whisper.objc)
43
-
44
- https://user-images.githubusercontent.com/1991296/197385372-962a6dea-bca1-4d50-bf96-1d8c27b98c81.mp4
45
-
46
- You can also easily make your own offline voice assistant application: [command](examples/command)
47
-
48
- https://user-images.githubusercontent.com/1991296/204038393-2f846eae-c255-4099-a76d-5735c25c49da.mp4
49
-
50
- On Apple Silicon, the inference runs fully on the GPU via Metal:
51
-
52
- https://github.com/ggerganov/whisper.cpp/assets/1991296/c82e8f86-60dc-49f2-b048-d2fdbd6b5225
53
-
54
- Or you can even run it straight in the browser: [talk.wasm](examples/talk.wasm)
55
-
56
- ## Implementation details
57
-
58
- - The core tensor operations are implemented in C ([ggml.h](ggml/include/ggml.h) / [ggml.c](ggml/src/ggml.c))
59
- - The transformer model and the high-level C-style API are implemented in C++ ([whisper.h](include/whisper.h) / [whisper.cpp](src/whisper.cpp))
60
- - Sample usage is demonstrated in [main.cpp](examples/main)
61
- - Sample real-time audio transcription from the microphone is demonstrated in [stream.cpp](examples/stream)
62
- - Various other examples are available in the [examples](examples) folder
63
-
64
- The tensor operators are heavily optimized for Apple Silicon CPUs. Depending on the computation size, ARM NEON SIMD intrinsics or CBLAS Accelerate framework routines are used. The latter are especially effective for larger sizes, since the Accelerate framework utilizes the special-purpose AMX coprocessor available in modern Apple products.
65
-
66
- ## Quick start
67
-
68
- First clone the repository:
69
-
70
- ```bash
71
- git clone https://github.com/ggerganov/whisper.cpp.git
72
- ```
73
-
74
- Then, download one of the Whisper [models](models/README.md) converted to [`ggml` format](#ggml-format). For example:
75
-
76
- ```bash
77
- bash ./models/download-ggml-model.sh base.en
78
- ```
79
-
80
- Now build the [main](examples/main) example and transcribe an audio file like this:
81
-
82
- ```bash
83
- # build the main example
84
- make
85
-
86
- # transcribe an audio file
87
- ./main -f samples/jfk.wav
88
- ```
89
-
90
- ---
91
-
92
- For a quick demo, simply run `make base.en`:
93
-
94
- ```text
95
- $ make base.en
96
-
97
- cc -I. -O3 -std=c11 -pthread -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
98
- c++ -I. -I./examples -O3 -std=c++11 -pthread -c whisper.cpp -o whisper.o
99
- c++ -I. -I./examples -O3 -std=c++11 -pthread examples/main/main.cpp whisper.o ggml.o -o main -framework Accelerate
100
- ./main -h
101
-
102
- usage: ./main [options] file0.wav file1.wav ...
103
-
104
- options:
105
- -h, --help [default] show this help message and exit
106
- -t N, --threads N [4 ] number of threads to use during computation
107
- -p N, --processors N [1 ] number of processors to use during computation
108
- -ot N, --offset-t N [0 ] time offset in milliseconds
109
- -on N, --offset-n N [0 ] segment index offset
110
- -d N, --duration N [0 ] duration of audio to process in milliseconds
111
- -mc N, --max-context N [-1 ] maximum number of text context tokens to store
112
- -ml N, --max-len N [0 ] maximum segment length in characters
113
- -sow, --split-on-word [false ] split on word rather than on token
114
- -bo N, --best-of N [5 ] number of best candidates to keep
115
- -bs N, --beam-size N [5 ] beam size for beam search
116
- -wt N, --word-thold N [0.01 ] word timestamp probability threshold
117
- -et N, --entropy-thold N [2.40 ] entropy threshold for decoder fail
118
- -lpt N, --logprob-thold N [-1.00 ] log probability threshold for decoder fail
119
- -debug, --debug-mode [false ] enable debug mode (eg. dump log_mel)
120
- -tr, --translate [false ] translate from source language to english
121
- -di, --diarize [false ] stereo audio diarization
122
- -tdrz, --tinydiarize [false ] enable tinydiarize (requires a tdrz model)
123
- -nf, --no-fallback [false ] do not use temperature fallback while decoding
124
- -otxt, --output-txt [false ] output result in a text file
125
- -ovtt, --output-vtt [false ] output result in a vtt file
126
- -osrt, --output-srt [false ] output result in a srt file
127
- -olrc, --output-lrc [false ] output result in a lrc file
128
- -owts, --output-words [false ] output script for generating karaoke video
129
- -fp, --font-path [/System/Library/Fonts/Supplemental/Courier New Bold.ttf] path to a monospace font for karaoke video
130
- -ocsv, --output-csv [false ] output result in a CSV file
131
- -oj, --output-json [false ] output result in a JSON file
132
- -ojf, --output-json-full [false ] include more information in the JSON file
133
- -of FNAME, --output-file FNAME [ ] output file path (without file extension)
134
- -ps, --print-special [false ] print special tokens
135
- -pc, --print-colors [false ] print colors
136
- -pp, --print-progress [false ] print progress
137
- -nt, --no-timestamps [false ] do not print timestamps
138
- -l LANG, --language LANG [en ] spoken language ('auto' for auto-detect)
139
- -dl, --detect-language [false ] exit after automatically detecting language
140
- --prompt PROMPT [ ] initial prompt
141
- -m FNAME, --model FNAME [models/ggml-base.en.bin] model path
142
- -f FNAME, --file FNAME [ ] input WAV file path
143
- -oved D, --ov-e-device DNAME [CPU ] the OpenVINO device used for encode inference
144
- -ls, --log-score [false ] log best decoder scores of tokens
145
- -ng, --no-gpu [false ] disable GPU
146
-
147
-
148
- bash ./models/download-ggml-model.sh base.en
149
- Downloading ggml model base.en ...
150
- ggml-base.en.bin 100%[========================>] 141.11M 6.34MB/s in 24s
151
- Done! Model 'base.en' saved in 'models/ggml-base.en.bin'
152
- You can now use it like this:
153
-
154
- $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
155
-
156
-
157
- ===============================================
158
- Running base.en on all samples in ./samples ...
159
- ===============================================
160
-
161
- ----------------------------------------------
162
- [+] Running base.en on samples/jfk.wav ... (run 'ffplay samples/jfk.wav' to listen)
163
- ----------------------------------------------
164
-
165
- whisper_init_from_file: loading model from 'models/ggml-base.en.bin'
166
- whisper_model_load: loading model
167
- whisper_model_load: n_vocab = 51864
168
- whisper_model_load: n_audio_ctx = 1500
169
- whisper_model_load: n_audio_state = 512
170
- whisper_model_load: n_audio_head = 8
171
- whisper_model_load: n_audio_layer = 6
172
- whisper_model_load: n_text_ctx = 448
173
- whisper_model_load: n_text_state = 512
174
- whisper_model_load: n_text_head = 8
175
- whisper_model_load: n_text_layer = 6
176
- whisper_model_load: n_mels = 80
177
- whisper_model_load: f16 = 1
178
- whisper_model_load: type = 2
179
- whisper_model_load: mem required = 215.00 MB (+ 6.00 MB per decoder)
180
- whisper_model_load: kv self size = 5.25 MB
181
- whisper_model_load: kv cross size = 17.58 MB
182
- whisper_model_load: adding 1607 extra tokens
183
- whisper_model_load: model ctx = 140.60 MB
184
- whisper_model_load: model size = 140.54 MB
185
-
186
- system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
187
-
188
- main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
189
-
190
-
191
- [00:00:00.000 --> 00:00:11.000] And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.
192
-
193
-
194
- whisper_print_timings: fallbacks = 0 p / 0 h
195
- whisper_print_timings: load time = 113.81 ms
196
- whisper_print_timings: mel time = 15.40 ms
197
- whisper_print_timings: sample time = 11.58 ms / 27 runs ( 0.43 ms per run)
198
- whisper_print_timings: encode time = 266.60 ms / 1 runs ( 266.60 ms per run)
199
- whisper_print_timings: decode time = 66.11 ms / 27 runs ( 2.45 ms per run)
200
- whisper_print_timings: total time = 476.31 ms
201
- ```
202
-
203
- The command downloads the `base.en` model converted to the custom `ggml` format and runs inference on all `.wav` samples in the `samples` folder.
204
-
205
- For detailed usage instructions, run: `./main -h`
206
-
207
- Note that the [main](examples/main) example currently runs only with 16-bit WAV files, so make sure to convert your input before running the tool.
208
- For example, you can use `ffmpeg` like this:
209
-
210
- ```bash
211
- ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
212
- ```
213
-
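If you call the library directly instead of using the [main](examples/main) example, the same constraint applies in a slightly different form: `whisper_full()` expects 16 kHz, mono, 32-bit float samples in the `[-1, 1]` range. Here is a minimal sketch of converting raw signed 16-bit PCM (e.g. the data produced by the `ffmpeg` command above) into that format; the helper name is ours, not part of the library:

```cpp
#include <cstdint>
#include <vector>

// convert interleaved signed 16-bit PCM (already resampled to 16 kHz) to mono float in [-1, 1]
std::vector<float> pcm16_to_f32(const std::vector<int16_t> & pcm16, int n_channels) {
    const size_t n = pcm16.size() / n_channels;
    std::vector<float> pcmf32(n);
    for (size_t i = 0; i < n; ++i) {
        int32_t sum = 0;
        for (int c = 0; c < n_channels; ++c) {
            sum += pcm16[i*n_channels + c]; // average the channels for stereo input
        }
        pcmf32[i] = (sum / (float) n_channels) / 32768.0f;
    }
    return pcmf32;
}
```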
214
- ## More audio samples
215
-
216
- If you want some extra audio samples to play with, simply run:
217
-
218
- ```
219
- make samples
220
- ```
221
-
222
- This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via `ffmpeg`.
223
-
224
- You can download and run the other models as follows:
225
-
226
- ```
227
- make tiny.en
228
- make tiny
229
- make base.en
230
- make base
231
- make small.en
232
- make small
233
- make medium.en
234
- make medium
235
- make large-v1
236
- make large-v2
237
- make large-v3
238
- ```
239
-
240
- ## Memory usage
241
-
242
- | Model | Disk | Mem |
243
- | ------ | ------- | ------- |
244
- | tiny | 75 MiB | ~273 MB |
245
- | base | 142 MiB | ~388 MB |
246
- | small | 466 MiB | ~852 MB |
247
- | medium | 1.5 GiB | ~2.1 GB |
248
- | large | 2.9 GiB | ~3.9 GB |
249
-
250
- ## Quantization
251
-
252
- `whisper.cpp` supports integer quantization of the Whisper `ggml` models.
253
- Quantized models require less memory and disk space and, depending on the hardware, can be processed more efficiently.
254
-
255
- Here are the steps for creating and using a quantized model:
256
-
257
- ```bash
258
- # quantize a model with Q5_0 method
259
- make quantize
260
- ./quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
261
-
262
- # run the examples as usual, specifying the quantized model file
263
- ./main -m models/ggml-base.en-q5_0.bin ./samples/gb0.wav
264
- ```
265
-
266
- ## Core ML support
267
-
268
- On Apple Silicon devices, the Encoder inference can be executed on the Apple Neural Engine (ANE) via Core ML. This can result in a significant
269
- speed-up - more than 3x faster compared with CPU-only execution. Here are the instructions for generating a Core ML model and using it with `whisper.cpp`:
270
-
271
- - Install Python dependencies needed for the creation of the Core ML model:
272
-
273
- ```bash
274
- pip install ane_transformers
275
- pip install openai-whisper
276
- pip install coremltools
277
- ```
278
-
279
- - To ensure `coremltools` operates correctly, please confirm that [Xcode](https://developer.apple.com/xcode/) is installed and execute `xcode-select --install` to install the command-line tools.
280
- - Python 3.10 is recommended.
281
- - macOS Sonoma (version 14) or newer is recommended, as older versions of macOS might experience issues with transcription hallucination.
282
- - [OPTIONAL] It is recommended to utilize a Python version management system, such as [Miniconda](https://docs.conda.io/en/latest/miniconda.html) for this step:
283
- - To create an environment, use: `conda create -n py310-whisper python=3.10 -y`
284
- - To activate the environment, use: `conda activate py310-whisper`
285
-
286
- - Generate a Core ML model. For example, to generate a `base.en` model, use:
287
-
288
- ```bash
289
- ./models/generate-coreml-model.sh base.en
290
- ```
291
-
292
- This will generate the folder `models/ggml-base.en-encoder.mlmodelc`
293
-
294
- - Build `whisper.cpp` with Core ML support:
295
-
296
- ```bash
297
- # using Makefile
298
- make clean
299
- WHISPER_COREML=1 make -j
300
-
301
- # using CMake
302
- cmake -B build -DWHISPER_COREML=1
303
- cmake --build build -j --config Release
304
- ```
305
-
306
- - Run the examples as usual. For example:
307
-
308
- ```text
309
- $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
310
-
311
- ...
312
-
313
- whisper_init_state: loading Core ML model from 'models/ggml-base.en-encoder.mlmodelc'
314
- whisper_init_state: first run on a device may take a while ...
315
- whisper_init_state: Core ML model loaded
316
-
317
- system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | COREML = 1 |
318
-
319
- ...
320
- ```
321
-
322
- The first run on a device is slow, since the ANE service compiles the Core ML model to some device-specific format.
323
- Subsequent runs are faster.
324
-
325
- For more information about the Core ML implementation, please refer to PR [#566](https://github.com/ggerganov/whisper.cpp/pull/566).
326
-
327
- ## OpenVINO support
328
-
329
- On platforms that support [OpenVINO](https://github.com/openvinotoolkit/openvino), the Encoder inference can be executed
330
- on OpenVINO-supported devices including x86 CPUs and Intel GPUs (integrated & discrete).
331
-
332
- This can result in significant speedup in encoder performance. Here are the instructions for generating the OpenVINO model and using it with `whisper.cpp`:
333
-
334
- - First, set up a Python virtual environment and install the Python dependencies. Python 3.10 is recommended.
335
-
336
- Windows:
337
-
338
- ```powershell
339
- cd models
340
- python -m venv openvino_conv_env
341
- openvino_conv_env\Scripts\activate
342
- python -m pip install --upgrade pip
343
- pip install -r requirements-openvino.txt
344
- ```
345
-
346
- Linux and macOS:
347
-
348
- ```bash
349
- cd models
350
- python3 -m venv openvino_conv_env
351
- source openvino_conv_env/bin/activate
352
- python -m pip install --upgrade pip
353
- pip install -r requirements-openvino.txt
354
- ```
355
-
356
- - Generate an OpenVINO encoder model. For example, to generate a `base.en` model, use:
357
-
358
- ```
359
- python convert-whisper-to-openvino.py --model base.en
360
- ```
361
-
362
- This will produce the `ggml-base.en-encoder-openvino.xml/.bin` IR model files. It is recommended to place these in the same folder as the `ggml` models, as that
363
- is the default location that the OpenVINO extension will search at runtime.
364
-
365
- - Build `whisper.cpp` with OpenVINO support:
366
-
367
- Download OpenVINO package from [release page](https://github.com/openvinotoolkit/openvino/releases). The recommended version to use is [2023.0.0](https://github.com/openvinotoolkit/openvino/releases/tag/2023.0.0).
368
-
369
- After downloading and extracting the package onto your development system, set up the required environment by sourcing the setupvars script. For example:
370
-
371
- Linux:
372
-
373
- ```bash
374
- source /path/to/l_openvino_toolkit_ubuntu22_2023.0.0.10926.b4452d56304_x86_64/setupvars.sh
375
- ```
376
-
377
- Windows (cmd):
378
-
379
- ```powershell
380
- C:\Path\To\w_openvino_toolkit_windows_2023.0.0.10926.b4452d56304_x86_64\setupvars.bat
381
- ```
382
-
383
- And then build the project using cmake:
384
-
385
- ```bash
386
- cmake -B build -DWHISPER_OPENVINO=1
387
- cmake --build build -j --config Release
388
- ```
389
-
390
- - Run the examples as usual. For example:
391
-
392
- ```text
393
- $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
394
-
395
- ...
396
-
397
- whisper_ctx_init_openvino_encoder: loading OpenVINO model from 'models/ggml-base.en-encoder-openvino.xml'
398
- whisper_ctx_init_openvino_encoder: first run on a device may take a while ...
399
- whisper_openvino_init: path_model = models/ggml-base.en-encoder-openvino.xml, device = GPU, cache_dir = models/ggml-base.en-encoder-openvino-cache
400
- whisper_ctx_init_openvino_encoder: OpenVINO model loaded
401
-
402
- system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | COREML = 0 | OPENVINO = 1 |
403
-
404
- ...
405
- ```
406
-
407
- The first run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This device-specific blob will get
408
- cached for the next run.
409
-
410
- For more information about the OpenVINO implementation, please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
411
-
412
- ## NVIDIA GPU support
413
-
414
- With NVIDIA cards the processing of the models is done efficiently on the GPU via cuBLAS and custom CUDA kernels.
415
- First, make sure you have installed `cuda`: https://developer.nvidia.com/cuda-downloads
416
-
417
- Now build `whisper.cpp` with CUDA support:
418
-
419
- ```
420
- make clean
421
- GGML_CUDA=1 make -j
422
- ```
423
-
424
- ## BLAS CPU support via OpenBLAS
425
-
426
- Encoder processing can be accelerated on the CPU via OpenBLAS.
427
- First, make sure you have installed `openblas`: https://www.openblas.net/
428
-
429
- Now build `whisper.cpp` with OpenBLAS support:
430
-
431
- ```
432
- make clean
433
- GGML_OPENBLAS=1 make -j
434
- ```
435
-
436
- ## BLAS CPU support via Intel MKL
437
-
438
- Encoder processing can be accelerated on the CPU via the BLAS compatible interface of Intel's Math Kernel Library.
439
- First, make sure you have installed Intel's MKL runtime and development packages: https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl-download.html
440
-
441
- Now build `whisper.cpp` with Intel MKL BLAS support:
442
-
443
- ```
444
- source /opt/intel/oneapi/setvars.sh
445
- mkdir build
446
- cd build
447
- cmake -DWHISPER_MKL=ON ..
448
- WHISPER_MKL=1 make -j
449
- ```
450
-
451
- ## Docker
452
-
453
- ### Prerequisites
454
-
455
- - Docker must be installed and running on your system.
456
- - Create a folder to store big models and intermediate files (e.g. /whisper/models)
457
-
458
- ### Images
459
-
460
- We have two Docker images available for this project:
461
-
462
- 1. `ghcr.io/ggerganov/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
463
- 2. `ghcr.io/ggerganov/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
464
-
465
- ### Usage
466
-
467
- ```shell
468
- # download model and persist it in a local folder
469
- docker run -it --rm \
470
- -v path/to/models:/models \
471
- whisper.cpp:main "./models/download-ggml-model.sh base /models"
472
- # transcribe an audio file
473
- docker run -it --rm \
474
- -v path/to/models:/models \
475
- -v path/to/audios:/audios \
476
- whisper.cpp:main "./main -m /models/ggml-base.bin -f /audios/jfk.wav"
477
- # transcribe an audio file in samples folder
478
- docker run -it --rm \
479
- -v path/to/models:/models \
480
- whisper.cpp:main "./main -m /models/ggml-base.bin -f ./samples/jfk.wav"
481
- ```
482
-
483
- ## Installing with Conan
484
-
485
- You can install pre-built binaries for whisper.cpp or build it from source using [Conan](https://conan.io/). Use the following command:
486
-
487
- ```
488
- conan install --requires="whisper-cpp/[*]" --build=missing
489
- ```
490
-
491
- For detailed instructions on how to use Conan, please refer to the [Conan documentation](https://docs.conan.io/2/).
492
-
493
- ## Limitations
494
-
495
- - Inference only
496
-
497
- ## Another example
498
-
499
- Here is another example of transcribing a [3:24 min speech](https://upload.wikimedia.org/wikipedia/commons/1/1f/George_W_Bush_Columbia_FINAL.ogg)
500
- in about half a minute on a MacBook M1 Pro, using the `medium.en` model:
501
-
502
- <details>
503
- <summary>Expand to see the result</summary>
504
-
505
- ```text
506
- $ ./main -m models/ggml-medium.en.bin -f samples/gb1.wav -t 8
507
-
508
- whisper_init_from_file: loading model from 'models/ggml-medium.en.bin'
509
- whisper_model_load: loading model
510
- whisper_model_load: n_vocab = 51864
511
- whisper_model_load: n_audio_ctx = 1500
512
- whisper_model_load: n_audio_state = 1024
513
- whisper_model_load: n_audio_head = 16
514
- whisper_model_load: n_audio_layer = 24
515
- whisper_model_load: n_text_ctx = 448
516
- whisper_model_load: n_text_state = 1024
517
- whisper_model_load: n_text_head = 16
518
- whisper_model_load: n_text_layer = 24
519
- whisper_model_load: n_mels = 80
520
- whisper_model_load: f16 = 1
521
- whisper_model_load: type = 4
522
- whisper_model_load: mem required = 1720.00 MB (+ 43.00 MB per decoder)
523
- whisper_model_load: kv self size = 42.00 MB
524
- whisper_model_load: kv cross size = 140.62 MB
525
- whisper_model_load: adding 1607 extra tokens
526
- whisper_model_load: model ctx = 1462.35 MB
527
- whisper_model_load: model size = 1462.12 MB
528
-
529
- system_info: n_threads = 8 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
530
-
531
- main: processing 'samples/gb1.wav' (3179750 samples, 198.7 sec), 8 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
532
-
533
-
534
- [00:00:00.000 --> 00:00:08.000] My fellow Americans, this day has brought terrible news and great sadness to our country.
535
- [00:00:08.000 --> 00:00:17.000] At nine o'clock this morning, Mission Control in Houston lost contact with our Space Shuttle Columbia.
536
- [00:00:17.000 --> 00:00:23.000] A short time later, debris was seen falling from the skies above Texas.
537
- [00:00:23.000 --> 00:00:29.000] The Columbia's lost. There are no survivors.
538
- [00:00:29.000 --> 00:00:32.000] On board was a crew of seven.
539
- [00:00:32.000 --> 00:00:39.000] Colonel Rick Husband, Lieutenant Colonel Michael Anderson, Commander Laurel Clark,
540
- [00:00:39.000 --> 00:00:48.000] Captain David Brown, Commander William McCool, Dr. Kultna Shavla, and Ilan Ramon,
541
- [00:00:48.000 --> 00:00:52.000] a colonel in the Israeli Air Force.
542
- [00:00:52.000 --> 00:00:58.000] These men and women assumed great risk in the service to all humanity.
543
- [00:00:58.000 --> 00:01:03.000] In an age when space flight has come to seem almost routine,
544
- [00:01:03.000 --> 00:01:07.000] it is easy to overlook the dangers of travel by rocket
545
- [00:01:07.000 --> 00:01:12.000] and the difficulties of navigating the fierce outer atmosphere of the Earth.
546
- [00:01:12.000 --> 00:01:18.000] These astronauts knew the dangers, and they faced them willingly,
547
- [00:01:18.000 --> 00:01:23.000] knowing they had a high and noble purpose in life.
548
- [00:01:23.000 --> 00:01:31.000] Because of their courage and daring and idealism, we will miss them all the more.
549
- [00:01:31.000 --> 00:01:36.000] All Americans today are thinking as well of the families of these men and women
550
- [00:01:36.000 --> 00:01:40.000] who have been given this sudden shock and grief.
551
- [00:01:40.000 --> 00:01:45.000] You're not alone. Our entire nation grieves with you,
552
- [00:01:45.000 --> 00:01:52.000] and those you love will always have the respect and gratitude of this country.
553
- [00:01:52.000 --> 00:01:56.000] The cause in which they died will continue.
554
- [00:01:56.000 --> 00:02:04.000] Mankind is led into the darkness beyond our world by the inspiration of discovery
555
- [00:02:04.000 --> 00:02:11.000] and the longing to understand. Our journey into space will go on.
556
- [00:02:11.000 --> 00:02:16.000] In the skies today, we saw destruction and tragedy.
557
- [00:02:16.000 --> 00:02:22.000] Yet farther than we can see, there is comfort and hope.
558
- [00:02:22.000 --> 00:02:29.000] In the words of the prophet Isaiah, "Lift your eyes and look to the heavens
559
- [00:02:29.000 --> 00:02:35.000] who created all these. He who brings out the starry hosts one by one
560
- [00:02:35.000 --> 00:02:39.000] and calls them each by name."
561
- [00:02:39.000 --> 00:02:46.000] Because of His great power and mighty strength, not one of them is missing.
562
- [00:02:46.000 --> 00:02:55.000] The same Creator who names the stars also knows the names of the seven souls we mourn today.
563
- [00:02:55.000 --> 00:03:01.000] The crew of the shuttle Columbia did not return safely to earth,
564
- [00:03:01.000 --> 00:03:05.000] yet we can pray that all are safely home.
565
- [00:03:05.000 --> 00:03:13.000] May God bless the grieving families, and may God continue to bless America.
566
- [00:03:13.000 --> 00:03:19.000] [Silence]
567
-
568
-
569
- whisper_print_timings: fallbacks = 1 p / 0 h
570
- whisper_print_timings: load time = 569.03 ms
571
- whisper_print_timings: mel time = 146.85 ms
572
- whisper_print_timings: sample time = 238.66 ms / 553 runs ( 0.43 ms per run)
573
- whisper_print_timings: encode time = 18665.10 ms / 9 runs ( 2073.90 ms per run)
574
- whisper_print_timings: decode time = 13090.93 ms / 549 runs ( 23.85 ms per run)
575
- whisper_print_timings: total time = 32733.52 ms
576
- ```
577
-
578
- </details>
579
-
580
- ## Real-time audio input example
581
-
582
- This is a naive example of performing real-time inference on audio from your microphone.
583
- The [stream](examples/stream) tool samples the audio every half second and runs the transcription continuously.
584
- More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).
585
-
586
- ```bash
587
- make stream
588
- ./stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
589
- ```
590
-
591
- https://user-images.githubusercontent.com/1991296/194935793-76afede7-cfa8-48d8-a80f-28ba83be7d09.mp4
592
-
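Under the hood the idea is simple: keep appending captured audio to a buffer and periodically re-run `whisper_full` on the most recent window. The sketch below illustrates that idea only; the capture helper is hypothetical, and the real [stream](examples/stream) example captures audio with SDL2 and does additional bookkeeping (such as carrying over audio and context from the previous iteration):

```cpp
#include "whisper.h"

#include <cstdio>
#include <vector>

// hypothetical capture helper - returns ~500 ms of new 16 kHz mono float audio
std::vector<float> get_audio_chunk_500ms();

void naive_stream(struct whisper_context * ctx) {
    std::vector<float> window; // most recent audio

    while (true) {
        const std::vector<float> chunk = get_audio_chunk_500ms();
        window.insert(window.end(), chunk.begin(), chunk.end());

        // keep only the last ~5 seconds (16000 samples per second)
        const size_t max_samples = 5*16000;
        if (window.size() > max_samples) {
            window.erase(window.begin(), window.end() - max_samples);
        }

        struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
        wparams.single_segment = true; // one segment per window keeps the output simple

        if (whisper_full(ctx, wparams, window.data(), (int) window.size()) == 0 &&
            whisper_full_n_segments(ctx) > 0) {
            printf("%s\n", whisper_full_get_segment_text(ctx, 0));
        }
    }
}
```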
593
- ## Confidence color-coding
594
-
595
- Adding the `--print-colors` argument will print the transcribed text using an experimental color coding strategy
596
- to highlight words with high or low confidence:
597
-
598
- ```bash
599
- ./main -m models/ggml-base.en.bin -f samples/gb0.wav --print-colors
600
- ```
601
-
602
- <img width="965" alt="image" src="https://user-images.githubusercontent.com/1991296/197356445-311c8643-9397-4e5e-b46e-0b4b4daa2530.png">
603
-
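The colors are derived from per-token probabilities, and the same information is available programmatically through the C-style API. A small sketch, to be called after `whisper_full()` has succeeded (whisper.h remains the authoritative reference for these functions):

```cpp
#include "whisper.h"

#include <cstdio>

// print every token together with its probability (a rough per-word confidence measure)
void print_token_confidence(struct whisper_context * ctx) {
    for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
        for (int j = 0; j < whisper_full_n_tokens(ctx, i); ++j) {
            printf("%s (p = %.2f)\n",
                whisper_full_get_token_text(ctx, i, j),
                whisper_full_get_token_p(ctx, i, j));
        }
    }
}
```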
604
- ## Controlling the length of the generated text segments (experimental)
605
-
606
- For example, to limit the line length to a maximum of 16 characters, simply add `-ml 16`:
607
-
608
- ```text
609
- $ ./main -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 16
610
-
611
- whisper_model_load: loading model from './models/ggml-base.en.bin'
612
- ...
613
- system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |
614
-
615
- main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
616
-
617
- [00:00:00.000 --> 00:00:00.850] And so my
618
- [00:00:00.850 --> 00:00:01.590] fellow
619
- [00:00:01.590 --> 00:00:04.140] Americans, ask
620
- [00:00:04.140 --> 00:00:05.660] not what your
621
- [00:00:05.660 --> 00:00:06.840] country can do
622
- [00:00:06.840 --> 00:00:08.430] for you, ask
623
- [00:00:08.430 --> 00:00:09.440] what you can do
624
- [00:00:09.440 --> 00:00:10.020] for your
625
- [00:00:10.020 --> 00:00:11.000] country.
626
- ```
627
-
628
- ## Word-level timestamps (experimental)
629
-
630
- The `--max-len` argument can be used to obtain word-level timestamps. Simply use `-ml 1`:
631
-
632
- ```text
633
- $ ./main -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 1
634
-
635
- whisper_model_load: loading model from './models/ggml-base.en.bin'
636
- ...
637
- system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |
638
-
639
- main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
640
-
641
- [00:00:00.000 --> 00:00:00.320]
642
- [00:00:00.320 --> 00:00:00.370] And
643
- [00:00:00.370 --> 00:00:00.690] so
644
- [00:00:00.690 --> 00:00:00.850] my
645
- [00:00:00.850 --> 00:00:01.590] fellow
646
- [00:00:01.590 --> 00:00:02.850] Americans
647
- [00:00:02.850 --> 00:00:03.300] ,
648
- [00:00:03.300 --> 00:00:04.140] ask
649
- [00:00:04.140 --> 00:00:04.990] not
650
- [00:00:04.990 --> 00:00:05.410] what
651
- [00:00:05.410 --> 00:00:05.660] your
652
- [00:00:05.660 --> 00:00:06.260] country
653
- [00:00:06.260 --> 00:00:06.600] can
654
- [00:00:06.600 --> 00:00:06.840] do
655
- [00:00:06.840 --> 00:00:07.010] for
656
- [00:00:07.010 --> 00:00:08.170] you
657
- [00:00:08.170 --> 00:00:08.190] ,
658
- [00:00:08.190 --> 00:00:08.430] ask
659
- [00:00:08.430 --> 00:00:08.910] what
660
- [00:00:08.910 --> 00:00:09.040] you
661
- [00:00:09.040 --> 00:00:09.320] can
662
- [00:00:09.320 --> 00:00:09.440] do
663
- [00:00:09.440 --> 00:00:09.760] for
664
- [00:00:09.760 --> 00:00:10.020] your
665
- [00:00:10.020 --> 00:00:10.510] country
666
- [00:00:10.510 --> 00:00:11.000] .
667
- ```
668
-
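The same word-level granularity can be requested through the C-style API by enabling token timestamps and limiting the segment length. A sketch, assuming the `token_timestamps`, `max_len` and `split_on_word` fields are available in your version of `whisper_full_params`:

```cpp
#include "whisper.h"

#include <cstdio>

// transcribe with (roughly) one word per segment and print the timestamps
void transcribe_word_level(struct whisper_context * ctx, const float * samples, int n_samples) {
    struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    wparams.token_timestamps = true; // compute per-token timestamps
    wparams.max_len          = 1;    // equivalent of `-ml 1`
    wparams.split_on_word    = true;

    if (whisper_full(ctx, wparams, samples, n_samples) == 0) {
        for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
            // segment timestamps are in units of 10 ms
            printf("[%6lld --> %6lld] %s\n",
                (long long) whisper_full_get_segment_t0(ctx, i),
                (long long) whisper_full_get_segment_t1(ctx, i),
                whisper_full_get_segment_text(ctx, i));
        }
    }
}
```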
669
- ## Speaker segmentation via tinydiarize (experimental)
670
-
671
- More information about this approach is available here: https://github.com/ggerganov/whisper.cpp/pull/1058
672
-
673
- Sample usage:
674
-
675
- ```bash
676
- # download a tinydiarize compatible model
677
- ./models/download-ggml-model.sh small.en-tdrz
678
-
679
- # run as usual, adding the "-tdrz" command-line argument
680
- ./main -f ./samples/a13.wav -m ./models/ggml-small.en-tdrz.bin -tdrz
681
- ...
682
- main: processing './samples/a13.wav' (480000 samples, 30.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, tdrz = 1, timestamps = 1 ...
683
- ...
684
- [00:00:00.000 --> 00:00:03.800] Okay Houston, we've had a problem here. [SPEAKER_TURN]
685
- [00:00:03.800 --> 00:00:06.200] This is Houston. Say again please. [SPEAKER_TURN]
686
- [00:00:06.200 --> 00:00:08.260] Uh Houston we've had a problem.
687
- [00:00:08.260 --> 00:00:11.320] We've had a main beam up on a volt. [SPEAKER_TURN]
688
- [00:00:11.320 --> 00:00:13.820] Roger main beam interval. [SPEAKER_TURN]
689
- [00:00:13.820 --> 00:00:15.100] Uh uh [SPEAKER_TURN]
690
- [00:00:15.100 --> 00:00:18.020] So okay stand, by thirteen we're looking at it. [SPEAKER_TURN]
691
- [00:00:18.020 --> 00:00:25.740] Okay uh right now uh Houston the uh voltage is uh is looking good um.
692
- [00:00:27.620 --> 00:00:29.940] And we had a a pretty large bank or so.
693
- ```
694
-
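The speaker-turn information is also exposed through the C-style API. A sketch, assuming `tdrz_enable` and `whisper_full_get_segment_speaker_turn_next` are present in your version of whisper.h (they require a tdrz-capable model, just like the `-tdrz` flag):

```cpp
#include "whisper.h"

#include <cstdio>

// transcribe with tinydiarize enabled and mark speaker turns
void print_with_speaker_turns(struct whisper_context * ctx, const float * samples, int n_samples) {
    struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    wparams.tdrz_enable = true; // equivalent of the -tdrz flag

    if (whisper_full(ctx, wparams, samples, n_samples) == 0) {
        for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
            printf("%s%s\n",
                whisper_full_get_segment_text(ctx, i),
                whisper_full_get_segment_speaker_turn_next(ctx, i) ? " [SPEAKER_TURN]" : "");
        }
    }
}
```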
695
- ## Karaoke-style movie generation (experimental)
696
-
697
- The [main](examples/main) example provides support for output of karaoke-style movies, where the
698
- currently pronounced word is highlighted. Use the `-owts` argument and run the generated bash script.
699
- This requires `ffmpeg` to be installed.
700
-
701
- Here are a few _"typical"_ examples:
702
-
703
- ```bash
704
- ./main -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -owts
705
- source ./samples/jfk.wav.wts
706
- ffplay ./samples/jfk.wav.mp4
707
- ```
708
-
709
- https://user-images.githubusercontent.com/1991296/199337465-dbee4b5e-9aeb-48a3-b1c6-323ac4db5b2c.mp4
710
-
711
- ---
712
-
713
- ```bash
714
- ./main -m ./models/ggml-base.en.bin -f ./samples/mm0.wav -owts
715
- source ./samples/mm0.wav.wts
716
- ffplay ./samples/mm0.wav.mp4
717
- ```
718
-
719
- https://user-images.githubusercontent.com/1991296/199337504-cc8fd233-0cb7-4920-95f9-4227de3570aa.mp4
720
-
721
- ---
722
-
723
- ```bash
724
- ./main -m ./models/ggml-base.en.bin -f ./samples/gb0.wav -owts
725
- source ./samples/gb0.wav.wts
726
- ffplay ./samples/gb0.wav.mp4
727
- ```
728
-
729
- https://user-images.githubusercontent.com/1991296/199337538-b7b0c7a3-2753-4a88-a0cd-f28a317987ba.mp4
730
-
731
- ---
732
-
733
- ## Video comparison of different models
734
-
735
- Use the [scripts/bench-wts.sh](https://github.com/ggerganov/whisper.cpp/blob/master/scripts/bench-wts.sh) script to generate a video in the following format:
736
-
737
- ```bash
738
- ./scripts/bench-wts.sh samples/jfk.wav
739
- ffplay ./samples/jfk.wav.all.mp4
740
- ```
741
-
742
- https://user-images.githubusercontent.com/1991296/223206245-2d36d903-cf8e-4f09-8c3b-eb9f9c39d6fc.mp4
743
-
744
- ---
745
-
746
- ## Benchmarks
747
-
748
- In order to have an objective comparison of the performance of the inference across different system configurations,
749
- use the [bench](examples/bench) tool. The tool simply runs the Encoder part of the model and prints how much time it
750
- took to execute it. The results are summarized in the following GitHub issue:
751
-
752
- [Benchmark results](https://github.com/ggerganov/whisper.cpp/issues/89)
753
-
754
- Additionally, a script to run whisper.cpp with different models and audio files is provided: [bench.py](scripts/bench.py).
755
-
756
- You can run it with the following command; by default, it will run against any standard model in the models folder.
757
-
758
- ```bash
759
- python3 scripts/bench.py -f samples/jfk.wav -t 2,4,8 -p 1,2
760
- ```
761
-
762
- It is written in Python with the intention of being easy to modify and extend for your benchmarking use case.
763
-
764
- It outputs a CSV file with the benchmark results.
765
-
766
- ## `ggml` format
767
-
768
- The original models are converted to a custom binary format. This allows everything needed to be packed into a single file:
769
-
770
- - model parameters
771
- - mel filters
772
- - vocabulary
773
- - weights
774
-
775
- You can download the converted models using the [models/download-ggml-model.sh](models/download-ggml-model.sh) script
776
- or manually from here:
777
-
778
- - https://huggingface.co/ggerganov/whisper.cpp
779
- - https://ggml.ggerganov.com
780
-
781
- For more details, see the conversion script [models/convert-pt-to-ggml.py](models/convert-pt-to-ggml.py) or [models/README.md](models/README.md).
782
-
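For illustration, here is a small sketch that peeks at the header of such a file. The magic value and the hyper-parameter layout used below are assumptions based on the model loader and the conversion script, so treat those sources as authoritative:

```cpp
#include <cstdint>
#include <cstdio>

// peek at the header of a ggml Whisper model file (layout assumed - see models/convert-pt-to-ggml.py)
int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.bin>\n", argv[0]);
        return 1;
    }

    FILE * f = fopen(argv[1], "rb");
    if (!f) {
        return 1;
    }

    uint32_t magic = 0;
    if (fread(&magic, sizeof(magic), 1, f) != 1 || magic != 0x67676d6c) { // "ggml"
        fprintf(stderr, "not a ggml model file\n");
        fclose(f);
        return 1;
    }

    // assumed order: n_vocab, n_audio_ctx, n_audio_state, n_audio_head, n_audio_layer,
    //                n_text_ctx, n_text_state, n_text_head, n_text_layer, n_mels, ftype
    int32_t hparams[11] = {0};
    if (fread(hparams, sizeof(int32_t), 11, f) == 11) {
        printf("n_vocab = %d, n_audio_layer = %d, n_mels = %d\n", hparams[0], hparams[4], hparams[9]);
    }

    fclose(f);
    return 0;
}
```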
783
- ## [Bindings](https://github.com/ggerganov/whisper.cpp/discussions/categories/bindings)
784
-
785
- - [x] Rust: [tazz4843/whisper-rs](https://github.com/tazz4843/whisper-rs) | [#310](https://github.com/ggerganov/whisper.cpp/discussions/310)
786
- - [x] JavaScript: [bindings/javascript](bindings/javascript) | [#309](https://github.com/ggerganov/whisper.cpp/discussions/309)
787
- - React Native (iOS / Android): [whisper.rn](https://github.com/mybigday/whisper.rn)
788
- - [x] Go: [bindings/go](bindings/go) | [#312](https://github.com/ggerganov/whisper.cpp/discussions/312)
789
- - [x] Java:
790
- - [GiviMAD/whisper-jni](https://github.com/GiviMAD/whisper-jni)
791
- - [x] Ruby: [bindings/ruby](bindings/ruby) | [#507](https://github.com/ggerganov/whisper.cpp/discussions/507)
792
- - [x] Objective-C / Swift: [ggerganov/whisper.spm](https://github.com/ggerganov/whisper.spm) | [#313](https://github.com/ggerganov/whisper.cpp/discussions/313)
793
- - [exPHAT/SwiftWhisper](https://github.com/exPHAT/SwiftWhisper)
794
- - [x] .NET: | [#422](https://github.com/ggerganov/whisper.cpp/discussions/422)
795
- - [sandrohanea/whisper.net](https://github.com/sandrohanea/whisper.net)
796
- - [NickDarvey/whisper](https://github.com/NickDarvey/whisper)
797
- - [x] Python: | [#9](https://github.com/ggerganov/whisper.cpp/issues/9)
798
- - [stlukey/whispercpp.py](https://github.com/stlukey/whispercpp.py) (Cython)
799
- - [AIWintermuteAI/whispercpp](https://github.com/AIWintermuteAI/whispercpp) (Updated fork of aarnphm/whispercpp)
800
- - [aarnphm/whispercpp](https://github.com/aarnphm/whispercpp) (Pybind11)
801
- - [x] R: [bnosac/audio.whisper](https://github.com/bnosac/audio.whisper)
802
- - [x] Unity: [macoron/whisper.unity](https://github.com/Macoron/whisper.unity)
803
-
804
- ## Examples
805
-
806
- There are various examples of using the library for different projects in the [examples](examples) folder.
807
- Some of the examples are even ported to run in the browser using WebAssembly. Check them out!
808
-
809
- | Example | Web | Description |
810
- | --------------------------------------------------- | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
811
- | [main](examples/main) | [whisper.wasm](examples/whisper.wasm) | Tool for translating and transcribing audio using Whisper |
812
- | [bench](examples/bench) | [bench.wasm](examples/bench.wasm) | Benchmark the performance of Whisper on your machine |
813
- | [stream](examples/stream) | [stream.wasm](examples/stream.wasm) | Real-time transcription of raw microphone capture |
814
- | [command](examples/command) | [command.wasm](examples/command.wasm) | Basic voice assistant example for receiving voice commands from the mic |
815
- | [wchess](examples/wchess) | [wchess.wasm](examples/wchess) | Voice-controlled chess |
816
- | [talk](examples/talk) | [talk.wasm](examples/talk.wasm) | Talk with a GPT-2 bot |
817
- | [talk-llama](examples/talk-llama) | | Talk with a LLaMA bot |
818
- | [whisper.objc](examples/whisper.objc) | | iOS mobile application using whisper.cpp |
819
- | [whisper.swiftui](examples/whisper.swiftui) | | SwiftUI iOS / macOS application using whisper.cpp |
820
- | [whisper.android](examples/whisper.android) | | Android mobile application using whisper.cpp |
821
- | [whisper.nvim](examples/whisper.nvim) | | Speech-to-text plugin for Neovim |
822
- | [generate-karaoke.sh](examples/generate-karaoke.sh) | | Helper script to easily [generate a karaoke video](https://youtu.be/uj7hVta4blM) of raw audio capture |
823
- | [livestream.sh](examples/livestream.sh) | | [Livestream audio transcription](https://github.com/ggerganov/whisper.cpp/issues/185) |
824
- | [yt-wsp.sh](examples/yt-wsp.sh) | | Download + transcribe and/or translate any VOD [(original)](https://gist.github.com/DaniruKun/96f763ec1a037cc92fe1a059b643b818) |
825
- | [server](examples/server) | | HTTP transcription server with OAI-like API |
826
-
827
- ## [Discussions](https://github.com/ggerganov/whisper.cpp/discussions)
828
-
829
- If you have any kind of feedback about this project, feel free to use the Discussions section and open a new topic.
830
- You can use the [Show and tell](https://github.com/ggerganov/whisper.cpp/discussions/categories/show-and-tell) category
831
- to share your own projects that use `whisper.cpp`. If you have a question, make sure to check the
832
- [Frequently asked questions (#126)](https://github.com/ggerganov/whisper.cpp/discussions/126) discussion.