Spaces: Running on Zero

liuyang committed
Commit · e91cdbe
1 Parent(s): 4042b26

requirements

requirements.txt CHANGED (+3 -3)
@@ -1,17 +1,17 @@
 # 1. Do NOT pin torch/torchaudio here – keep the CUDA builds that come with the image
-torch==2.
+torch==2.3.1
 transformers==4.48.0
 # Removed flash-attention since faster-whisper handles this internally
 # https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.0.8/flash_attn-2.7.4.post1+cu126torch2.4-cp310-cp310-linux_x86_64.whl
 pydantic==2.10.6

 # 2. Main whisper model
-faster-whisper
+faster-whisper==1.1.1

 # 3. Extra libs your app really needs
 gradio==5.0.1
 spaces>=0.19.0
-pyannote.audio
+pyannote.audio==3.3.1
 pandas>=1.5.0
 numpy>=1.24.0
 librosa>=0.10.0
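The commit above converts three loose requirements into exact `==` pins, which keeps the Space's build reproducible. As a minimal illustration (the helper name and the inline sample are hypothetical, not part of this commit), a stdlib-only sketch that extracts such strict pins from a requirements file might look like:

```python
# Parse exact "name==version" pins out of requirements.txt content.
# Helper name and sample text are illustrative only.
import re

PIN_RE = re.compile(r"^\s*([A-Za-z0-9_.\-]+)\s*==\s*(\S+)\s*$")

def exact_pins(requirements_text: str) -> dict[str, str]:
    """Return {package: version} for every strict '==' pin, skipping
    comments and non-exact specifiers like '>='."""
    pins = {}
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0]  # drop comments
        m = PIN_RE.match(line)
        if m:
            pins[m.group(1)] = m.group(2)
    return pins

sample = """\
torch==2.3.1
faster-whisper==1.1.1
spaces>=0.19.0
pyannote.audio==3.3.1
"""
print(exact_pins(sample))
# {'torch': '2.3.1', 'faster-whisper': '1.1.1', 'pyannote.audio': '3.3.1'}
```

Note that `spaces>=0.19.0` is deliberately not reported: a `>=` range is a floor, not a reproducible pin, which is exactly the distinction this commit tightens.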