sunheycho committed · 812f50d
Parent(s): 3432460
Reduce HF Spaces build time: pin CPU-only torch/torchvision; drop llama-cpp, accelerate/bitsandbytes; constrain chromadb <1.0

Changed files: requirements.txt (+11 -7)
@@ -1,6 +1,8 @@
 # Core dependencies
 gradio>=4.0.0
-
+# Pin to a CPU-only friendly PyTorch to avoid huge CUDA deps on Spaces
+torch==2.3.1
+torchvision==0.18.1
 transformers>=4.30.0
 Pillow>=9.0.0
 
@@ -26,15 +28,15 @@ requests>=2.31.0
 
 # Llama 4 integration
 # Use Hugging Face accelerate; "accelerator" was a typo and can cause install issues
-accelerate
-#
-bitsandbytes>=0.41.0; platform_system == 'Linux' and platform_machine == 'x86_64'
+# accelerate/bitsandbytes are not required in this Space; omit to reduce build size
+# accelerate>=0.20.0
+# bitsandbytes>=0.41.0; platform_system == 'Linux' and platform_machine == 'x86_64'
 sentencepiece>=0.1.99
 protobuf>=4.23.0
 
 # Vector DB and image similarity search
 chroma-hnswlib>=0.7.3
-chromadb>=0.4.18
+chromadb>=0.4.18,<1.0.0
 scipy>=1.10.0,<1.14.0; python_version < '3.12'
 scipy>=1.14.1,<1.15.0; python_version >= '3.12'
 open-clip-torch>=2.20.0
@@ -50,5 +52,7 @@ langchain-openai>=0.1.16
 langchain-community>=0.2.6
 langchain-experimental>=0.0.60
 
-# llama.cpp bindings for loading local GGUF (quantized Q4) models
-
+# llama.cpp bindings for loading local GGUF (quantized Q4) models — optional.
+# Removed from default install to avoid long native builds on Spaces.
+# llama-cpp-python>=0.2.90
+
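A note on the torch pin above: bare `torch==2.3.1` on PyPI still resolves to the CUDA-enabled Linux wheel. If the Space's build still pulls CUDA dependencies, one common option (not part of this commit; the exact build setup of this Space is an assumption) is to point pip at PyTorch's CPU-only wheel index, sketched as:

```shell
# Hypothetical build step: install the pinned torch/torchvision from the
# CPU wheel index so no CUDA runtime libraries are downloaded, then install
# the rest of requirements.txt as usual.
pip install --extra-index-url https://download.pytorch.org/whl/cpu \
    torch==2.3.1 torchvision==0.18.1
pip install -r requirements.txt
```

The same effect can be had by using the `+cpu` local version specifier (e.g. `torch==2.3.1+cpu`) together with that index URL, at the cost of tying the requirements file to the extra index.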