Hugging Face Space: yusufs/vllm-inference (status: Paused)
vllm-inference · 1 contributor · History: 43 commits
Latest commit cab183f by yusufs, 3 months ago: feat(/app/run-llama.sh): /app/run-llama.sh
| File | Size | Last commit message | Last updated |
|------|------|---------------------|--------------|
| .gitignore | 19 Bytes | feat(download_model.py): remove download_model.py during build, it causing big image size | 4 months ago |
| Dockerfile | 1.3 kB | feat(/app/run-llama.sh): /app/run-llama.sh | 3 months ago |
| README.md | 1.73 kB | feat(add-model): always download model during build, it will be cached in the consecutive builds | 4 months ago |
| download_model.py | 700 Bytes | feat(add-model): always download model during build, it will be cached in the consecutive builds | 4 months ago |
| main.py | 6.7 kB | feat(parse): parse output | 4 months ago |
| openai_compatible_api_server.py | 24.4 kB | feat(dep_sizes.txt): removes dep_sizes.txt during build, it not needed | 4 months ago |
| poetry.lock | 426 kB | feat(refactor): move the files to root | 4 months ago |
| pyproject.toml | 416 Bytes | feat(refactor): move the files to root | 4 months ago |
| requirements.txt | 9.99 kB | feat(first-commit): follow examples and tutorials | 4 months ago |
| run-llama.sh | 1.52 kB | feat(quantization): T4 not support bfloat16 | 4 months ago |
| run-sailor.sh | 1.83 kB | docs(sailor): add not about minimum resources of sailor | 4 months ago |