Space: yusufs/vllm-inference (Paused)
vllm-inference - 1 contributor - 29 commits
Latest commit: feat(add-model): always download model during build, it will be cached in the consecutive builds (yusufs, 8679a35, 4 months ago)
File                             Size       Last commit message
.gitignore                       5 Bytes    feat(first-commit): follow examples and tutorials
Dockerfile                       1.23 kB    feat(add-model): always download model during build, it will be cached in the consecutive builds
README.md                        1.73 kB    feat(add-model): always download model during build, it will be cached in the consecutive builds
download_model.py                700 Bytes  feat(add-model): always download model during build, it will be cached in the consecutive builds
main.py                          6.7 kB     feat(parse): parse output
openai_compatible_api_server.py  24.4 kB    feat(endpoint): add prefix /api on each endpoint
poetry.lock                      426 kB     feat(refactor): move the files to root
pyproject.toml                   416 Bytes  feat(refactor): move the files to root
requirements.txt                 9.99 kB    feat(first-commit): follow examples and tutorials
run-llama.sh                     1.51 kB    feat(add-model): always download model during build, it will be cached in the consecutive builds
run-sailor.sh                    1.36 kB    feat(add-model): always download model during build, it will be cached in the consecutive builds

All files were last modified 4 months ago.