# vision_llm_agent / requirements.txt
# Core dependencies
gradio>=4.0.0
# Pin to a CPU-only friendly PyTorch to avoid huge CUDA deps on Spaces
torch==2.3.1
torchvision==0.18.1
transformers>=4.30.0
Pillow>=9.0.0
# Object detection models
ultralytics>=8.0.0 # YOLOv8
timm>=0.9.0 # Vision Transformer support
# API dependencies
flask>=2.0.0
flask-cors>=3.0.0
flask-login>=0.6.2
flask-session>=0.4.0
matplotlib>=3.5.0
# Prefer wheel-available versions; keep flexible to match PyPI wheels
numpy>=1.24.4,<2.0.0; python_version < '3.12'
numpy>=2.0.0,<2.3.0; python_version >= '3.12'
# For future phases
fastapi>=0.100.0
uvicorn[standard]>=0.22.0
python-multipart>=0.0.5
requests>=2.31.0
# Llama 4 integration
# Note: an earlier version listed "accelerator", a typo for Hugging Face "accelerate",
# which can cause install failures.
# accelerate/bitsandbytes are not required in this Space; omitted to reduce build size.
# accelerate>=0.20.0
# bitsandbytes>=0.41.0; platform_system == 'Linux' and platform_machine == 'x86_64'
sentencepiece>=0.1.99
protobuf>=4.23.0
# Vector DB and image similarity search
chroma-hnswlib>=0.7.3
chromadb>=0.4.18,<1.0.0
scipy>=1.10.0,<1.14.0; python_version < '3.12'
scipy>=1.14.1,<1.15.0; python_version >= '3.12'
open-clip-torch>=2.20.0
# pysqlite3-binary needed for some Linux environments; skip on macOS
pysqlite3-binary>=0.5.0; sys_platform == 'linux'
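# ChromaDB needs sqlite3 >= 3.35; on Linux images with an older system sqlite,
# the app typically swaps in pysqlite3 before importing chromadb. A minimal
# sketch of that runtime shim (goes in application code, not this file):
#   __import__("pysqlite3")
#   import sys
#   sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")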
# OpenAI Python SDK
openai>=1.30.0
# LangChain (RAG pipeline)
langchain>=0.2.6
langchain-openai>=0.1.16
langchain-community>=0.2.6
langchain-experimental>=0.0.60
# PEFT for LoRA
peft>=0.10.0
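# At inference time the LoRA adapter is attached to the base model roughly as
# below (sketch only; model and adapter IDs are the project defaults):
#   from transformers import AutoModelForCausalLM
#   from peft import PeftModel
#   base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
#   model = PeftModel.from_pretrained(base, "HGKo/openllama3b-lora-adapter")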
# llama.cpp bindings for loading local GGUF (quantized Q4) models; optional.
# Removed from the default install to avoid long native builds on Spaces.
# llama-cpp-python>=0.2.90
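# If re-enabled, a local GGUF model would load roughly like this
# (sketch only; the model path is hypothetical):
#   from llama_cpp import Llama
#   llm = Llama(model_path="models/open-llama-3b.Q4_K_M.gguf", n_ctx=2048)
#   out = llm("Q: Describe the detected objects. A:", max_tokens=64)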