# Hugging Face Spaces Deployment Guide

## ✅ Changes Made for Hugging Face Spaces Compatibility

### 1. Port Configuration

- **Updated `backend/app.py`**: the server now reads `PORT` from an environment variable (default: 7860)

  ```python
  port = int(os.environ.get("PORT", 7860))
  uvicorn.run(app, host="0.0.0.0", port=port)
  ```

- **Updated `Dockerfile`**: `CMD` uses `${PORT:-7860}` for dynamic port binding

### 2. Filesystem Permissions

- **Changed output directory**: `OUTPUT_DIR` now uses `/tmp/outputs` instead of `./outputs`
  - Hugging Face Spaces containers have a read-only `/app` directory
  - `/tmp` is writable for temporary files
- **Note**: files in `/tmp` are ephemeral and lost on restart

### 3. Static File Serving

- **Fixed sample image serving**: mounted the `/cyto`, `/colpo`, and `/histo` directories from `frontend/dist`
- **Added a catch-all route**: serves static files (logos, banners) from the dist root
- **Frontend dist path fallback**: checks both `./frontend/dist` (Docker) and `../frontend/dist` (local dev)

### 4. Frontend Configuration

- **Frontend already configured**: uses `window.location.origin` in production, so API calls work on any domain
- **Vite build**: copies `public/` contents to `dist/` automatically

---

## 📋 Deployment Checklist

### Step 1: Create a Hugging Face Space

1. Go to https://huggingface.co/spaces
2. Click **"Create new Space"**
3. Choose:
   - **Space SDK**: Docker
   - **Hardware**: CPU Basic (free) or GPU (for faster inference)
   - **Visibility**: Public or Private

### Step 2: Set Up Git LFS (for large model files)

Your project has large model files (`.pt`, `.pth`, `.keras`).
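If you are not sure which files need tracking, one way to list candidates is to search for anything over a size threshold (the 10 MB cutoff below is an arbitrary assumption, not a Git LFS requirement):

```shell
# List files larger than 10 MB, skipping the .git directory itself
find . -path ./.git -prune -o -type f -size +10M -print
```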
Track them with Git LFS:

```bash
# Install Git LFS if not already installed
git lfs install

# Track model files
git lfs track "*.pt"
git lfs track "*.pth"
git lfs track "*.keras"
git lfs track "*.pkl"

# Commit .gitattributes
git add .gitattributes
git commit -m "Track model files with Git LFS"
```

### Step 3: Configure Secrets (Optional)

If you want AI-generated summaries using Mistral, add a secret:

1. Go to Space Settings → Variables and secrets
2. Add a new secret:
   - Name: `HF_TOKEN`
   - Value: your Hugging Face token (from https://huggingface.co/settings/tokens)

### Step 4: Push Code to the Space

```bash
# Add the Space as a remote
git remote add space https://huggingface.co/spaces/<username>/<space-name>

# Push to the Space
git push space main
```

### Step 5: Monitor the Build

- Hugging Face will build the Docker image (this may take 10-20 minutes)
- Watch the logs in the Space's "Logs" tab
- Once built, the Space starts automatically

---

## 🔍 Troubleshooting

### Build Issues

**Problem**: Docker build times out or fails
- **Solution**: Reduce the image size by pinning lighter dependencies in `requirements.txt`
- **Solution**: Consider using pre-built wheels for TensorFlow/PyTorch

**Problem**: Model files not found
- **Solution**: Ensure Git LFS is configured and the model files are committed
- **Solution**: Check that the model paths in `backend/app.py` match the actual filenames

### Runtime Issues

**Problem**: 404 errors for sample images
- **Solution**: Rebuild the frontend: `cd frontend && npm run build`
- **Solution**: Verify that `frontend/public/` contents are copied to `dist/`

**Problem**: Permission denied errors
- **Solution**: All writes should go to `/tmp/outputs` (already fixed)
- **Solution**: Never write to the `/app` directory

**Problem**: Port binding errors
- **Solution**: Use the `$PORT` env var (already configured in the Dockerfile and `app.py`)

### Performance Issues

**Problem**: Slow startup or inference
- **Solution**: Models load at startup; consider lazy loading on the first request
- **Solution**: Upgrade to a GPU hardware
tier for faster inference
- **Solution**: Add caching for model weights

---

## 📁 File Structure Expected in the Space

```
/app/
├── app.py                              # Main FastAPI app
├── model.py, model_histo.py, etc.      # Model definitions
├── augmentations.py                    # Image preprocessing
├── requirements.txt                    # Python dependencies
├── best2.pt                            # YOLO cytology model
├── MWTclass2.pth                       # MWT classifier
├── yolo_colposcopy.pt                  # YOLO colposcopy model
├── histopathology_trained_model.keras  # Histopathology model
├── logistic_regression_model.pkl       # CIN classifier (optional)
└── frontend/
    └── dist/                           # Built frontend
        ├── index.html
        ├── assets/                     # JS/CSS bundles
        ├── cyto/                       # Sample cytology images
        ├── colpo/                      # Sample colposcopy images
        ├── histo/                      # Sample histopathology images
        └── *.png, *.jpeg               # Logos, banners
```

---

## 🌐 Access Your Space

Once deployed, your app will be available at:

```
https://huggingface.co/spaces/<username>/<space-name>
```

The frontend serves at `/`, and the API is accessible at:

- `POST /predict/` - Run model inference
- `POST /reports/` - Generate medical reports
- `GET /health` - Health check
- `GET /models` - List available models

---

## ⚠️ Important Notes

### Ephemeral Storage

- Files in `/tmp/outputs` are **lost on restart**
- For persistent reports, consider:
  - Downloading immediately after generation
  - Uploading to external storage (S3, Hugging Face Datasets)
  - Using Persistent Storage (requires a paid tier)

### Model Loading Time

- All models load at startup (~30-60 seconds)
- The first request after a restart may be slower
- Consider implementing a health check endpoint that waits for the models

### Resource Limits

- Free CPU tier: limited RAM and CPU
- The models are memory-intensive (TensorFlow + PyTorch + YOLO)
- May need the **CPU Upgrade** or **GPU** tier for production use

### CORS

- Currently allows all origins (`allow_origins=["*"]`)
- For production, restrict to your Space domain

---

## 🚀 Next Steps After Deployment

1.
   **Test all three models**:
   - Upload a cytology sample → test YOLO detection
   - Upload a colposcopy sample → test CIN classification
   - Upload a histopathology sample → test breast cancer classification

2. **Generate a test report**:
   - Run an analysis
   - Fill out the patient metadata
   - Generate an HTML/PDF report
   - Verify that the download links work

3. **Monitor performance**:
   - Check inference times
   - Monitor memory usage in the Space logs
   - Consider upgrading the hardware if needed

4. **Share your Space**:
   - Add a README with usage instructions
   - Include sample images in the repo
   - Add citations for the model papers

---

## 📞 Support

If you encounter issues:

1. Check the Space logs: Settings → Logs
2. Verify all model files are present: Settings → Files
3. Test locally with Docker: `docker build -t pathora . && docker run -p 7860:7860 pathora`
4. Post a question on the Hugging Face forum: https://discuss.huggingface.co/

---

**Deployment ready! 🎉**
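As a final smoke test of the local Docker run mentioned under Support, the health and model-list endpoints can be polled before pushing. This assumes the container is up on the default `localhost:7860`; `/health` and `/models` are the endpoints listed earlier in this guide:

```shell
# Check the health endpoint and the model list; -f makes curl fail on HTTP errors
curl -fsS http://localhost:7860/health
curl -fsS http://localhost:7860/models
```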