# SafeSpace FastAPI Backend
## Overview

FastAPI backend service for threat intelligence and safety recommendations with ML-enhanced categorization.
## Current Status

✅ **WORKING** - Server running successfully on http://localhost:8000
## Features

- ✅ **Threat Detection API** - `/api/threats` endpoint working
- ✅ **ML Model Integration** - NB-SVM threat classifier loaded and working
- ✅ **News API Integration** - Fetching real news data
- ✅ **Health Check** - `/health` endpoint available
- ✅ **API Documentation** - Available at `/docs`
- ⚠️ **AI Advice Generation** - Working with fallback (OpenRouter API key needed)
- ⚠️ **ONNX Model** - Optional, not currently available
## API Endpoints

- `GET /` - Root endpoint
- `GET /health` - Health check
- `GET /api/test` - Test endpoint
- `GET /api/threats?city={city}` - Get threats for a specific city
- `GET /api/threats/{id}` - Get threat details
- `GET /api/models/status` - ML model status
- `POST /api/models/download` - Download ML models
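The threats endpoint above can be exercised with a few lines of Python. The sketch below uses only the standard library; `BASE_URL`, the helper names, and the assumption that the endpoint returns JSON are illustrative, since the response schema is not documented in this README:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE_URL = "http://localhost:8000"  # default address from this README

def threats_url(city: str, base_url: str = BASE_URL) -> str:
    """Build the /api/threats URL for a city, escaping special characters."""
    return f"{base_url}/api/threats?city={quote(city)}"

def get_threats(city: str) -> dict:
    """Fetch threats for a city; assumes a JSON response body."""
    with urlopen(threats_url(city), timeout=8) as resp:
        return json.load(resp)

# Example (requires the server to be running):
# print(get_threats("Delhi"))
```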
## Quick Start

### 1. Install Dependencies

```bash
cd backend/fastapi
pip install -r requirements.txt
```

### 2. Start Server

```bash
# Option 1: Direct Python
python run.py

# Option 2: Windows batch file
start_fastapi.bat

# Option 3: Manual uvicorn
uvicorn server.main:app --host 0.0.0.0 --port 8000
```

### 3. Test API

- Health Check: http://localhost:8000/health
- API Docs: http://localhost:8000/docs
- Test Threats: http://localhost:8000/api/threats?city=Delhi
## Directory Structure

```
fastapi/
├── run.py                  # Main startup script
├── start_fastapi.bat       # Windows startup script
├── requirements.txt        # Python dependencies
├── models/                 # ML models directory
│   ├── threat.pkl          # ✅ NB-SVM threat classifier
│   ├── sentiment.pkl       # Additional model
│   └── model_info.txt      # Model documentation
├── server/                 # Main application code
│   ├── main.py             # FastAPI app configuration
│   ├── routes/
│   │   └── api.py          # ✅ API endpoints
│   └── utils/
│       ├── model_loader.py # ✅ ML model management
│       └── solution.py     # AI advice generation
└── venv/                   # Virtual environment
```
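The fault-tolerant loading that `model_loader.py` performs can be sketched roughly as follows; the `load_model` function name and exact behavior are assumptions based on the fixes listed in this README (file-relative paths, server keeps running if a model is missing):

```python
import pickle
from pathlib import Path

# Resolve model paths relative to this file, not the working directory
# (the path fix described under "Recent Fixes Applied").
MODELS_DIR = Path(__file__).resolve().parent / "models"

def load_model(filename: str):
    """Load a pickled model, returning None instead of raising if the file
    is missing or unreadable, so the server can fall back gracefully."""
    path = MODELS_DIR / filename
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (OSError, pickle.UnpicklingError, EOFError) as exc:
        print(f"Warning: could not load {path}: {exc}; using fallback")
        return None

threat_model = load_model("threat.pkl")  # the NB-SVM classifier, when present
```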
## Recent Fixes Applied

- ✅ **Fixed Model Loading Paths** - Corrected relative paths for model files
- ✅ **Robust Error Handling** - Server continues running even if optional models fail
- ✅ **Optional Dependencies** - ONNX and transformers are now optional
- ✅ **CORS Configuration** - Added support for both React (port 3000) and Node.js (port 3001)
- ✅ **Proper Startup Script** - Fixed directory and import issues
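The optional-dependency fix amounts to a guarded import: the server probes for ONNX support at startup and records whether the fallback path is active. This is a minimal sketch of that pattern; `model_status` and its field names are hypothetical, not the real `/api/models/status` schema:

```python
# Guarded import: onnxruntime is optional, so an ImportError must not
# crash the server -- it just switches the fallback flag on.
try:
    import onnxruntime  # noqa: F401  (only probed for availability here)
    ONNX_AVAILABLE = True
except ImportError:
    ONNX_AVAILABLE = False

def model_status() -> dict:
    """Illustrative status payload reflecting which pipeline is active."""
    return {"onnx": ONNX_AVAILABLE, "fallback_active": not ONNX_AVAILABLE}
```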
## Integration Status

- ✅ **Frontend Integration** - API endpoints accessible from the React frontend
- ✅ **Node.js Backend** - CORS configured for the authentication backend
- ✅ **ML Pipeline** - Threat classification working with the existing model
- ✅ **News API** - Real-time news fetching operational
## Performance

- **Startup Time:** ~2-3 seconds
- **Response Time:** ~2-5 seconds per threat query
- **Memory Usage:** ~50-100 MB
- **Timeout Protection:** 5-8 second timeouts with fallback data
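The timeout protection above can be sketched as a fetch wrapper that returns canned data whenever the upstream call is slow or unreachable. The fallback payload shape here is an assumption; the real mock data is not documented in this README:

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

# Illustrative fallback payload, returned when the real fetch fails.
FALLBACK_THREATS = {"threats": [], "source": "fallback"}

def fetch_threats(url: str, timeout: float = 8.0) -> dict:
    """Fetch JSON with a hard timeout; on timeout, connection failure, or a
    malformed body, return fallback data instead of propagating the error."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (URLError, TimeoutError, ValueError):
        return FALLBACK_THREATS
```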
## Next Steps

- Optional: Add an OpenRouter API key for enhanced AI advice
- Optional: Add the ONNX model for improved threat detection
- Optional: Implement caching for better performance
- Optional: Add more sophisticated threat categorization
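Since each threat query takes roughly 2-5 seconds, the optional caching step could be as simple as a per-city TTL cache. This is a sketch of one possible approach, not the service's actual implementation:

```python
import time

class TTLCache:
    """Minimal time-based cache: entries expire after ttl_seconds and are
    evicted lazily on the next lookup."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: dict = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired; drop it
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

A route handler could then check `cache.get(city)` before fetching, so repeated queries for the same city skip the slow news fetch entirely.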
## Troubleshooting

- If the server fails to start, re-run `pip install -r requirements.txt`
- If models fail to load, the server uses fallback threat detection
- The API returns mock data if external services are unavailable
- Check the logs for detailed error information