								# SafeSpace FastAPI Backend
## Overview
FastAPI backend service for threat intelligence and safety recommendations with ML-enhanced categorization.
## Current Status
✅ **WORKING** - Server running successfully on http://localhost:8000
### Features
- ✅ **Threat Detection API** - `/api/threats` endpoint working
- ✅ **ML Model Integration** - NB-SVM threat classifier loaded and working
- ✅ **News API Integration** - Fetching real news data
- ✅ **Health Check** - `/health` endpoint available
- ✅ **API Documentation** - Available at `/docs`
- ⚠️ **AI Advice Generation** - Working with fallback (OpenRouter API key needed)
- ⚠️ **ONNX Model** - Optional, not currently available
### API Endpoints
- `GET /` - Root endpoint
- `GET /health` - Health check
- `GET /api/test` - Test endpoint  
- `GET /api/threats?city={city}` - Get threats for specific city
- `GET /api/threats/{id}` - Get threat details
- `GET /api/models/status` - ML model status
- `POST /api/models/download` - Download ML models
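For a quick programmatic check, the endpoints above can be exercised with a few lines of Python. A minimal sketch using `requests` (the exact response fields depend on the running server):
```python
import requests

BASE_URL = "http://localhost:8000"

# Confirm the service is up before querying threats.
health = requests.get(f"{BASE_URL}/health", timeout=10)
print(health.status_code, health.json())

# Fetch threats for a specific city via the `city` query parameter.
threats = requests.get(f"{BASE_URL}/api/threats", params={"city": "Delhi"}, timeout=30)
threats.raise_for_status()
print(threats.json())

# Check which ML models are currently loaded.
print(requests.get(f"{BASE_URL}/api/models/status", timeout=10).json())
```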
## Quick Start
### 1. Install Dependencies
```bash
cd backend/fastapi
pip install -r requirements.txt
```
### 2. Start Server
```bash
# Option 1: Direct Python
python run.py
# Option 2: Windows Batch File
start_fastapi.bat
# Option 3: Manual uvicorn
uvicorn server.main:app --host 0.0.0.0 --port 8000
```
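`run.py` (Option 1) is essentially a thin wrapper around uvicorn. If you ever need to recreate it, a minimal sketch, assuming the app object lives at `server.main:app` as in Option 3:
```python
# run.py - minimal startup sketch; the real script may also configure logging, reload, etc.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("server.main:app", host="0.0.0.0", port=8000)
```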
### 3. Test API
- Health Check: http://localhost:8000/health
- API Docs: http://localhost:8000/docs
- Test Threats: http://localhost:8000/api/threats?city=Delhi
## Directory Structure
```
fastapi/
├── run.py                   # Main startup script
├── start_fastapi.bat        # Windows startup script
├── requirements.txt         # Python dependencies
├── models/                  # ML models directory
│   ├── threat.pkl           # ✅ NB-SVM threat classifier
│   ├── sentiment.pkl        # Additional model
│   └── model_info.txt       # Model documentation
├── server/                  # Main application code
│   ├── main.py              # FastAPI app configuration
│   ├── routes/
│   │   └── api.py           # ✅ API endpoints
│   └── utils/
│       ├── model_loader.py  # ✅ ML model management
│       └── solution.py      # AI advice generation
└── venv/                    # Virtual environment
```
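The defensive model loading noted in the tree above (`model_loader.py`) is what keeps the server alive when optional models are missing. A hypothetical sketch of that pattern; the function and variable names here are illustrative, not the module's actual API:
```python
import logging
import pickle
from pathlib import Path

logger = logging.getLogger(__name__)

# Resolve models/ relative to this file so the server can start from any working directory.
MODELS_DIR = Path(__file__).resolve().parents[2] / "models"

def load_pickle(name: str):
    """Load a pickled model, returning None instead of crashing if it is missing or broken."""
    path = MODELS_DIR / name
    try:
        with path.open("rb") as f:
            return pickle.load(f)
    except (FileNotFoundError, pickle.UnpicklingError) as exc:
        logger.warning("Optional model %s not loaded: %s", name, exc)
        return None

threat_model = load_pickle("threat.pkl")        # NB-SVM threat classifier
sentiment_model = load_pickle("sentiment.pkl")  # additional model, optional
```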
## Recent Fixes Applied
1. ✅ **Fixed Model Loading Paths** - Corrected relative paths for model files
2. ✅ **Robust Error Handling** - Server continues running even if optional models fail
3. ✅ **Optional Dependencies** - ONNX and transformers are now optional
4. ✅ **CORS Configuration** - Added support for both React (3000) and Node.js (3001), see the sketch below
5. ✅ **Proper Startup Script** - Fixed directory and import issues
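The CORS fix (item 4) is typically a few lines in `server/main.py` using FastAPI's `CORSMiddleware`. A sketch assuming the React frontend runs on port 3000 and the Node.js auth backend on 3001:
```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(title="SafeSpace FastAPI Backend")

# Allow the React frontend (3000) and the Node.js authentication backend (3001).
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000", "http://localhost:3001"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```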
## Integration Status
- ✅ **Frontend Integration** - API endpoints accessible from React frontend
- ✅ **Node.js Backend** - CORS configured for authentication backend
- ✅ **ML Pipeline** - Threat classification working with existing model
- ✅ **News API** - Real-time news fetching operational
## Performance
- **Startup Time**: ~2-3 seconds
- **Response Time**: ~2-5 seconds per threat query
- **Memory Usage**: ~50-100MB
- **Timeout Protection**: 5-8 seconds with fallback data
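The timeout protection above means slow external calls (e.g. the news fetch) are bounded and fallback data is served when the limit is hit. A minimal sketch of that pattern; the fallback payload and function names are illustrative:
```python
import asyncio

FALLBACK_THREATS = [{"title": "No live data available", "category": "unknown"}]

async def fetch_threats_with_timeout(city: str, timeout: float = 8.0) -> list[dict]:
    """Return live threat data if it arrives within `timeout` seconds, otherwise fallback data."""
    try:
        return await asyncio.wait_for(fetch_threats_from_news_api(city), timeout=timeout)
    except asyncio.TimeoutError:
        return FALLBACK_THREATS

async def fetch_threats_from_news_api(city: str) -> list[dict]:
    # Placeholder for the real news + ML classification pipeline.
    await asyncio.sleep(0.1)
    return [{"title": f"Sample threat near {city}", "category": "demo"}]
```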
## Next Steps
1. **Optional**: Add OpenRouter API key for enhanced AI advice
2. **Optional**: Add ONNX model for improved threat detection
3. **Optional**: Implement caching for better performance (see the sketch below)
4. **Optional**: Add more sophisticated threat categorization
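For the caching item, even a simple in-process TTL cache keyed by city would cut repeat lookups. A sketch with illustrative names and no extra dependencies:
```python
import time

_CACHE: dict[str, tuple[float, list]] = {}
TTL_SECONDS = 300  # serve cached results for up to 5 minutes

def get_cached_threats(city: str, fetch_fn) -> list:
    """Return cached threats for `city` if still fresh, otherwise fetch and cache them."""
    now = time.monotonic()
    entry = _CACHE.get(city)
    if entry and now - entry[0] < TTL_SECONDS:
        return entry[1]
    data = fetch_fn(city)
    _CACHE[city] = (now, data)
    return data
```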
## Troubleshooting
- If the server fails to start, re-run `pip install -r requirements.txt` to make sure all dependencies are present
- If the ML models fail to load, the server falls back to basic threat detection
- The API returns mock data if external services are unavailable
- Check the logs for detailed error information