# SafeSpace FastAPI Backend

## Overview

A FastAPI backend service for threat intelligence and safety recommendations with ML-enhanced categorization.

## Current Status

✅ WORKING - server running successfully on http://localhost:8000

## Features

- ✅ Threat Detection API - `/api/threats` endpoint working
- ✅ ML Model Integration - NB-SVM threat classifier loaded and working
- ✅ News API Integration - fetching real news data
- ✅ Health Check - `/health` endpoint available
- ✅ API Documentation - available at `/docs`
- ⚠️ AI Advice Generation - working with fallback (OpenRouter API key needed)
- ⚠️ ONNX Model - optional, not currently available

## API Endpoints

- `GET /` - root endpoint
- `GET /health` - health check
- `GET /api/test` - test endpoint
- `GET /api/threats?city={city}` - get threats for a specific city
- `GET /api/threats/{id}` - get threat details
- `GET /api/models/status` - ML model status
- `POST /api/models/download` - download ML models

## Quick Start

### 1. Install Dependencies

```bash
cd backend/fastapi
pip install -r requirements.txt
```

### 2. Start Server

```bash
# Option 1: direct Python
python run.py

# Option 2: Windows batch file
start_fastapi.bat

# Option 3: manual uvicorn
uvicorn server.main:app --host 0.0.0.0 --port 8000
```

### 3. Test API

With the server running, open http://localhost:8000/docs for the interactive API documentation, or call the endpoints listed above directly.
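A minimal way to exercise the API from Python using only the standard library. The endpoint paths come from the list above; the helper names (`threats_url`, `get_json`) are illustrative, and no particular response shape is assumed:

```python
import json
import urllib.request
from urllib.parse import urlencode

BASE = "http://localhost:8000"

def threats_url(city):
    # Build the query URL for the /api/threats endpoint
    return f"{BASE}/api/threats?{urlencode({'city': city})}"

def get_json(url, timeout=8):
    # Fetch a URL and decode the JSON body
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(threats_url("Mumbai"))  # → http://localhost:8000/api/threats?city=Mumbai
    # Requires the server to be running:
    # print(get_json(f"{BASE}/health"))
```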

## Directory Structure

```text
fastapi/
├── run.py                   # Main startup script
├── start_fastapi.bat        # Windows startup script
├── requirements.txt         # Python dependencies
├── models/                  # ML models directory
│   ├── threat.pkl           # ✅ NB-SVM threat classifier
│   ├── sentiment.pkl        # Additional model
│   └── model_info.txt       # Model documentation
├── server/                  # Main application code
│   ├── main.py              # FastAPI app configuration
│   ├── routes/
│   │   └── api.py           # ✅ API endpoints
│   └── utils/
│       ├── model_loader.py  # ✅ ML model management
│       └── solution.py      # AI advice generation
└── venv/                    # Virtual environment
```

## Recent Fixes Applied

1. ✅ Fixed model loading paths - corrected relative paths for model files
2. ✅ Robust error handling - the server keeps running even if optional models fail to load
3. ✅ Optional dependencies - ONNX and transformers are now optional
4. ✅ CORS configuration - added support for both React (port 3000) and Node.js (port 3001)
5. ✅ Proper startup script - fixed directory and import issues
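Fix 2 above amounts to treating every model as optional at load time. A minimal sketch of that pattern; the real `model_loader.py` may differ, and `MODEL_DIR` / `load_model` are illustrative names:

```python
import pickle
from pathlib import Path

# Assumption: models live next to this file, as in the directory layout above.
MODEL_DIR = Path(__file__).resolve().parent / "models"

def load_model(name):
    """Return the unpickled model, or None if it is missing or unreadable."""
    path = MODEL_DIR / name
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (OSError, pickle.UnpicklingError) as exc:
        # The server keeps running; callers treat None as "model unavailable".
        print(f"Optional model {name!r} not loaded: {exc}")
        return None

threat_model = load_model("threat.pkl")
```

Callers then check for `None` and fall back to simpler threat detection, so a missing `sentiment.pkl` or ONNX file never crashes startup.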

## Integration Status

- ✅ Frontend Integration - API endpoints accessible from the React frontend
- ✅ Node.js Backend - CORS configured for the authentication backend
- ✅ ML Pipeline - threat classification working with the existing model
- ✅ News API - real-time news fetching operational

## Performance

- Startup time: ~2-3 seconds
- Response time: ~2-5 seconds per threat query
- Memory usage: ~50-100 MB
- Timeout protection: 5-8 seconds, after which fallback data is returned
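The timeout-protection behaviour can be implemented with `asyncio.wait_for`. A minimal sketch, where `fetch_threats` is a stand-in for the real news/API call and `FALLBACK` is illustrative placeholder data:

```python
import asyncio

FALLBACK = [{"title": "No live data", "level": "unknown"}]

async def fetch_threats(city):
    # Stand-in for the real upstream call; sleeps to simulate a slow service.
    await asyncio.sleep(10)
    return [{"title": f"Live threat data for {city}", "level": "low"}]

async def threats_with_timeout(city, timeout=8.0):
    """Return live data if it arrives within `timeout` seconds, else fallback."""
    try:
        return await asyncio.wait_for(fetch_threats(city), timeout=timeout)
    except asyncio.TimeoutError:
        return FALLBACK

print(asyncio.run(threats_with_timeout("Delhi", timeout=0.1)))
# → [{'title': 'No live data', 'level': 'unknown'}]
```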

## Next Steps

1. Optional: add an OpenRouter API key for enhanced AI advice
2. Optional: add the ONNX model for improved threat detection
3. Optional: implement caching for better performance
4. Optional: add more sophisticated threat categorization
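Step 3 (caching) could be as simple as a per-city TTL cache in front of the news fetch. An illustrative stdlib-only sketch; `TTLCache` is a hypothetical helper, not part of the current codebase:

```python
import time

class TTLCache:
    """Minimal time-based cache for per-city threat queries (illustrative only)."""
    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # drop stale entries lazily on read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=0.05)
cache.set("Mumbai", ["threat-1"])
print(cache.get("Mumbai"))  # → ['threat-1']
time.sleep(0.1)
print(cache.get("Mumbai"))  # → None (entry expired)
```

With a TTL around the news feed's refresh interval, repeated queries for the same city would skip the 2-5 second upstream call entirely.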

## Troubleshooting

- If the server fails to start, re-run `pip install -r requirements.txt` and check for missing dependencies.
- If models fail to load, fallback threat detection is used automatically.
- The API returns mock data when external services are unavailable.
- Check the logs for detailed error information.