---
title: Mental Health AI Assistant
emoji: 🧠
colorFrom: blue
colorTo: purple
sdk: docker
app_file: app.py
pinned: false
---

# Mental Health Chatbot Application
A comprehensive mental health support application that provides conversational assistance, assessment tools, and resource information. Optimized for Hugging Face Spaces deployment with graceful fallback system.
## 🚀 Deployment Status

**✅ READY FOR HUGGING FACE SPACES**

This application features a dual-mode deployment system:

### Full Mode (when all dependencies work)
- AI-powered chatbot with CrewAI agents
- Advanced sentiment analysis and RAG system
- Complete assessment tools with professional scoring
- Voice integration and TTS capabilities
### 🛡️ Minimal Mode (guaranteed fallback)
- Simple keyword-based chat responses
- Basic mental health assessments
- Crisis resources and educational materials
- Always works regardless of dependency issues
## 🏗️ Architecture

### Resilient Deployment Design

```
Docker build attempt:
├── Try full requirements (requirements.txt)
│   └── If grpcio/complex deps fail → install minimal deps only
└── Runtime: try app.py (full features)
    └── If app.py fails → fall back to app_minimal.py
        └── Result: always a working mental health assistant
```

### Two-App System

- **app.py**: Full-featured application with AI agents
- **app_minimal.py**: Guaranteed-working minimal version
- **Automatic fallback**: The Dockerfile tries the full app first and switches to the minimal app if needed
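The runtime fallback above can be sketched as a small launcher that tries the full app and falls back to the minimal one. This is an illustrative sketch, not the project's actual startup logic; the module names in `__main__` are placeholders.

```python
import importlib

def pick_app(candidates):
    """Return the first module from `candidates` that imports cleanly.

    Mirrors the fallback idea: try the full app first, then the
    guaranteed-working minimal app.
    """
    for name in candidates:
        try:
            return name, importlib.import_module(name)
        except Exception as exc:  # heavy deps may fail at import time
            print(f"{name} failed to load ({exc!r}); trying next fallback")
    raise RuntimeError("no runnable app found")

if __name__ == "__main__":
    # In the real container this would be ["app", "app_minimal"];
    # "json" stands in for a module that always imports.
    name, module = pick_app(["nonexistent_full_app", "json"])
    print(f"launching {name}")
```

The same idea can equally live in a shell `CMD` (`python app.py || python app_minimal.py`); the point is that import-time failures of heavy dependencies never take the whole Space down.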
## Features

### Core Functionality
- AI-Powered Chatbot: Multi-agent system using CrewAI for intelligent mental health conversations
- RAG (Retrieval-Augmented Generation): Knowledge base integration for evidence-based responses
- Real-time Chat Interface: Modern Gradio-based chat UI with typing indicators and message history
- Voice Integration: Text-to-Speech (TTS) and Speech-to-Text (STT) capabilities
- Sentiment Analysis: Real-time emotion detection and sentiment tracking
- Mental Health Assessments: Standardized questionnaires (PHQ-9, GAD-7, DAST-10, AUDIT, Bipolar)
### User Management
- Session Management: Secure guest sessions with chat history
- User Dashboard: Assessment history and insights tracking
- Anonymous Access: Privacy-focused chat sessions
- Assessment Storage: Persistent assessment results and reports
### Advanced Features
- Crisis Detection: Automatic identification of mental health emergencies
- Condition Classification: AI-powered categorization of mental health concerns
- Session Persistence: Chat history maintained during session
- PDF Report Generation: Downloadable assessment reports
- Multi-Agent Architecture: Specialized agents for different aspects of mental health support
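In minimal mode, crisis detection degrades from an AI agent to simple keyword matching. The following is a hedged sketch of what such a fallback could look like; the keyword list and canned responses are illustrative, not the app's actual configuration.

```python
import re

# Illustrative patterns only; the real app's wordlist and logic may differ.
CRISIS_PATTERNS = [
    r"\bsuicid\w*\b",
    r"\bself[- ]harm\w*\b",
    r"\bend (?:my|it all)\b",
    r"\bhurt myself\b",
]

def detect_crisis(message: str) -> bool:
    """Return True if the message matches any crisis pattern."""
    text = message.lower()
    return any(re.search(p, text) for p in CRISIS_PATTERNS)

def respond(message: str) -> str:
    """Keyword-based reply: escalate on crisis signals, otherwise continue."""
    if detect_crisis(message):
        return ("It sounds like you may be in crisis. Please reach out to a "
                "crisis hotline or emergency services right away.")
    return "Thanks for sharing. Can you tell me more about how you're feeling?"
```

Keyword matching is crude (it misses paraphrases and can false-positive), which is why full mode replaces it with a dedicated crisis-detection agent.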
## Application Components

### Gradio Interface (app.py)
- Multi-tab Interface: Chat, Voice, Assessment, Resources, and About tabs
- Real-time Chat: Message processing with typing indicators
- Voice Processing: Whisper integration for speech-to-text
- Assessment Integration: Interactive mental health questionnaires
- Session Management: User context and history preservation
### Backend Services

#### Flask Application (main.py)
- Core chat response generation
- Voice processing (Whisper integration)
- Text-to-Speech generation (Edge TTS)
- Database operations (SQLite with SQLAlchemy)
- Chat session management
#### AI Agents System (crew_ai/)
- Crisis Detection Agent: Identifies emergency situations
- Mental Condition Classifier: Categorizes mental health concerns
- RAG Agents: Knowledge retrieval and summarization
- Assessment Conductor: Administers standardized questionnaires
- Response Generator: Produces empathetic, helpful responses
### Data Layer
- SQLite Database: User profiles, assessments, chat history
- Vector Store: Knowledge base for RAG system
- Session Storage: Temporary chat data and user context
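The actual schema lives in `models/` as SQLAlchemy models. As a rough, stand-alone illustration of the data layer, here is a minimal sketch using the stdlib `sqlite3` module; the table and column names are assumptions, not the project's real schema.

```python
import sqlite3

def init_db(path=":memory:"):
    """Create a connection with an illustrative assessments table."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS assessments (
            id INTEGER PRIMARY KEY,
            session_id TEXT NOT NULL,
            instrument TEXT NOT NULL,      -- e.g. 'PHQ-9', 'GAD-7'
            score INTEGER NOT NULL,
            taken_at TEXT DEFAULT CURRENT_TIMESTAMP
        )""")
    return conn

def save_assessment(conn, session_id, instrument, score):
    conn.execute(
        "INSERT INTO assessments (session_id, instrument, score) VALUES (?, ?, ?)",
        (session_id, instrument, score),
    )
    conn.commit()

def history(conn, session_id):
    """Return (instrument, score) rows for one guest session."""
    rows = conn.execute(
        "SELECT instrument, score FROM assessments WHERE session_id = ?",
        (session_id,),
    )
    return list(rows)
```

Keying rows by an opaque `session_id` rather than user identity matches the anonymous, session-based access described above.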
## 🚀 Hugging Face Spaces Deployment with Docker

This application is containerized and optimized for deployment on Hugging Face Spaces using Docker, integrating both Flask (main.py) and FastAPI (fastapi_app.py) backends.

### Architecture Overview
```
┌──────────────────┐      ┌──────────────────┐      ┌──────────────────┐
│    Gradio UI     │      │    Flask App     │      │   FastAPI App    │
│   (Port 7860)    │─────►│   (Port 5001)    │─────►│   (Port 8001)    │
│      app.py      │      │     main.py      │      │  fastapi_app.py  │
└──────────────────┘      └──────────────────┘      └──────────────────┘
         │                         │                         │
         └─────────────────────────┼─────────────────────────┘
                                   │
                          ┌──────────────────┐
                          │ SQLite Database  │
                          │   Vector Store   │
                          │   Session Data   │
                          └──────────────────┘
```
### Prerequisites
- Hugging Face account with Spaces access
- Required API keys (Google, OpenAI, Groq)
- Git for repository management
### Environment Variables

Set the following secrets in your Hugging Face Space:

#### Required API Keys

```
GOOGLE_API_KEY=your_google_api_key
GROQ_API_KEY=your_groq_api_key
OPENAI_API_KEY=your_openai_api_key
```

#### Security

```
SECRET_KEY=your_super_secure_secret_key
FLASK_SECRET_KEY=your_flask_secret_key
```
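The backend reads these secrets from the environment at startup. A minimal sketch of how a loader might validate them and fail fast with a clear message; the function name and structure are illustrative and may differ from the project's actual `config_manager.py`.

```python
import os

REQUIRED_KEYS = ["GOOGLE_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY", "SECRET_KEY"]

def load_config(env=os.environ):
    """Collect required settings, raising early if any are missing.

    Failing at startup (rather than mid-request) makes misconfigured
    Spaces obvious in the build/runtime logs.
    """
    missing = [k for k in REQUIRED_KEYS if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return {k: env[k] for k in REQUIRED_KEYS}
```

Passing `env` as a parameter keeps the loader testable without mutating the real process environment.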
### Deployment Steps

#### Create a New Docker Space
- Go to Hugging Face Spaces
- Click "Create new Space"
- Choose Docker as the SDK
- Name your space (e.g., `mental-health-ai-assistant`)

#### Clone and Prepare Repository

```bash
git clone <your-repository>
cd bhutan

# Verify Docker files are present
ls -la Dockerfile app.py requirements.txt
```

#### Initialize Space Repository

```bash
# Initialize git for Spaces
git init
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME

# Add all files
git add .
git commit -m "Deploy Mental Health AI Assistant to Hugging Face Spaces"
git push -u origin main
```

#### Configure Space Settings
- Go to your Space's Settings tab
- Add all required API keys in "Repository secrets"
- Choose hardware tier (CPU Basic recommended minimum)
#### Monitor Deployment
- Watch the build logs in your Space
- The Docker container will automatically:
- Install system dependencies (FFmpeg, GCC, etc.)
- Install Python packages
- Start Flask backend (port 5001)
- Start FastAPI backend (port 8001)
- Launch Gradio interface (port 7860)
### Docker Configuration Details

#### Multi-Service Architecture

The application runs three services in a single container:

**Gradio Frontend (app.py)**
- Main user interface on port 7860
- Handles user interactions and session management
- Communicates with Flask and FastAPI backends
**Flask Backend (main.py)**
- Core application logic on port 5001
- Chat response generation
- Voice processing (Whisper)
- Database operations (SQLite)
**FastAPI Backend (fastapi_app.py)**
- AI services on port 8001
- CrewAI multi-agent system
- Assessment processing
- PDF report generation
#### Container Specifications
- Base Image: Python 3.11-slim
- System Packages: FFmpeg, GCC, libffi-dev, libssl-dev
- Exposed Port: 7860 (Gradio)
- Health Check: HTTP probe on port 7860
- Working Directory: /app
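The health check is just an HTTP probe against the Gradio port. Here is a sketch of an equivalent probe in Python; the actual Dockerfile may use `curl` in a `HEALTHCHECK` instruction instead, and only the port and path come from the specs above.

```python
import urllib.request
import urllib.error

def is_healthy(url="http://localhost:7860/", timeout=5):
    """Return True if the service answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as unhealthy.
        return False
```

A probe like this lets the Spaces runtime restart the container automatically if the Gradio process hangs or dies.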
### File Structure

```
├── Dockerfile           # Container configuration
├── .dockerignore        # Build optimization
├── app.py               # Gradio entry point
├── main.py              # Flask backend
├── fastapi_app.py       # FastAPI backend
├── requirements.txt     # Python dependencies
├── config_manager.py    # Configuration
├── models/              # Database models
├── crew_ai/             # AI agents
├── knowledge/           # Knowledge base
└── static/              # Assets
```
### Monitoring and Logs

Access logs and metrics through the Hugging Face Spaces interface:

#### Build Logs
- Docker image building process
- Dependency installation progress
- System package installation
- Application startup sequence
#### Runtime Logs
- Application logs and error messages
- User interaction tracking
- AI response generation logs
- Health check status
#### Performance Metrics
- Container resource usage (CPU, Memory)
- Request response times
- User session analytics
- Error rates and debugging info
### Troubleshooting

#### Common Issues

**Build Failures**

```bash
# Check build logs in the Spaces interface
# Common causes:
# - Missing or invalid API keys
# - Dependency conflicts
# - Insufficient disk space during build
```

**Container Startup Issues**

```bash
# Monitor runtime logs for:
# - Database initialization errors
# - Missing environment variables
# - Port binding conflicts
```

**Memory Issues**

```bash
# Symptoms: container restarts, OOM errors
# Solutions:
# - Upgrade to the CPU Upgrade plan
# - Optimize model loading
# - Reduce concurrent request handling
```

**API Rate Limiting**

```bash
# Monitor for API quota exceeded errors
# Solutions:
# - Check API key quotas
# - Implement request caching
# - Add rate limiting in the application
```
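One way to add application-side rate limiting, as suggested above, is a sliding-window counter keyed per session. The following is an illustrative sketch; the limits are made up, and the app may not implement rate limiting this way.

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `max_calls` per `window` seconds, per key."""

    def __init__(self, max_calls=10, window=60.0):
        self.max_calls = max_calls
        self.window = window
        self._calls = {}  # key -> deque of call timestamps

    def allow(self, key, now=None):
        """Record a call for `key` if permitted; return whether it was allowed."""
        now = time.monotonic() if now is None else now
        q = self._calls.setdefault(key, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_calls:
            return False
        q.append(now)
        return True
```

Gating each chat request through `allow(session_id)` before calling the LLM APIs keeps a single noisy session from burning the whole Space's quota.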
### Performance Optimization

#### Container Efficiency
- Multi-stage Docker builds reduce image size
- .dockerignore excludes unnecessary files
- System dependencies are cached between builds
#### Application Performance
- Database connections are pooled
- Static assets are served efficiently
- AI model responses are cached when possible
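Response caching can be as simple as memoizing on a normalized prompt. This sketch is illustrative, not the project's actual caching layer; `expensive_model_call` is a stand-in for a real LLM request.

```python
from functools import lru_cache

CALLS = {"model": 0}

def expensive_model_call(prompt: str) -> str:
    """Stand-in for a slow LLM request; counts invocations for demonstration."""
    CALLS["model"] += 1
    return f"response to: {prompt}"

def _normalize(prompt: str) -> str:
    # Collapse case and whitespace so near-identical prompts share a cache slot.
    return " ".join(prompt.lower().split())

@lru_cache(maxsize=256)
def _cached_answer(normalized: str) -> str:
    return expensive_model_call(normalized)

def answer(prompt: str) -> str:
    return _cached_answer(_normalize(prompt))
```

Caching only makes sense for prompts without per-session context; anything that folds in chat history or user state should bypass the cache.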
#### Resource Management
- Memory usage is optimized for container limits
- CPU usage is balanced across features
- Network requests are handled asynchronously
### Local Development and Testing

#### Local Docker Testing
```bash
# Build the Docker image locally
docker build -t mental-health-chatbot .

# Run locally with environment variables
docker run -p 7860:7860 \
  -e GOOGLE_API_KEY=your_key \
  -e GROQ_API_KEY=your_key \
  -e OPENAI_API_KEY=your_key \
  -e SECRET_KEY=your_secret \
  mental-health-chatbot

# Access the application
# Open a browser to http://localhost:7860
```
#### Development Without Docker

```bash
# Install dependencies
pip install -r requirements.txt

# Set environment variables
export GOOGLE_API_KEY=your_key
export GROQ_API_KEY=your_key
export OPENAI_API_KEY=your_key
export SECRET_KEY=your_secret

# Run the Gradio application
python app.py
```
### Security and Privacy

#### Container Security
- Isolated Environment: Application runs in isolated Docker container
- Minimal Base Image: Uses slim Python image with minimal attack surface
- Non-persistent Secrets: API keys are passed as environment variables
- Network Security: Only port 7860 is exposed
#### Data Privacy
- Session-based Storage: Chat data is not permanently stored
- Local Processing: Most AI processing happens within the container
- API Security: API keys are securely managed through Spaces secrets
- User Anonymity: No personal data collection or tracking
## Alternative Deployment: Gradio SDK (without Docker)

This application can also be deployed on Hugging Face Spaces using the Gradio SDK directly.
### Prerequisites
- Hugging Face account
- Git
- Required API keys (Google, OpenAI, Groq)
### Environment Variables

Set the following secrets in your Hugging Face Space:

#### Required API Keys

```
GOOGLE_API_KEY=your_google_api_key
GROQ_API_KEY=your_groq_api_key
OPENAI_API_KEY=your_openai_api_key
```

#### Security

```
SECRET_KEY=your_super_secure_secret_key
FLASK_SECRET_KEY=your_flask_secret_key
```

#### Optional Configuration

```
FLASK_ENV=production
DEBUG=false
HUGGINGFACE_SPACES=1
```
### Deployment Steps

#### Create a New Space
- Go to Hugging Face Spaces
- Click "Create new Space"
- Choose Gradio as the SDK
- Select Python as the programming language
#### Upload Your Code

```bash
git clone <your-repository>
cd bhutan

# Initialize git for Spaces
git init
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME

# Add and commit files
git add .
git commit -m "Initial deployment to Hugging Face Spaces"
git push -u origin main
```

#### Configure Space Settings
- Hardware: Choose CPU Basic (free) or upgrade for better performance
- Visibility: Public or Private as preferred
- Secrets: Add all required environment variables in the Settings tab
#### Set Up Dependencies

The `requirements.txt` file is already configured for Spaces deployment with:
- Gradio for the interface
- Flask for backend logic
- AI/ML libraries (optimized versions)
- All necessary dependencies
#### Entry Point

The application uses `app.py` as the main entry point, which:
- Creates a Gradio Blocks interface
- Integrates all Flask backend functionality
- Provides tabs for Chat, Voice, Assessment, Resources, and About
- Handles session management and user interactions
### Space Configuration

#### Hardware Requirements
- CPU Basic: Suitable for basic chat functionality
- CPU Upgrade: Recommended for full AI features including voice processing
- GPU: Not required but can improve response times
#### File Structure for Spaces

```
app.py               # Main Gradio entry point
requirements.txt     # Hugging Face Spaces dependencies
README.md            # This documentation
main.py              # Flask backend logic
config_manager.py    # Configuration management
models/              # Database models
crew_ai/             # AI agents system
knowledge/           # Knowledge base files
static/              # CSS and assets
templates/           # HTML templates (used by Flask components)
```
### Features in Spaces

#### Chat Tab
- Real-time conversation with AI mental health assistant
- Message history preservation during session
- Typing indicators and response streaming
- Crisis detection and appropriate responses
#### Voice Tab
- Speech-to-text using Whisper
- Text-to-speech for responses
- Voice conversation mode
- Audio file upload support
#### Assessment Tab
- Interactive mental health questionnaires
- Real-time scoring and analysis
- PDF report generation
- Assessment history tracking
#### Resources Tab
- Mental health resources and information
- Crisis hotlines and emergency contacts
- Self-help tools and techniques
- Educational materials
#### About Tab
- Application information and usage guide
- Privacy policy and data handling
- Contact information and support
### Monitoring and Logs
Access logs and metrics through the Hugging Face Spaces interface:
- Build Logs: Installation and setup information
- Application Logs: Runtime logs and error messages
- Usage Analytics: Space visits and user interactions
### Troubleshooting

#### Common Issues

- **Build Failures**
  - Check that all required secrets are set correctly
  - Verify API keys are valid and have sufficient quotas
  - Review build logs for missing dependencies
## Additional Resources

### Documentation

- **API Documentation**: Available at the `/docs` endpoint when running
- **Agent Configuration**: See the `config/` directory for YAML configurations
- **Database Schema**: Check the `models/` directory for SQLAlchemy models
### Support
- GitHub Issues: For bug reports and feature requests
- Community Discussions: Join the Hugging Face Space comments
- Documentation Updates: Contributing to improve documentation is welcome
**Ready for Hugging Face Spaces!** 🚀

When deployed with the Gradio SDK, the application runs entirely through the Gradio interface and requires no Docker- or Render-specific configuration.
### Agent Configuration

Agent definitions live in YAML files under the `config/` directory, for example:

```yaml
emotion_detector:
  role: Emotion Detector
  goal: Analyze user input to determine their emotional state
  backstory: You are an empathetic AI skilled at identifying emotions

crisis_detector:
  role: Crisis Detector
  goal: Identify potential mental health emergencies
  backstory: You are trained to recognize signs of crisis
```
### RAG Configuration

Knowledge retrieval settings in `config/rag.yaml`:

```yaml
vector_store:
  chunk_size: 1000
  chunk_overlap: 200
retrieval:
  top_k: 5
  similarity_threshold: 0.7
```
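The `chunk_size`/`chunk_overlap` settings describe how source documents are split before indexing into the vector store. Here is a character-based sketch of that splitting; the real pipeline may split on tokens or sentences instead, so treat this only as an illustration of the two parameters.

```python
def chunk_text(text, chunk_size=1000, chunk_overlap=200):
    """Split `text` into overlapping chunks, mirroring the config above.

    Consecutive chunks share `chunk_overlap` characters so that facts
    straddling a chunk boundary are still retrievable from one chunk.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

At query time, `top_k` controls how many of these chunks are retrieved, and `similarity_threshold` discards matches whose vector similarity is too low.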
## 🤝 Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Support
For issues and questions:
- Create an issue on GitHub
- Check the troubleshooting section above
- Review the Hugging Face Spaces deployment logs
## Security
- All user data is encrypted in transit
- Passwords are hashed using bcrypt
- Session management with secure cookies
- API keys are stored as environment variables
- No sensitive data in logs or version control
**Disclaimer**: This application is designed to provide mental health support and information but is not a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of qualified healthcare providers for mental health concerns.