---
title: Jack Patel AI Assistant
emoji: 🦙
colorFrom: blue
colorTo: green
sdk: docker
app_port: 7860
app_file: main.py
license: mit
---
# 🤖 Jack Patel AI Assistant
A personalized AI assistant powered by a fine-tuned TinyLlama model, trained to answer questions about Jack Patel using custom data stored in `Data.json`.
## 🌟 Features
- **Personalized Responses**: Trained on 150+ question-answer pairs about Jack Patel (from `Data.json`)
- **Smart Fallback**: If a match isn't found in the training data, the model generates a response using TinyLlama
- **Modern UI**: clean, responsive HTML interface served by FastAPI
- **API Access**: RESTful API endpoints for programmatic access
- **Container-Ready**: Deployable instantly on Hugging Face Spaces via Docker or source files
## 🚀 How to Use
### 🖥️ Web Interface
Visit the Space and type any question to get a smart response about Jack Patel.
### 🔌 API Usage
Make a GET request to:
```
/api/generate?instruction=your_question
```
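For example, a question can be URL-encoded into the query string before sending (the base URL below is a placeholder, not the real Space address; `urlopen(url)` or any HTTP client would then fetch the response):

```python
from urllib.parse import urlencode

# Hypothetical Space URL; replace with your deployment's address
BASE_URL = "https://example-space.hf.space"

# Encode the question so spaces and punctuation survive the query string
params = urlencode({"instruction": "What are your technical skills?"})
url = f"{BASE_URL}/api/generate?{params}"
print(url)
```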
### ✅ Health Check
Use:
```
/health
```
to confirm the app is running correctly.
## πŸ“ Example Questions
- "What is your name?"
- "What is your father's name?"
- "Which school did you attend?"
- "What are your technical skills?"
- "Tell me about your hobbies"
## 🛠️ Technical Details
- **Base Model**: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- **Fine-Tuning**: LoRA adapters trained on the personal Q&A pairs in `Data.json`
- **Framework**: FastAPI (backend), Jinja2 (templates)
- **Interface**: HTML/CSS in `templates/index.html`
- **Model Inference**: PyTorch + Transformers
## πŸ“ File Structure
```
├── main.py              # FastAPI application logic
├── requirements.txt     # Dependencies
├── Dockerfile           # Build config for Hugging Face Space
├── Data.json            # Personalized Q&A training data
├── templates/
│   └── index.html       # Web-based UI
├── lora_model/          # LoRA fine-tuned TinyLlama model
└── static/              # Optional static files (CSS, JS)
```
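The exact schema of `Data.json` is not reproduced here; one plausible layout (an assumption for illustration, not the actual file) is a list of instruction/response pairs:

```json
[
  {
    "instruction": "What is your name?",
    "response": "My name is Jack Patel."
  },
  {
    "instruction": "What are your technical skills?",
    "response": "Python, FastAPI, and machine learning."
  }
]
```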
## 📊 Model Behavior
- **Uses Data.json**: Loads Q&A pairs at runtime for exact matching
- **Model-Generated Responses**: For non-exact matches, the TinyLlama model generates contextual answers
- **Fallback Intelligence**: Matches similar or rephrased queries using embedding similarity
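The lookup-then-fallback flow above can be sketched roughly as follows. This is a minimal illustration, not the actual `main.py` code: `difflib` string similarity stands in for the embedding similarity the app uses, and the function names and `Data.json` schema are assumptions.

```python
import json
from difflib import SequenceMatcher


def load_pairs(path="Data.json"):
    """Load Q&A pairs; assumes a list of {"instruction": ..., "response": ...} objects."""
    with open(path) as f:
        return json.load(f)


def answer(question, pairs, threshold=0.8):
    q = question.strip().lower()
    # 1) Exact match against the stored instructions
    for pair in pairs:
        if pair["instruction"].strip().lower() == q:
            return pair["response"]
    # 2) Fuzzy match for rephrased queries (stand-in for embedding similarity)
    def score(p):
        return SequenceMatcher(None, q, p["instruction"].lower()).ratio()
    best = max(pairs, key=score)
    if score(best) >= threshold:
        return best["response"]
    # 3) No close match: fall through to TinyLlama generation (not shown here)
    return None
```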
## 🔧 Setup Instructions
### 🧪 Local Development
```bash
pip install -r requirements.txt
python main.py
```
### ☁️ Hugging Face Spaces
1. Upload all project files including:
* `main.py`
* `Data.json`
* `lora_model/` folder
* `requirements.txt`
* `templates/index.html`
2. Hugging Face will automatically detect and run your app.
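The project's `Dockerfile` is not reproduced here; for a FastAPI Space it might look something like this (a sketch assuming the app listens on Spaces' default port 7860 and that `python main.py` starts the server, as in the local-development step above):

```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Hugging Face Spaces routes traffic to port 7860 by default
EXPOSE 7860
CMD ["python", "main.py"]
```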
## 🔒 Privacy & Security
* No data is stored or logged
* Responses are generated locally in-memory
* No external API calls
* Entirely contained in the Hugging Face Space runtime
## 🤝 Contributing
Want to improve this AI assistant? You can:
* Add new question-answer pairs to `Data.json`
* Enhance the UI
* Optimize performance and latency
* Add advanced features (like semantic search)
## 📄 License
This project is licensed under the MIT License.
**Built with ❤️ using FastAPI, PyTorch, and Hugging Face Transformers**