---
title: Jack Patel AI Assistant
emoji: 🦙
colorFrom: blue
colorTo: green
sdk: docker
app_file: main.py
license: mit
---
# 🤖 Jack Patel AI Assistant
A personalized AI assistant powered by a fine-tuned TinyLlama model, trained to answer questions about Jack Patel using custom data stored in `Data.json`.
## 🚀 Features
- **Personalized Responses**: Trained on 150+ question-answer pairs about Jack Patel (from `Data.json`)
- **Smart Fallback**: If a match isn't found in the training data, the model generates a response using TinyLlama
- **Modern UI**: FastAPI-powered UI using a clean and responsive HTML form
- **API Access**: RESTful API endpoints for programmatic access
- **Container-Ready**: Deployable instantly on Hugging Face Spaces via Docker or source files
## 📖 How to Use
### 🖥️ Web Interface
Visit the Space and type any question to get a response about Jack Patel.
### 🔌 API Usage
Make a GET request to:
```
/api/generate?instruction=your_question
```
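For instance, a client can build and call this endpoint from Python. This is a minimal sketch: the localhost base URL is an assumption for a local run, so substitute your Space's URL.

```python
from urllib.parse import urlencode

def build_query_url(base_url: str, question: str) -> str:
    """Build the /api/generate URL with a URL-encoded question."""
    return f"{base_url}/api/generate?{urlencode({'instruction': question})}"

# Base URL is an assumption (local run); replace with your Space's URL.
url = build_query_url("http://localhost:7860", "What is your name?")
print(url)  # http://localhost:7860/api/generate?instruction=What+is+your+name%3F
```

The resulting URL can then be fetched with `urllib.request.urlopen` or any other HTTP client.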
### ✅ Health Check
Use:
```
/health
```
to confirm the app is running correctly.
## 💬 Example Questions
- "What is your name?"
- "What is your father's name?"
- "Which school did you attend?"
- "What are your technical skills?"
- "Tell me about your hobbies"
## 🛠️ Technical Details
- **Base Model**: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
- **Fine-Tuning**: LoRA adapters trained on the personal Q&A pairs in `Data.json`
- **Framework**: FastAPI (backend), Jinja2 (templates)
- **Interface**: HTML/CSS in `templates/index.html`
- **Model Inference**: PyTorch + Transformers
## 📁 File Structure
```
├── main.py            # FastAPI application logic
├── requirements.txt   # Dependencies
├── Dockerfile         # Build config for Hugging Face Space
├── Data.json          # Personalized Q&A training data
├── templates/
│   └── index.html     # Web-based UI
├── lora_model/        # LoRA fine-tuned TinyLlama model
└── static/            # Optional static files (CSS, JS)
```
## 🧠 Model Behavior
- **Uses `Data.json`**: Loads Q&A pairs at runtime for exact matching
- **Fallback Intelligence**: Matches rephrased or similar queries using embedding similarity
- **Model-Generated Responses**: When no match is found, the TinyLlama model generates a contextual answer
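The lookup-then-fallback flow above can be sketched roughly as follows. This is a simplified illustration, not the app's actual code: `difflib` stands in for the embedding similarity the app uses, and the `{"instruction": ..., "output": ...}` schema for `Data.json` is an assumption.

```python
# Sketch of the "exact match -> similarity -> generate" flow described above.
from difflib import SequenceMatcher

def find_answer(question, pairs, threshold=0.8):
    q = question.strip().lower()
    # 1. Exact match against the stored Q&A pairs
    for p in pairs:
        if p["instruction"].strip().lower() == q:
            return p["output"]
    # 2. Fuzzy fallback for rephrased questions (stand-in for embeddings)
    best_answer, best_score = None, 0.0
    for p in pairs:
        score = SequenceMatcher(None, q, p["instruction"].lower()).ratio()
        if score > best_score:
            best_answer, best_score = p["output"], score
    if best_score >= threshold:
        return best_answer
    return None  # 3. Caller falls back to TinyLlama generation

pairs = [{"instruction": "What is your name?", "output": "My name is Jack Patel."}]
print(find_answer("what is your name", pairs))  # My name is Jack Patel.
```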
## 🔧 Setup Instructions
### 🧪 Local Development
```bash
pip install -r requirements.txt
python main.py
```
### ☁️ Hugging Face Spaces
1. Upload all project files including:
* `main.py`
* `Data.json`
* `lora_model/` folder
* `requirements.txt`
* `templates/index.html`
2. Hugging Face will automatically detect and run your app.
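For reference, a `Dockerfile` for a FastAPI Space typically looks something like the sketch below. This is not necessarily the project's actual file; it assumes `main.py` exposes a FastAPI object named `app`, and that Docker Spaces serve on port 7860.

```dockerfile
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 7860
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
```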
## 🔒 Privacy & Security
* No data is stored or logged
* Responses are generated locally in-memory
* No external API calls
* Entirely contained in the Hugging Face Space runtime
## 🤝 Contributing
Want to improve this AI assistant? You can:
* Add new question-answer pairs to `Data.json`
* Enhance the UI
* Optimize performance and latency
* Add advanced features (like semantic search)
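When adding new pairs to `Data.json`, an entry might look like the fragment below. The exact field names are an assumption about the file's schema (`instruction`/`output` is a common format for instruction-tuning data), so match whatever the existing entries use.

```json
[
  {
    "instruction": "What is your favorite programming language?",
    "output": "I enjoy working with Python the most."
  }
]
```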
## 📄 License
This project is licensed under the MIT License.
**Built with ❤️ using FastAPI, PyTorch, and Hugging Face Transformers**