---
title: OpenWebUI - Ollama Chat
emoji: 🤖
colorFrom: green
colorTo: blue
sdk: docker
app_port: 7860
---
# 🤖 OpenWebUI - Ollama Chat
A beautiful, modern chat interface for Ollama models, deployed as a Hugging Face Space. This Space provides a full-featured web UI that connects to your Ollama API Space for an interactive chat experience.
## 🌟 Features
- **Beautiful Chat Interface**: Modern, responsive design with gradient backgrounds and smooth animations
- **Model Selection**: Choose from any available Ollama model
- **Parameter Control**: Adjust temperature, max tokens, and other generation parameters
- **Real-time Chat**: Interactive chat experience with typing indicators
- **Mobile Responsive**: Works perfectly on desktop and mobile devices
- **Ollama Integration**: Seamlessly connects to your Ollama API Space
- **Health Monitoring**: Built-in health checks and status monitoring
## 🚀 Quick Start
### 1. Deploy to Hugging Face Spaces
1. Fork this repository or create a new Space
2. Upload these files to your Space
3. Set the following environment variables in your Space settings:
- `OLLAMA_API_URL`: URL to your Ollama Space (e.g., `https://your-ollama-space.hf.space`)
- `DEFAULT_MODEL`: Default model to use (e.g., `llama2`)
- `MAX_TOKENS`: Maximum tokens for generation (default: `2048`)
- `TEMPERATURE`: Default temperature (default: `0.7`)
### 2. Local Development
```bash
# Clone the repository
git clone <your-repo-url>
cd openwebui-space

# Install dependencies
pip install -r requirements.txt

# Set environment variables
export OLLAMA_API_URL=https://your-ollama-space.hf.space
export DEFAULT_MODEL=llama2

# Run the application
python app.py
```
## 🔧 Configuration
### Environment Variables
- `OLLAMA_API_URL`: **Required** - URL to your Ollama Space (e.g., `https://your-ollama-space.hf.space`)
- `DEFAULT_MODEL`: Default model to use (default: `llama2`)
- `MAX_TOKENS`: Maximum tokens for generation (default: `2048`)
- `TEMPERATURE`: Default temperature setting (default: `0.7`)
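For reference, here is a minimal sketch of how `app.py` might load these variables; the names and defaults come from this README, but the actual loading code in your app may differ:

```python
import os

# Configuration read from environment variables (defaults per this README).
OLLAMA_API_URL = os.environ.get("OLLAMA_API_URL")  # required, no default
DEFAULT_MODEL = os.environ.get("DEFAULT_MODEL", "llama2")
MAX_TOKENS = int(os.environ.get("MAX_TOKENS", "2048"))
TEMPERATURE = float(os.environ.get("TEMPERATURE", "0.7"))

if not OLLAMA_API_URL:
    raise RuntimeError(
        "OLLAMA_API_URL is required, e.g. https://your-ollama-space.hf.space"
    )
```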
### Prerequisites
Before using this OpenWebUI Space, you need:
1. **Ollama Space**: A deployed Ollama API Space (see the `ollama-space` directory)
2. **Ollama Instance**: A running Ollama instance that your Ollama Space can connect to
3. **Network Access**: The OpenWebUI Space must be able to reach your Ollama Space
## 📡 API Endpoints
### GET `/`
Main chat interface - the beautiful web UI.
### POST `/api/chat`
Chat API endpoint for programmatic access.
**Request Body:**
```json
{
  "message": "Hello, how are you?",
  "model": "llama2",
  "temperature": 0.7,
  "max_tokens": 2048
}
```
**Response:**
```json
{
  "status": "success",
  "response": "Hello! I'm doing well, thank you for asking...",
  "model": "llama2",
  "usage": {
    "prompt_tokens": 7,
    "completion_tokens": 15,
    "total_tokens": 22
  }
}
```
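A minimal Python client sketch using `requests`; the endpoint, request body, and response shape are taken from this README (replace the base URL with your own Space):

```python
import requests

# Base URL of your deployed OpenWebUI Space (placeholder; use your own).
BASE_URL = "https://your-openwebui-space.hf.space"

payload = {
    "message": "Hello, how are you?",
    "model": "llama2",
    "temperature": 0.7,
    "max_tokens": 2048,
}

resp = requests.post(f"{BASE_URL}/api/chat", json=payload, timeout=120)
resp.raise_for_status()
data = resp.json()

if data.get("status") == "success":
    print(data["response"])               # the generated text
    print(data["usage"]["total_tokens"])  # token accounting
```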
### GET `/api/models`
Get available models from the connected Ollama Space.
### GET `/health`
Health check endpoint that also checks the Ollama Space connection.
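To verify both endpoints from a script, something like the following works (a sketch; it assumes both endpoints return JSON, and the exact fields beyond those shown above are up to your deployment):

```python
import requests

BASE_URL = "https://your-openwebui-space.hf.space"  # placeholder

# List the models exposed by the connected Ollama Space.
models = requests.get(f"{BASE_URL}/api/models", timeout=30).json()
print(models)

# Check the Space itself and its connection to the Ollama Space.
health = requests.get(f"{BASE_URL}/health", timeout=30).json()
print(health)
```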
## 🔗 Integration with Ollama Space
This OpenWebUI Space is designed to work seamlessly with the Ollama Space:
1. **API Communication**: The OpenWebUI Space communicates with your Ollama Space via HTTP API calls
2. **Model Discovery**: Automatically discovers and lists available models from your Ollama Space
3. **Text Generation**: Sends generation requests to your Ollama Space and displays responses
4. **Health Monitoring**: Monitors the health of both the OpenWebUI Space and the Ollama Space
### Architecture
```
User → OpenWebUI Space → Ollama Space → Ollama Instance
```
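As a sketch of step 3 (text generation), the Flask app can forward a chat message to the Ollama Space. This assumes the Ollama Space exposes Ollama's standard `POST /api/generate` endpoint; the actual `app.py` may wrap it differently:

```python
import os
import requests

OLLAMA_API_URL = os.environ["OLLAMA_API_URL"]

def generate(message: str, model: str = "llama2",
             temperature: float = 0.7, max_tokens: int = 2048) -> str:
    """Forward a chat message to the Ollama Space and return the reply."""
    resp = requests.post(
        f"{OLLAMA_API_URL}/api/generate",
        json={
            "model": model,
            "prompt": message,
            "stream": False,  # one complete response instead of a token stream
            "options": {"temperature": temperature, "num_predict": max_tokens},
        },
        timeout=300,  # large models can be slow to respond
    )
    resp.raise_for_status()
    return resp.json()["response"]
```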
## 🎨 UI Features
### Chat Interface
- **Message Bubbles**: Distinct styling for user and AI messages
- **Avatars**: Visual indicators for message sender
- **Auto-scroll**: Automatically scrolls to new messages
- **Typing Indicators**: Shows when AI is generating a response
### Controls
- **Model Selection**: Dropdown to choose from available models
- **Temperature Slider**: Adjust creativity/randomness (0.0 - 2.0)
- **Max Tokens**: Set maximum response length
- **Real-time Updates**: Parameter changes are applied immediately
### Responsive Design
- **Mobile Optimized**: Touch-friendly interface for mobile devices
- **Adaptive Layout**: Automatically adjusts to different screen sizes
- **Modern CSS**: Uses CSS Grid and Flexbox for optimal layouts
## 🐳 Docker Support
The Space includes a Dockerfile for containerized deployment:
```bash
# Build the image
docker build -t openwebui-space .

# Run the container
docker run -p 7860:7860 \
  -e OLLAMA_API_URL=https://your-ollama-space.hf.space \
  -e DEFAULT_MODEL=llama2 \
  openwebui-space
```
## 🔒 Security Considerations
- **Public Access**: The Space is publicly accessible - consider adding authentication for production use
- **API Communication**: All communication with the Ollama Space is over HTTPS (when using HF Spaces)
- **Input Validation**: User inputs are validated before being sent to the Ollama Space
## 🚨 Troubleshooting
### Common Issues
1. **Cannot connect to Ollama Space**: Check the `OLLAMA_API_URL` environment variable
2. **No models available**: Ensure your Ollama Space is running and has models
3. **Generation errors**: Check the Ollama Space logs for detailed error messages
4. **Slow responses**: Large models may take time to generate responses
### Health Checks
Use the `/health` endpoint to monitor:
- OpenWebUI Space status
- Connection to Ollama Space
- Ollama Space health status
### Debug Mode
For local development, you can enable debug mode by setting `debug=True` in the Flask app.
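For example, in `app.py` (assuming the usual Flask entry point and the Space's port 7860; never enable debug mode in a public deployment):

```python
from flask import Flask

app = Flask(__name__)  # ... routes defined as in app.py ...

if __name__ == "__main__":
    # debug=True enables auto-reload and the interactive debugger; local use only.
    app.run(host="0.0.0.0", port=7860, debug=True)
```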
## 📱 Mobile Experience
The interface is fully optimized for mobile devices:
- Touch-friendly controls
- Responsive design that adapts to screen size
- Optimized for portrait and landscape orientations
- Fast loading and smooth scrolling
## 🎯 Use Cases
- **Personal AI Assistant**: Chat with your local models
- **Development Testing**: Test model responses and parameters
- **Demo and Showcase**: Present your Ollama setup to others
- **API Gateway**: Use the chat interface as a frontend for your Ollama API
## 📄 License
This project is open source and available under the MIT License.
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📞 Support
If you encounter any issues or have questions, please open an issue on the repository.
## 🔗 Related Projects
- **Ollama Space**: The backend API Space that this UI connects to
- **Ollama**: The local LLM runner that powers everything
- **Hugging Face Spaces**: The deployment platform for both Spaces