---
title: SmolLM2 Backend
emoji: 📊
colorFrom: yellow
colorTo: red
sdk: docker
pinned: false
license: apache-2.0
short_description: Backend of SmolLM2 chat
app_port: 7860
---

# SmolLM2 Backend

This project implements a FastAPI service that uses LangChain and LangGraph to generate text with the Qwen2.5-72B-Instruct model from HuggingFace.
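
As a rough orientation only, a minimal version of this stack might look like the sketch below. The classes from `langchain_huggingface` and `langgraph` are real, but the overall structure, model wiring, and endpoint body are assumptions for illustration; the actual `app.py` may differ.

```python
# Hypothetical outline of app.py; names and structure are assumptions.
import os
import uuid

from fastapi import FastAPI
from pydantic import BaseModel
from langchain_core.messages import HumanMessage
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

# LLM served through the HuggingFace Inference API
llm = HuggingFaceEndpoint(
    repo_id="Qwen/Qwen2.5-72B-Instruct",
    huggingfacehub_api_token=os.getenv("HUGGINGFACE_TOKEN") or os.getenv("HF_TOKEN"),
)
chat_model = ChatHuggingFace(llm=llm)

def call_model(state: MessagesState):
    # Single-node graph: send the conversation to the model, append its reply
    return {"messages": [chat_model.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("model", call_model)
builder.add_edge(START, "model")
graph = builder.compile(checkpointer=MemorySaver())  # per-thread conversation memory

class GenerateRequest(BaseModel):
    query: str
    thread_id: str | None = None

app = FastAPI()

@app.post("/generate")
def generate(req: GenerateRequest):
    thread_id = req.thread_id or str(uuid.uuid4())
    result = graph.invoke(
        {"messages": [HumanMessage(content=req.query)]},
        config={"configurable": {"thread_id": thread_id}},
    )
    return {"generated_text": result["messages"][-1].content, "thread_id": thread_id}
```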

## Configuration

### In HuggingFace Spaces

This project is designed to run in HuggingFace Spaces. To configure it:

1. Create a new Space on HuggingFace with the Docker SDK
2. Configure the `HUGGINGFACE_TOKEN` or `HF_TOKEN` environment variable in the Space configuration:
   - Go to the "Settings" tab of your Space
   - Scroll down to the "Repository secrets" section
   - Add a new secret named `HUGGINGFACE_TOKEN` with your token as its value
   - Save the changes

### Local development

For local development:

1. Clone this repository
2. Create a `.env` file in the project root with your HuggingFace token (a sketch of how the app can load it follows this list):
   ```
   HUGGINGFACE_TOKEN=your_token_here
   ```
3. Install the dependencies:
   ```
   pip install -r requirements.txt
   ```
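
As a minimal sketch of the token loading referenced in step 2 (assuming `python-dotenv` is among the dependencies in `requirements.txt`):

```python
# Illustrative only: load the token from .env for local runs.
import os

from dotenv import load_dotenv

load_dotenv()  # picks up HUGGINGFACE_TOKEN from the project-root .env, if present
token = os.getenv("HUGGINGFACE_TOKEN") or os.getenv("HF_TOKEN")
if token is None:
    raise RuntimeError("Set HUGGINGFACE_TOKEN or HF_TOKEN before starting the API")
```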

## Local execution

```bash
uvicorn app:app --reload --port 7860
```

The API will be available at `http://localhost:7860`.

## Endpoints

### GET `/`

Welcome endpoint that returns a greeting message.

### POST `/generate`

Endpoint to generate text using the language model.

**Request parameters:**
```json
{
  "query": "Your question here",
  "thread_id": "optional_thread_identifier"
}
```

**Response:**
```json
{
  "generated_text": "Generated text by the model",
  "thread_id": "thread identifier"
}
```
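
For example, a hypothetical client call using `requests` (the query and thread id below are placeholders):

```python
# Example request against a locally running instance on port 7860.
import requests

response = requests.post(
    "http://localhost:7860/generate",
    json={"query": "What is LangGraph?", "thread_id": "demo-thread"},
    timeout=120,
)
response.raise_for_status()
data = response.json()
print(data["generated_text"])  # model output
print(data["thread_id"])       # reuse this id to continue the same conversation
```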

## Docker

To run the application in a Docker container:

```bash
# Build the image
docker build -t smollm2-backend .

# Run the container
docker run -p 7860:7860 --env-file .env smollm2-backend
```

## API documentation

The interactive API documentation is available at:
- Swagger UI: `http://localhost:7860/docs`
- ReDoc: `http://localhost:7860/redoc`