# HNTAI Medical Data Extraction - Refactored System

## Overview

This project has been completely refactored to provide a unified, flexible model management system that supports **any model name and type**, including GGUF models for patient summary generation. The system now offers dynamic model loading, runtime model switching, and robust fallback mechanisms.

## 🚀 Key Features

### ✨ **Universal Model Support**
- **Any Model Name**: Use any Hugging Face model, local model, or custom model
- **Any Model Type**: Support for text-generation, summarization, NER, GGUF, OpenVINO, and more
- **Automatic Type Detection**: The system automatically detects model types from the model name (see the sketch after this list)
- **Dynamic Loading**: Load models at runtime without restarting the application
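
The detection is name-based. A minimal sketch of how such a heuristic could look (a hypothetical helper for illustration, not the project's actual implementation):

```python
def guess_model_type(model_name: str) -> str:
    """Heuristically map a model name to a model type (illustrative sketch)."""
    name = model_name.lower()
    if name.endswith(".gguf") or "gguf" in name:
        return "gguf"
    if "summarization" in name or "summary" in name:
        return "summarization"
    if "ner" in name:
        return "ner"
    return "text-generation"  # default for general-purpose generation models

# guess_model_type("microsoft/Phi-3-mini-4k-instruct-gguf") -> "gguf"
# guess_model_type("dslim/bert-base-NER")                   -> "ner"
```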

### 🔄 **GGUF Model Integration**
- **Seamless GGUF Support**: Full integration with llama.cpp for GGUF models
- **Patient Summary Generation**: Optimized for medical text summarization
- **Memory Efficient**: Ultra-conservative settings for Hugging Face Spaces
- **Fallback Mechanisms**: Automatic fallback when GGUF models fail

### 🧠 **Unified Model Manager**
- **Single Interface**: One manager handles all model types
- **Smart Caching**: Intelligent model caching with memory management
- **Fallback Chains**: Multiple fallback options for robustness
- **Performance Monitoring**: Built-in timing and memory tracking

## 🏗️ Architecture

### Core Components

1. **`UnifiedModelManager`** - Central model management system
2. **`BaseModelLoader`** - Abstract interface for all model loaders (sketched after this list)
3. **`TransformersModelLoader`** - Hugging Face Transformers models
4. **`GGUFModelLoader`** - GGUF models via llama.cpp
5. **`OpenVINOModelLoader`** - OpenVINO optimized models
6. **`PatientSummarizerAgent`** - Enhanced patient summary generation
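
A minimal sketch of the `BaseModelLoader` contract the concrete loaders share (only `load` and `generate` appear elsewhere in this README; anything beyond that is an assumption):

```python
from abc import ABC, abstractmethod

class BaseModelLoader(ABC):
    """Abstract interface implemented by every concrete model loader (sketch)."""

    @abstractmethod
    def load(self):
        """Load the underlying model plus any tokenizer or runtime it needs."""

    @abstractmethod
    def generate(self, prompt: str, **kwargs) -> str:
        """Run inference on the prompt and return the generated text."""
```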

### Model Type Support

| Model Type | Description | Example Models |
|------------|-------------|----------------|
| `text-generation` | Causal language models | `facebook/bart-base`, `microsoft/DialoGPT-medium` |
| `summarization` | Text summarization models | `Falconsai/medical_summarization`, `facebook/bart-large-cnn` |
| `ner` | Named Entity Recognition | `dslim/bert-base-NER`, `Jean-Baptiste/roberta-large-ner-english` |
| `gguf` | GGUF format models | `microsoft/Phi-3-mini-4k-instruct-gguf` |
| `openvino` | OpenVINO optimized models | `microsoft/Phi-3-mini-4k-instruct` |

## 🚀 Quick Start

### 1. Basic Usage

```python
from ai_med_extract.utils.model_manager import model_manager

# Load any model dynamically
loader = model_manager.get_model_loader(
    model_name="microsoft/Phi-3-mini-4k-instruct-gguf",
    model_type="gguf",
    filename="Phi-3-mini-4k-instruct-q4.gguf"
)

# Generate text
result = loader.generate("Generate a medical summary for...")
```

### 2. Patient Summary Generation

```python
from ai_med_extract.agents.patient_summary_agent import PatientSummarizerAgent

# Create agent with any model
agent = PatientSummarizerAgent(
    model_name="microsoft/Phi-3-mini-4k-instruct-gguf",
    model_type="gguf"
)

# Generate clinical summary
summary = agent.generate_clinical_summary(patient_data)
```

### 3. Runtime Model Switching

```python
# Switch models at runtime
agent.update_model(
    model_name="Falconsai/medical_summarization",
    model_type="summarization"
)
```

## 📡 API Endpoints

### Model Management API

#### Load Model
```http
POST /api/models/load
Content-Type: application/json

{
    "model_name": "microsoft/Phi-3-mini-4k-instruct-gguf",
    "model_type": "gguf",
    "filename": "Phi-3-mini-4k-instruct-q4.gguf",
    "force_reload": false
}
```

#### Generate Text
```http
POST /api/models/generate
Content-Type: application/json

{
    "model_name": "microsoft/Phi-3-mini-4k-instruct-gguf",
    "model_type": "gguf",
    "prompt": "Generate a medical summary for...",
    "max_tokens": 512,
    "temperature": 0.7
}
```

#### Switch Agent Model
```http
POST /api/models/switch
Content-Type: application/json

{
    "agent_name": "patient_summarizer",
    "model_name": "microsoft/Phi-3-mini-4k-instruct-gguf",
    "model_type": "gguf"
}
```

#### Get Model Information
```http
GET /api/models/info?model_name=microsoft/Phi-3-mini-4k-instruct-gguf
```

#### Health Check
```http
GET /api/models/health
```

### Patient Summary API

#### Generate Patient Summary
```http
POST /generate_patient_summary
Content-Type: application/json

{
    "patientid": "12345",
    "token": "your_token",
    "key": "your_api_key",
    "patient_summarizer_model_name": "microsoft/Phi-3-mini-4k-instruct-gguf",
    "patient_summarizer_model_type": "gguf"
}
```

## 🔧 Configuration

### Environment Variables

```bash
# Cache directories
HF_HOME=/tmp/huggingface
XDG_CACHE_HOME=/tmp
TORCH_HOME=/tmp/torch
WHISPER_CACHE=/tmp/whisper

# GGUF optimization
GGUF_N_THREADS=2
GGUF_N_BATCH=64
```
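
How the GGUF variables might be consumed at load time, as a rough illustration (assumes the GGUF loader wraps `llama_cpp.Llama`; the local path is hypothetical):

```python
import os
from llama_cpp import Llama

# Read the documented tuning knobs, falling back to the defaults shown above
n_threads = int(os.environ.get("GGUF_N_THREADS", "2"))
n_batch = int(os.environ.get("GGUF_N_BATCH", "64"))

llm = Llama(
    model_path="/tmp/models/Phi-3-mini-4k-instruct-q4.gguf",  # hypothetical path
    n_threads=n_threads,
    n_batch=n_batch,
)
```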

### Model Configuration

The system automatically uses optimized models for different environments (see the sketch after this list):

- **Local Development**: Full model capabilities
- **Hugging Face Spaces**: Memory-optimized models
- **Production**: Configurable based on resources
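
A rough sketch of environment-aware selection (the `SPACE_ID` check and the specific model choices are assumptions for illustration, not the project's exact defaults):

```python
import os

def pick_summarizer_model() -> tuple:
    """Return a (model_name, model_type) pair suited to the current environment (sketch)."""
    if os.environ.get("SPACE_ID"):  # Hugging Face Spaces sets this variable
        # Memory-optimized choice for constrained Spaces hardware
        return ("Falconsai/medical_summarization", "summarization")
    # Full-capability default for local development and production
    return ("microsoft/Phi-3-mini-4k-instruct-gguf", "gguf")
```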

## 🎯 Use Cases

### 1. **Medical Document Processing**
```python
# Extract medical data with any model
medical_data = model_manager.generate_text(
    model_name="facebook/bart-base",
    model_type="text-generation",
    prompt="Extract medical entities from: " + document_text
)
```

### 2. **Patient Summary Generation**
```python
# Use GGUF model for patient summaries
summary = model_manager.generate_text(
    model_name="microsoft/Phi-3-mini-4k-instruct-gguf",
    model_type="gguf",
    prompt=patient_data_prompt,
    max_tokens=512
)
```

### 3. **Dynamic Model Switching**
```python
# Switch between models based on task requirements
if task == "summarization":
    model_name = "Falconsai/medical_summarization"
    model_type = "summarization"
elif task == "extraction":
    model_name = "facebook/bart-base"
    model_type = "text-generation"

loader = model_manager.get_model_loader(model_name, model_type)
```

## 🔒 Memory Management

### Hugging Face Spaces Optimization

The system automatically detects Hugging Face Spaces and applies ultra-conservative memory settings (see the sketch after this list):

- **GGUF Models**: 1 thread, 16 batch size, 512 context
- **Transformers**: Float32 precision, minimal memory usage
- **Automatic Fallbacks**: Graceful degradation when memory is limited
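
As a hedged illustration of what those settings translate to (assuming the loaders wrap `llama_cpp.Llama` and `transformers.pipeline`; the path and model names are illustrative):

```python
import torch
from llama_cpp import Llama
from transformers import pipeline

# GGUF on Spaces: 1 thread, batch size 16, 512-token context
llm = Llama(
    model_path="/tmp/models/Phi-3-mini-4k-instruct-q4.gguf",  # hypothetical path
    n_threads=1,
    n_batch=16,
    n_ctx=512,
)

# Transformers on Spaces: float32 precision, minimal memory at load time
summarizer = pipeline(
    "summarization",
    model="Falconsai/medical_summarization",
    torch_dtype=torch.float32,
)
```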

### Memory Monitoring

```python
import requests

# Check memory usage (the base URL is an example; point it at your running instance)
health = requests.get("http://localhost:5000/api/models/health").json()
print(f"GPU Memory: {health['gpu_info']['memory_allocated']}")
print(f"Loaded Models: {health['loaded_models_count']}")
```

## 🧪 Testing

### Test GGUF Models

```bash
# Test GGUF model loading
python test_gguf.py

# Test specific model
python -c "
from ai_med_extract.utils.model_manager import model_manager
loader = model_manager.get_model_loader('microsoft/Phi-3-mini-4k-instruct-gguf', 'gguf')
result = loader.generate('Test prompt')
print(f'Success: {len(result)} characters generated')
"
```

### Model Validation

```python
from ai_med_extract.utils.model_config import validate_model_config

# Validate model configuration
validation = validate_model_config(
    model_name="microsoft/Phi-3-mini-4k-instruct-gguf",
    model_type="gguf"
)

print(f"Valid: {validation['valid']}")
print(f"Warnings: {validation['warnings']}")
```

## 🚨 Error Handling

### Fallback Mechanisms

1. **Primary Model**: Attempts to load the specified model
2. **Fallback Model**: Uses predefined fallback for the model type
3. **Text Fallback**: Generates structured text responses
4. **Graceful Degradation**: Continues operation with reduced functionality
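
A minimal sketch of such a chain built on the manager API shown earlier (the helper and its fallback table are hypothetical, not the project's actual defaults):

```python
from ai_med_extract.utils.model_manager import model_manager

# Illustrative per-type fallbacks
FALLBACKS = {
    "gguf": ("Falconsai/medical_summarization", "summarization"),
    "summarization": ("facebook/bart-large-cnn", "summarization"),
}

def generate_with_fallback(model_name: str, model_type: str, prompt: str) -> str:
    """Try the requested model, then its fallback, then a structured text response."""
    candidates = [(model_name, model_type)]
    if model_type in FALLBACKS:
        candidates.append(FALLBACKS[model_type])
    for name, mtype in candidates:
        try:
            loader = model_manager.get_model_loader(name, mtype)
            return loader.generate(prompt)
        except Exception as exc:  # loading or generation failed; move down the chain
            print(f"{name} failed ({exc}); trying next option")
    # Last resort: a structured text response so the caller still gets output
    return "SUMMARY UNAVAILABLE: all configured models failed to generate a response."
```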

### Common Issues

#### GGUF Model Loading Fails
```python
import os
from huggingface_hub import hf_hub_download

# Check that the GGUF file exists locally; download it from Hugging Face if not
if not os.path.exists(model_path):
    model_path = hf_hub_download(repo_id, filename)
```

#### Memory Issues
```python
import torch
from ai_med_extract.utils.model_manager import model_manager

# Clear the model cache and free any allocated GPU memory
model_manager.clear_cache()
torch.cuda.empty_cache()

# Retry with a smaller model
loader = model_manager.get_model_loader(
    model_name="facebook/bart-base",  # smaller footprint
    model_type="text-generation"
)
```

## 📊 Performance

### Benchmarking

```python
import time

from ai_med_extract.utils.model_manager import model_manager

# model_name, model_type, and prompt are whatever you want to benchmark

# Time model loading
start = time.time()
loader = model_manager.get_model_loader(model_name, model_type)
load_time = time.time() - start

# Time generation
start = time.time()
result = loader.generate(prompt)
gen_time = time.time() - start

print(f"Load: {load_time:.2f}s, Generate: {gen_time:.2f}s")
```

### Optimization Tips

1. **Use Appropriate Model Size**: Smaller models for limited resources
2. **Enable Caching**: Models are cached after first load
3. **Batch Processing**: Process multiple requests together (see the sketch after this list)
4. **Memory Monitoring**: Regular health checks
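
For tips 2 and 3, the simplest pattern is to fetch a loader once (it is cached after the first load) and reuse it across a batch of prompts (prompts here are illustrative):

```python
from ai_med_extract.utils.model_manager import model_manager

# Loaded once, then served from the cache on subsequent calls
loader = model_manager.get_model_loader("Falconsai/medical_summarization", "summarization")

prompts = [
    "Summarize encounter A ...",  # illustrative
    "Summarize encounter B ...",
]
results = [loader.generate(p) for p in prompts]
```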

## 🔮 Future Enhancements

### Planned Features

- **Model Quantization**: Automatic model optimization
- **Distributed Loading**: Load models across multiple devices
- **Model Versioning**: Track and manage model versions
- **Performance Analytics**: Detailed performance metrics
- **Auto-scaling**: Automatic model scaling based on load

### Extensibility

The system is designed for easy extension:

```python
class CustomModelLoader(BaseModelLoader):
    def __init__(self, model_name: str):
        self.model_name = model_name
    
    def load(self):
        # Custom loading logic
        pass
    
    def generate(self, prompt: str, **kwargs):
        # Custom generation logic
        pass
```

## 📝 Migration Guide

### From Old System

1. **Replace Hardcoded Models**:
   ```python
   # Old
   model = LazyModelLoader("facebook/bart-base", "text-generation")
   
   # New
   model = model_manager.get_model_loader("facebook/bart-base", "text-generation")
   ```

2. **Update Patient Summarizer**:
   ```python
   # Old
   agent = PatientSummarizerAgent()
   
   # New
   agent = PatientSummarizerAgent(
       model_name="microsoft/Phi-3-mini-4k-instruct-gguf",
       model_type="gguf"
   )
   ```

3. **Use Dynamic Model Selection**:
   ```python
   # Old: Fixed model types
   # New: Dynamic model selection
   model_type = request.form.get("model_type", "text-generation")
   model_name = request.form.get("model_name", "facebook/bart-base")
   ```

## 🤝 Contributing

### Development Setup

```bash
# Clone repository
git clone <repository-url>
cd HNTAI

# Install dependencies
pip install -r requirements.txt

# Run tests
python -m pytest tests/

# Start development server
python -m ai_med_extract.app
```

### Adding New Model Types

1. **Create Loader Class**:
   ```python
   class CustomModelLoader(BaseModelLoader):
       # Implement required methods
       pass
   ```

2. **Update Model Manager**:
   ```python
   if model_type == "custom":
       loader = CustomModelLoader(model_name)
   ```

3. **Add Configuration**:
   ```python
   DEFAULT_MODELS["custom"] = {
       "primary": "default/custom-model",
       "fallback": "fallback/custom-model"
   }
   ```

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🆘 Support

### Getting Help

- **Documentation**: This README and inline code comments
- **Issues**: GitHub Issues for bug reports
- **Discussions**: GitHub Discussions for questions
- **Examples**: See `test_gguf.py` and other test files

### Common Questions

**Q: Can I use my own GGUF model?**
A: Yes! Just provide the path to your .gguf file or upload it to Hugging Face.
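
For example (assuming, per the answer above, that the manager accepts a local `.gguf` path as the model name):

```python
from ai_med_extract.utils.model_manager import model_manager

# Point the manager at a local GGUF file (path is illustrative)
loader = model_manager.get_model_loader(
    model_name="/path/to/your/model.gguf",
    model_type="gguf",
)
```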

**Q: How do I optimize for memory?**
A: Use smaller models, enable caching, and monitor memory usage via `/api/models/health`.

**Q: Can I switch models without restarting?**
A: Yes! Use the `/api/models/switch` endpoint to change models at runtime.

**Q: What if a model fails to load?**
A: The system automatically falls back to alternative models and provides detailed error information.

---

**🎉 Congratulations!** You now have a powerful, flexible system that can work with any model name and type, including GGUF models for patient summary generation. The system is designed to be robust, efficient, and easy to use while maintaining backward compatibility.