David Ko committed
Commit 8ed5ac1 · Parent(s): abb0a18
Copy files from hf-space to vision_llm_agent
- .gitignore +28 -0
- Dockerfile +37 -0
- README.md +110 -1
- api.py +1032 -0
- app.py +560 -0
- requirements.txt +33 -0
- static/asset-manifest.json +24 -0
- static/css/main.59c2a54e.chunk.css +2 -0
- static/css/main.59c2a54e.chunk.css.map +1 -0
- static/index.html +1 -0
- static/js/2.252de3c4.chunk.js +0 -0
- static/js/2.252de3c4.chunk.js.LICENSE.txt +58 -0
- static/js/2.252de3c4.chunk.js.map +0 -0
- static/js/3.9013e23f.chunk.js +2 -0
- static/js/3.9013e23f.chunk.js.map +1 -0
- static/js/main.ad7f086c.chunk.js +2 -0
- static/js/main.ad7f086c.chunk.js.map +1 -0
- static/js/runtime-main.25710301.js +2 -0
- static/js/runtime-main.25710301.js.map +1 -0
- static/manifest.json +15 -0
- static/precache-manifest.053b14ee2ebd7996a78e6e055f2144fe.js +30 -0
- static/service-worker.js +39 -0
.gitignore
ADDED
@@ -0,0 +1,28 @@
# Python
__pycache__/
*.py[cod]
*.egg-info/
*.egg

# Virtual environments
venv/
.venv/
ENV/

# OS files
.DS_Store
Thumbs.db

# Logs
*.log

# Node
node_modules/

# Build artifacts
build/
dist/

# Env files
.env
.env.*
Dockerfile
ADDED
@@ -0,0 +1,37 @@
FROM python:3.9-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    libgl1 \
    libglib2.0-0 \
    wget \
    && rm -rf /var/lib/apt/lists/*

# Create cache directory with proper permissions
RUN mkdir -p /.cache && chmod 777 /.cache

# Copy requirements first for better caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy backend code
COPY api.py .
COPY static/ static/

# Static files are already copied in the previous step
# No need to copy frontend build files separately

# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
ENV PORT=7860

# Expose the port Hugging Face Spaces expects
EXPOSE 7860

# Command to run the application
CMD ["python", "api.py"]
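Once the image is built and running (for example, `docker build` followed by `docker run -p 7860:7860` against the port exposed above), the `/api/status` route defined in api.py makes a quick smoke test. A minimal sketch, assuming the container is reachable on localhost port 7860:

```python
# Smoke test for a running container (sketch; host and port are assumptions
# based on the EXPOSE/PORT settings in the Dockerfile above).
import requests

status = requests.get("http://localhost:7860/api/status", timeout=10).json()
print(status)  # e.g. {"status": "online", "models": {...}, "device": "CPU"}
```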
README.md
CHANGED
@@ -8,4 +8,113 @@ pinned: false
license: gpl-3.0
---

# Vision LLM Agent - Object Detection with AI Assistant

A multi-model object detection and image classification demo with an LLM-based AI assistant for answering questions about detected objects. This project uses YOLOv8, DETR, and ViT models for vision tasks, and TinyLlama for natural language processing.

## Project Architecture

This project follows a phased development approach:

### Phase 0: PoC with Gradio (Original)
- Simple Gradio interface with multiple object detection models
- Uses Hugging Face's free tier for model hosting
- Easy to deploy to Hugging Face Spaces

### Phase 1: Service Separation (Implemented)
- Backend: Flask API with model inference endpoints
- REST API endpoints for model inference
- JSON responses with detection results and performance metrics

### Phase 2: UI Upgrade (Implemented)
- Modern React frontend with Material-UI components
- Improved user experience with responsive design
- Separate frontend and backend architecture

### Phase 3: CI/CD & Testing (Planned)
- GitHub Actions for automated testing and deployment
- Comprehensive test suite with pytest and ESLint
- Automatic rebuilds on Hugging Face Spaces

## How to Run

### Option 1: Original Gradio App
1. Install dependencies:
```bash
pip install -r requirements.txt
```

2. Run the Gradio app:
```bash
python app.py
```

3. Open your browser and go to the URL shown in the terminal (typically `http://127.0.0.1:7860`)

### Option 2: React Frontend with Flask Backend
1. Install backend dependencies:
```bash
pip install -r requirements.txt
```

2. Start the Flask backend server:
```bash
python api.py
```

3. In a separate terminal, navigate to the frontend directory:
```bash
cd frontend
```

4. Install frontend dependencies:
```bash
npm install
```

5. Start the React development server:
```bash
npm start
```

6. Open your browser and go to `http://localhost:3000`

## Models Used

- **YOLOv8**: Fast and accurate object detection
- **DETR**: DEtection TRansformer for object detection
- **ViT**: Vision Transformer for image classification
- **TinyLlama**: For natural language processing and question answering about detected objects

## API Endpoints

The Flask backend provides the following API endpoints:

- `GET /api/status` - Check the status of the API and available models
- `POST /api/detect/yolo` - Detect objects using YOLOv8
- `POST /api/detect/detr` - Detect objects using DETR
- `POST /api/classify/vit` - Classify images using ViT

All POST endpoints accept form data with an `image` field containing the image file.
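For example, a detection request can be issued from Python with `requests`. This is a minimal sketch, assuming the backend is running locally on port 7860; the file name `sample.jpg` is illustrative:

```python
# Minimal client sketch for the detection endpoints.
import requests

with open("sample.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:7860/api/detect/yolo",
        files={"image": f},  # all POST endpoints expect this form field
    )
resp.raise_for_status()
result = resp.json()
for det in result.get("detections", []):
    print(det["class"], det["confidence"], det["bbox"])
print("inference time:", result["performance"]["inference_time"], "s")
```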
## Deployment

### Gradio App
The Gradio app is designed to be easily deployed to Hugging Face Spaces:

1. Create a new Space on Hugging Face
2. Select Gradio as the SDK
3. Push this repository to the Space's git repository
4. The app will automatically deploy

### React + Flask App
For the React + Flask version, you'll need to:

1. Build the React frontend:
```bash
cd frontend
npm run build
```

2. Serve the static files from a web server or cloud hosting service
3. Deploy the Flask backend to a server that supports Python
api.py
ADDED
@@ -0,0 +1,1032 @@
# -*- coding: utf-8 -*-
# Set matplotlib config directory to avoid permission issues
import os
os.environ['MPLCONFIGDIR'] = '/tmp/matplotlib'

from flask import Flask, request, jsonify, send_from_directory
import torch
from PIL import Image
import numpy as np
import io
from io import BytesIO
import base64
import uuid
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import time
from flask_cors import CORS
import json
import sys

# Fix for SQLite3 version compatibility with ChromaDB
try:
    import pysqlite3
    sys.modules['sqlite3'] = pysqlite3
except ImportError:
    print("Warning: pysqlite3 not found, using built-in sqlite3")

import chromadb
from chromadb.utils import embedding_functions

app = Flask(__name__, static_folder='static')
CORS(app)  # Enable CORS for all routes

# Model initialization
print("Loading models... This may take a moment.")

# Image embedding model (CLIP) for vector search
clip_model = None
clip_processor = None
try:
    from transformers import CLIPProcessor, CLIPModel

    # Use a temporary directory for the model cache
    import tempfile
    temp_dir = tempfile.gettempdir()
    os.environ["TRANSFORMERS_CACHE"] = temp_dir

    # Load the CLIP model (for image embeddings)
    clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    print("CLIP model loaded successfully")
except Exception as e:
    print("Error loading CLIP model:", e)
    clip_model = None
    clip_processor = None

# Initialize the vector DB
vector_db = None
image_collection = None
object_collection = None
try:
    # Initialize the ChromaDB client (in-memory DB)
    vector_db = chromadb.Client()

    # Set up the embedding function
    ef = embedding_functions.DefaultEmbeddingFunction()

    # Create the image collection
    image_collection = vector_db.create_collection(
        name="image_collection",
        embedding_function=ef,
        get_or_create=True
    )

    # Create the collection for object detection results
    object_collection = vector_db.create_collection(
        name="object_collection",
        embedding_function=ef,
        get_or_create=True
    )

    print("Vector DB initialized successfully")
except Exception as e:
    print("Error initializing Vector DB:", e)
    vector_db = None
    image_collection = None
    object_collection = None

# YOLOv8 model
yolo_model = None
try:
    import os
    from ultralytics import YOLO

    # Model file path - use a temporary directory
    import tempfile
    temp_dir = tempfile.gettempdir()
    model_path = os.path.join(temp_dir, "yolov8n.pt")

    # Download the model file directly if it is not present
    if not os.path.exists(model_path):
        print(f"Downloading YOLOv8 model to {model_path}...")
        try:
            os.system(f"wget -q https://ultralytics.com/assets/yolov8n.pt -O {model_path}")
            print("YOLOv8 model downloaded successfully")
        except Exception as e:
            print(f"Error downloading YOLOv8 model: {e}")
            # Try an alternative URL if the download fails
            try:
                os.system(f"wget -q https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt -O {model_path}")
                print("YOLOv8 model downloaded from alternative source")
            except Exception as e2:
                print(f"Error downloading from alternative source: {e2}")
                # As a last resort, fetch the model URL directly with curl
                try:
                    os.system(f"curl -L https://ultralytics.com/assets/yolov8n.pt --output {model_path}")
                    print("YOLOv8 model downloaded using curl")
                except Exception as e3:
                    print(f"All download attempts failed: {e3}")

    # Set environment variables - point config files at the temp directory
    os.environ["YOLO_CONFIG_DIR"] = temp_dir
    os.environ["MPLCONFIGDIR"] = temp_dir

    yolo_model = YOLO(model_path)  # Using the nano model for faster inference
    print("YOLOv8 model loaded successfully")
except Exception as e:
    print("Error loading YOLOv8 model:", e)
    yolo_model = None

# DETR model (DEtection TRansformer)
detr_processor = None
detr_model = None
try:
    from transformers import DetrImageProcessor, DetrForObjectDetection

    detr_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
    detr_model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

    print("DETR model loaded successfully")
except Exception as e:
    print("Error loading DETR model:", e)
    detr_processor = None
    detr_model = None

# ViT model
vit_processor = None
vit_model = None
try:
    from transformers import ViTImageProcessor, ViTForImageClassification
    vit_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
    vit_model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
    print("ViT model loaded successfully")
except Exception as e:
    print("Error loading ViT model:", e)
    vit_processor = None
    vit_model = None

# Get device information
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# LLM model (using an open-access model instead of Llama 4, which requires authentication)
llm_model = None
llm_tokenizer = None
try:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    print("Loading LLM model... This may take a moment.")
    model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # Using TinyLlama as an open-access alternative

    llm_tokenizer = AutoTokenizer.from_pretrained(model_name)
    llm_model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        # Removing options that require the accelerate package
        # device_map="auto",
        # load_in_8bit=True
    ).to(device)
    print("LLM model loaded successfully")
except Exception as e:
    print(f"Error loading LLM model: {e}")
    llm_model = None
    llm_tokenizer = None

def process_llm_query(vision_results, user_query):
    """Process a query with the LLM model using vision results and user text"""
    if llm_model is None or llm_tokenizer is None:
        return {"error": "LLM model not available"}

    # Summarize the result data (to stay within the token length limit)
    summarized_results = []

    # Summarize object detection results
    if isinstance(vision_results, list):
        # Include at most 10 objects
        for i, obj in enumerate(vision_results[:10]):
            if isinstance(obj, dict):
                # Extract only the fields we need
                summary = {
                    "label": obj.get("label", "unknown"),
                    "confidence": obj.get("confidence", 0),
                }
                summarized_results.append(summary)

    # Create a prompt combining vision results and user query
    prompt = f"""You are an AI assistant analyzing image detection results.
Here are the objects detected in the image: {json.dumps(summarized_results, indent=2)}

User question: {user_query}

Please provide a detailed analysis based on the detected objects and the user's question.
"""

    # Tokenize and generate response
    try:
        start_time = time.time()

        # Check and limit the token length
        tokens = llm_tokenizer.encode(prompt)
        if len(tokens) > 1500:  # safety margin
            prompt = f"""You are an AI assistant analyzing image detection results.
The image contains {len(summarized_results)} detected objects.

User question: {user_query}

Please provide a general analysis based on the user's question.
"""

        inputs = llm_tokenizer(prompt, return_tensors="pt").to(device)
        with torch.no_grad():
            output = llm_model.generate(
                **inputs,
                max_new_tokens=512,
                temperature=0.7,
                top_p=0.9,
                do_sample=True
            )

        response_text = llm_tokenizer.decode(output[0], skip_special_tokens=True)

        # Remove the prompt from the response
        if response_text.startswith(prompt):
            response_text = response_text[len(prompt):].strip()

        inference_time = time.time() - start_time

        return {
            "response": response_text,
            "performance": {
                "inference_time": round(inference_time, 3),
                "device": "GPU" if torch.cuda.is_available() else "CPU"
            }
        }
    except Exception as e:
        return {"error": f"Error processing LLM query: {str(e)}"}

def image_to_base64(img):
    """Convert PIL Image to base64 string"""
    buffered = io.BytesIO()
    img.save(buffered, format="PNG")
    img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')
    return img_str

def process_yolo(image):
    if yolo_model is None:
        return {"error": "YOLOv8 model not loaded"}

    # Measure inference time
    start_time = time.time()

    # Convert to numpy if it's a PIL image
    if isinstance(image, Image.Image):
        image_np = np.array(image)
    else:
        image_np = image

    # Run inference
    results = yolo_model(image_np)

    # Process results
    result_image = results[0].plot()
    result_image = Image.fromarray(result_image)

    # Get detection information
    boxes = results[0].boxes
    class_names = results[0].names

    # Format detection results
    detections = []
    for box in boxes:
        class_id = int(box.cls[0].item())
        class_name = class_names[class_id]
        confidence = round(box.conf[0].item(), 2)
        bbox = box.xyxy[0].tolist()
        bbox = [round(x) for x in bbox]
        detections.append({
            "class": class_name,
            "confidence": confidence,
            "bbox": bbox
        })

    # Calculate inference time
    inference_time = time.time() - start_time

    # Add inference time and device info
    device_info = "GPU" if torch.cuda.is_available() else "CPU"

    return {
        "image": image_to_base64(result_image),
        "detections": detections,
        "performance": {
            "inference_time": round(inference_time, 3),
            "device": device_info
        }
    }

def process_detr(image):
    if detr_model is None or detr_processor is None:
        return {"error": "DETR model not loaded"}

    # Measure inference time
    start_time = time.time()

    # Prepare image for the model
    inputs = detr_processor(images=image, return_tensors="pt")

    # Run inference
    with torch.no_grad():
        outputs = detr_model(**inputs)

    # Process results
    target_sizes = torch.tensor([image.size[::-1]])
    results = detr_processor.post_process_object_detection(
        outputs, target_sizes=target_sizes, threshold=0.9
    )[0]

    # Create a copy of the image to draw on
    result_image = image.copy()
    fig, ax = plt.subplots(1)
    ax.imshow(result_image)

    # Format detection results
    detections = []
    for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
        box = [round(i) for i in box.tolist()]
        class_name = detr_model.config.id2label[label.item()]
        confidence = round(score.item(), 2)

        # Draw rectangle
        rect = Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1],
                         linewidth=2, edgecolor='r', facecolor='none')
        ax.add_patch(rect)

        # Add label
        plt.text(box[0], box[1], "{}: {}".format(class_name, confidence),
                 bbox=dict(facecolor='white', alpha=0.8))

        detections.append({
            "class": class_name,
            "confidence": confidence,
            "bbox": box
        })

    # Save figure to image
    buf = io.BytesIO()
    plt.tight_layout()
    plt.axis('off')
    plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0)
    buf.seek(0)
    result_image = Image.open(buf)
    plt.close(fig)

    # Calculate inference time
    inference_time = time.time() - start_time

    # Add inference time and device info
    device_info = "GPU" if torch.cuda.is_available() else "CPU"

    return {
        "image": image_to_base64(result_image),
        "detections": detections,
        "performance": {
            "inference_time": round(inference_time, 3),
            "device": device_info
        }
    }

def process_vit(image):
    if vit_model is None or vit_processor is None:
        return {"error": "ViT model not loaded"}

    # Measure inference time
    start_time = time.time()

    # Prepare image for the model
    inputs = vit_processor(images=image, return_tensors="pt")

    # Run inference
    with torch.no_grad():
        outputs = vit_model(**inputs)
        logits = outputs.logits

    # Get the predicted class
    predicted_class_idx = logits.argmax(-1).item()
    prediction = vit_model.config.id2label[predicted_class_idx]

    # Get top 5 predictions
    probs = torch.nn.functional.softmax(logits, dim=-1)[0]
    top5_prob, top5_indices = torch.topk(probs, 5)

    results = []
    for i, (prob, idx) in enumerate(zip(top5_prob, top5_indices)):
        class_name = vit_model.config.id2label[idx.item()]
        results.append({
            "rank": i+1,
            "class": class_name,
            "probability": round(prob.item(), 3)
        })

    # Calculate inference time
    inference_time = time.time() - start_time

    # Add inference time and device info
    device_info = "GPU" if torch.cuda.is_available() else "CPU"

    return {
        "top_predictions": results,
        "performance": {
            "inference_time": round(inference_time, 3),
            "device": device_info
        }
    }

@app.route('/api/detect/yolo', methods=['POST'])
def yolo_detect():
    if 'image' not in request.files:
        return jsonify({"error": "No image provided"}), 400

    file = request.files['image']
    image = Image.open(file.stream)

    result = process_yolo(image)
    return jsonify(result)

@app.route('/api/detect/detr', methods=['POST'])
def detr_detect():
    if 'image' not in request.files:
        return jsonify({"error": "No image provided"}), 400

    file = request.files['image']
    image = Image.open(file.stream)

    result = process_detr(image)
    return jsonify(result)

@app.route('/api/classify/vit', methods=['POST'])
def vit_classify():
    if 'image' not in request.files:
        return jsonify({"error": "No image provided"}), 400

    file = request.files['image']
    image = Image.open(file.stream)

    result = process_vit(image)
    return jsonify(result)

@app.route('/api/analyze', methods=['POST'])
def analyze_with_llm():
    # Check if required data is in the request
    if not request.json:
        return jsonify({"error": "No JSON data provided"}), 400

    # Extract vision results and user query from request
    data = request.json
    if 'visionResults' not in data or 'userQuery' not in data:
        return jsonify({"error": "Missing required fields: visionResults or userQuery"}), 400

    vision_results = data['visionResults']
    user_query = data['userQuery']

    # Process the query with LLM
    result = process_llm_query(vision_results, user_query)

    return jsonify(result)

def generate_image_embedding(image):
    """Generate an image embedding using the CLIP model"""
    if clip_model is None or clip_processor is None:
        return None

    try:
        # Preprocess the image
        inputs = clip_processor(images=image, return_tensors="pt")

        # Generate the image embedding
        with torch.no_grad():
            image_features = clip_model.get_image_features(**inputs)

        # Normalize the embedding and convert it to a numpy array
        image_embedding = image_features.squeeze().cpu().numpy()
        normalized_embedding = image_embedding / np.linalg.norm(image_embedding)

        return normalized_embedding.tolist()
    except Exception as e:
        print(f"Error generating image embedding: {e}")
        return None

@app.route('/api/similar-images', methods=['POST'])
def find_similar_images():
    """Similar-image search API"""
    if clip_model is None or clip_processor is None or image_collection is None:
        return jsonify({"error": "Image embedding model or vector DB not available"})

    try:
        # Extract the image data from the request
        if 'image' not in request.files and 'image' not in request.form:
            return jsonify({"error": "No image provided"})

        if 'image' in request.files:
            # Uploaded as a file
            image_file = request.files['image']
            image = Image.open(image_file).convert('RGB')
        else:
            # Encoded as base64
            image_data = request.form['image']
            if image_data.startswith('data:image'):
                # Remove the data URL prefix if present
                image_data = image_data.split(',')[1]
            image = Image.open(BytesIO(base64.b64decode(image_data))).convert('RGB')

        # Generate a (temporary) image ID
        image_id = str(uuid.uuid4())

        # Generate the image embedding
        embedding = generate_image_embedding(image)
        if embedding is None:
            return jsonify({"error": "Failed to generate image embedding"})

        # Optionally add the current image to the DB
        # image_collection.add(
        #     ids=[image_id],
        #     embeddings=[embedding]
        # )

        # Search for similar images
        results = image_collection.query(
            query_embeddings=[embedding],
            n_results=5  # return the top 5 results
        )

        # Format the results
        similar_images = []
        if len(results['ids'][0]) > 0:
            for i, img_id in enumerate(results['ids'][0]):
                similar_images.append({
                    "id": img_id,
                    "distance": float(results['distances'][0][i]) if 'distances' in results else 0.0,
                    "metadata": results['metadatas'][0][i] if 'metadatas' in results else {}
                })

        return jsonify({
            "query_image_id": image_id,
            "similar_images": similar_images
        })

    except Exception as e:
        print(f"Error in similar-images API: {e}")
        return jsonify({"error": str(e)}), 500

@app.route('/api/add-to-collection', methods=['POST'])
def add_to_collection():
    """API for adding an image to the vector DB"""
    if clip_model is None or clip_processor is None or image_collection is None:
        return jsonify({"error": "Image embedding model or vector DB not available"})

    try:
        # Extract the image data from the request
        if 'image' not in request.files and 'image' not in request.form:
            return jsonify({"error": "No image provided"})

        # Extract metadata
        metadata = {}
        if 'metadata' in request.form:
            metadata = json.loads(request.form['metadata'])

        # Image ID (auto-generated if not provided)
        image_id = request.form.get('id', str(uuid.uuid4()))

        if 'image' in request.files:
            # Uploaded as a file
            image_file = request.files['image']
            image = Image.open(image_file).convert('RGB')
        else:
            # Encoded as base64
            image_data = request.form['image']
            if image_data.startswith('data:image'):
                # Remove the data URL prefix if present
                image_data = image_data.split(',')[1]
            image = Image.open(BytesIO(base64.b64decode(image_data))).convert('RGB')

        # Generate the image embedding
        embedding = generate_image_embedding(image)
        if embedding is None:
            return jsonify({"error": "Failed to generate image embedding"})

        # Encode the image data as base64 and add it to the metadata
        buffered = BytesIO()
        image.save(buffered, format="JPEG")
        img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')
        metadata['image_data'] = img_str

        # Add the image to the DB
        image_collection.add(
            ids=[image_id],
            embeddings=[embedding],
            metadatas=[metadata]
        )

        return jsonify({
            "success": True,
            "image_id": image_id,
            "message": "Image added to collection"
        })

    except Exception as e:
        print(f"Error in add-to-collection API: {e}")
        return jsonify({"error": str(e)}), 500

@app.route('/api/add-detected-objects', methods=['POST'])
def add_detected_objects():
    """API for adding object detection results to the vector DB"""
    if clip_model is None or object_collection is None:
        return jsonify({"error": "Image embedding model or vector DB not available"})

    try:
        # Debugging: log the request
        print("[DEBUG] Received request in add-detected-objects")

        # Extract the image and object detection results from the request
        data = request.json
        print(f"[DEBUG] Request data keys: {list(data.keys()) if data else 'None'}")

        if not data:
            print("[DEBUG] Error: No data received in request")
            return jsonify({"error": "No data received"})

        if 'image' not in data:
            print("[DEBUG] Error: 'image' key not found in request data")
            return jsonify({"error": "Missing image data"})

        if 'objects' not in data:
            print("[DEBUG] Error: 'objects' key not found in request data")
            return jsonify({"error": "Missing objects data"})

        # Debug the image data
        print(f"[DEBUG] Image data type: {type(data['image'])}")
        print(f"[DEBUG] Image data starts with: {data['image'][:50]}...")  # print only the first 50 chars

        # Debug the object data
        print(f"[DEBUG] Objects data type: {type(data['objects'])}")
        print(f"[DEBUG] Objects count: {len(data['objects']) if isinstance(data['objects'], list) else 'Not a list'}")
        if isinstance(data['objects'], list) and len(data['objects']) > 0:
            print(f"[DEBUG] First object keys: {list(data['objects'][0].keys()) if isinstance(data['objects'][0], dict) else 'Not a dict'}")

        # Process the image data
        image_data = data['image']
        if image_data.startswith('data:image'):
            image_data = image_data.split(',')[1]

        image = Image.open(BytesIO(base64.b64decode(image_data))).convert('RGB')
        image_width, image_height = image.size

        # Image ID
        image_id = data.get('imageId', str(uuid.uuid4()))

        # Process the object data
        objects = data['objects']
        object_ids = []
        object_embeddings = []
        object_metadatas = []

        for obj in objects:
            # Generate an object ID
            object_id = f"{image_id}_{str(uuid.uuid4())[:8]}"

            # Extract bounding box information
            bbox = obj.get('bbox', [])

            # Handle a list-style bbox [x1, y1, x2, y2]
            if isinstance(bbox, list) and len(bbox) >= 4:
                x1 = bbox[0] / image_width  # convert to normalized coordinates
                y1 = bbox[1] / image_height
                x2 = bbox[2] / image_width
                y2 = bbox[3] / image_height
                width = x2 - x1
                height = y2 - y1
            # Handle a dict-style bbox {'x': x, 'y': y, 'width': width, 'height': height}
            elif isinstance(bbox, dict):
                x1 = bbox.get('x', 0)
                y1 = bbox.get('y', 0)
                width = bbox.get('width', 0)
                height = bbox.get('height', 0)
            else:
                # Fall back to defaults
                x1, y1, width, height = 0, 0, 0, 0

            # Convert the bounding box to image coordinates
            x1_px = int(x1 * image_width)
            y1_px = int(y1 * image_height)
            width_px = int(width * image_width)
            height_px = int(height * image_height)

            # Crop the object image
            try:
                object_image = image.crop((x1_px, y1_px, x1_px + width_px, y1_px + height_px))

                # Generate the embedding
                embedding = generate_image_embedding(object_image)
                if embedding is None:
                    continue

                # Build the metadata
                # Serialize bbox as a JSON string to work around ChromaDB metadata restrictions
                bbox_json = json.dumps({
                    "x": x1,
                    "y": y1,
                    "width": width,
                    "height": height
                })

                # Encode the object image as base64
                buffered = BytesIO()
                object_image.save(buffered, format="JPEG")
                img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')

                metadata = {
                    "image_id": image_id,
                    "class": obj.get('class', ''),
                    "confidence": obj.get('confidence', 0),
                    "bbox": bbox_json,  # stored as a JSON string
                    "image_data": img_str  # include the image data
                }

                object_ids.append(object_id)
                object_embeddings.append(embedding)
                object_metadatas.append(metadata)
            except Exception as e:
                print(f"Error processing object: {e}")
                continue

        # No valid objects
        if not object_ids:
            return jsonify({"error": "No valid objects to add"})

        # Debugging: print the metadata
        print(f"[DEBUG] Adding {len(object_ids)} objects to vector DB")
        print(f"[DEBUG] First metadata sample: {object_metadatas[0] if object_metadatas else 'None'}")

        try:
            # Add the objects to the DB
            object_collection.add(
                ids=object_ids,
                embeddings=object_embeddings,
                metadatas=object_metadatas
            )
            print("[DEBUG] Successfully added objects to vector DB")
        except Exception as e:
            print(f"[DEBUG] Error adding to vector DB: {e}")
            raise e

        return jsonify({
            "success": True,
            "image_id": image_id,
            "object_count": len(object_ids),
            "object_ids": object_ids
        })

    except Exception as e:
        print(f"Error in add-detected-objects API: {e}")
        return jsonify({"error": str(e)}), 500

@app.route('/api/search-similar-objects', methods=['POST'])
def search_similar_objects():
    """Similar-object search API"""
    print("[DEBUG] Received request in search-similar-objects")

    if clip_model is None or object_collection is None:
        print("[DEBUG] Error: Image embedding model or vector DB not available")
        return jsonify({"error": "Image embedding model or vector DB not available"})

    try:
        # Extract the request data
        data = request.json
        print(f"[DEBUG] Request data keys: {list(data.keys()) if data else 'None'}")

        if not data:
            print("[DEBUG] Error: Missing request data")
            return jsonify({"error": "Missing request data"})

        # Determine the search type
        search_type = data.get('searchType', 'image')
        n_results = int(data.get('n_results', 5))  # number of results
        print(f"[DEBUG] Search type: {search_type}, n_results: {n_results}")

        query_embedding = None

        if search_type == 'image' and 'image' in data:
            # Searching by image
            print("[DEBUG] Searching by image")
            image_data = data['image']
            if image_data.startswith('data:image'):
                image_data = image_data.split(',')[1]

            try:
                image = Image.open(BytesIO(base64.b64decode(image_data))).convert('RGB')
                query_embedding = generate_image_embedding(image)
                print(f"[DEBUG] Generated image embedding: {type(query_embedding)}, shape: {len(query_embedding) if query_embedding is not None else 'None'}")
            except Exception as e:
                print(f"[DEBUG] Error generating image embedding: {e}")
                return jsonify({"error": f"Error processing image: {str(e)}"}), 500

        elif search_type == 'object' and 'objectId' in data:
            # Searching by object ID
            object_id = data['objectId']
            result = object_collection.get(ids=[object_id], include=["embeddings"])

            if result and "embeddings" in result and len(result["embeddings"]) > 0:
                query_embedding = result["embeddings"][0]

        elif search_type == 'class' and 'class_name' in data:
            # Searching by class name
            print("[DEBUG] Searching by class name")
            class_name = data['class_name']
            print(f"[DEBUG] Class name: {class_name}")
            filter_query = {"class": {"$eq": class_name}}

            try:
                # Filter by class and search
                print(f"[DEBUG] Querying with filter: {filter_query}")
                # Use get method instead of query for class-based filtering
                results = object_collection.get(
                    where=filter_query,
                    limit=n_results,
                    include=["metadatas", "embeddings", "documents"]
                )

                print(f"[DEBUG] Query results: {results['ids'][0] if 'ids' in results and len(results['ids']) > 0 else 'No results'}")
                formatted_results = format_object_results(results)
                print(f"[DEBUG] Formatted results count: {len(formatted_results)}")

                return jsonify({
                    "success": True,
                    "searchType": "class",
                    "results": formatted_results
                })
            except Exception as e:
                print(f"[DEBUG] Error in class search: {e}")
                return jsonify({"error": f"Error in class search: {str(e)}"}), 500

        else:
            print(f"[DEBUG] Invalid search parameters: {data}")
            return jsonify({"error": "Invalid search parameters"})

        if query_embedding is None:
            print("[DEBUG] Error: Failed to generate query embedding")
            return jsonify({"error": "Failed to generate query embedding"})

        try:
            # Run the similarity search
            print(f"[DEBUG] Running similarity search with embedding of length {len(query_embedding)}")
            results = object_collection.query(
                query_embeddings=[query_embedding],
                n_results=n_results,
                include=["metadatas", "distances"]
            )

            print(f"[DEBUG] Query results: {results['ids'][0] if 'ids' in results and len(results['ids']) > 0 else 'No results'}")
            formatted_results = format_object_results(results)
            print(f"[DEBUG] Formatted results count: {len(formatted_results)}")

            return jsonify({
                "success": True,
                "searchType": search_type,
                "results": formatted_results
            })
        except Exception as e:
            print(f"[DEBUG] Error in similarity search: {e}")
            return jsonify({"error": f"Error in similarity search: {str(e)}"}), 500

    except Exception as e:
        print(f"Error in search-similar-objects API: {e}")
        return jsonify({"error": str(e)}), 500

def format_object_results(results):
    """Format search results - handles output from both the ChromaDB query and get methods"""
    formatted_results = []

    print(f"[DEBUG] Formatting results: {results.keys() if results else 'None'}")

    if not results:
        print("[DEBUG] No results to format")
        return formatted_results

    try:
        # Check if this is a result from 'get' method (class search) or 'query' method (similarity search)
        is_get_result = 'ids' in results and isinstance(results['ids'], list) and not isinstance(results['ids'][0], list) if 'ids' in results else False

        if is_get_result:
            # Handle results from 'get' method (flat structure)
            print("[DEBUG] Processing results from get method (class search)")
            if len(results['ids']) == 0:
                return formatted_results

            for i, obj_id in enumerate(results['ids']):
                try:
                    # Extract object info
                    metadata = results['metadatas'][i] if 'metadatas' in results else {}

                    # Deserialize bbox if stored as JSON string
                    if 'bbox' in metadata and isinstance(metadata['bbox'], str):
                        try:
                            metadata['bbox'] = json.loads(metadata['bbox'])
                        except:
                            pass  # Keep as is if deserialization fails

                    result_item = {
                        "id": obj_id,
                        "metadata": metadata
                    }

                    # No distance in get results

                    # Check if image data is already in metadata
                    if 'image_data' not in metadata:
                        print(f"[DEBUG] Image data not found in metadata for object {obj_id}")
                    else:
                        print(f"[DEBUG] Image data found in metadata for object {obj_id}")

                    formatted_results.append(result_item)
                except Exception as e:
                    print(f"[DEBUG] Error formatting get result {i}: {e}")
        else:
            # Handle results from 'query' method (nested structure)
            print("[DEBUG] Processing results from query method (similarity search)")
            if 'ids' not in results or len(results['ids']) == 0 or len(results['ids'][0]) == 0:
                return formatted_results

            for i, obj_id in enumerate(results['ids'][0]):
                try:
                    # Extract object info
                    metadata = results['metadatas'][0][i] if 'metadatas' in results and len(results['metadatas']) > 0 else {}

                    # Deserialize bbox if stored as JSON string
                    if 'bbox' in metadata and isinstance(metadata['bbox'], str):
                        try:
                            metadata['bbox'] = json.loads(metadata['bbox'])
                        except:
                            pass  # Keep as is if deserialization fails

                    result_item = {
                        "id": obj_id,
                        "metadata": metadata
                    }

                    if 'distances' in results and len(results['distances']) > 0:
                        result_item["distance"] = float(results['distances'][0][i])

                    # Check if image data is already in metadata
                    if 'image_data' not in metadata:
                        try:
                            # Try to get original image via image ID
                            image_id = metadata.get('image_id')
                            if image_id:
                                print(f"[DEBUG] Image data not found in metadata for object {obj_id} with image_id {image_id}")
                        except Exception as e:
                            print(f"[DEBUG] Error checking image data for result {i}: {e}")
                    else:
                        print(f"[DEBUG] Image data found in metadata for object {obj_id}")

                    formatted_results.append(result_item)
                except Exception as e:
                    print(f"[DEBUG] Error formatting query result {i}: {e}")
    except Exception as e:
        print(f"[DEBUG] Error in format_object_results: {e}")

    return formatted_results

@app.route('/', defaults={'path': ''}, methods=['GET'])
@app.route('/<path:path>', methods=['GET'])
def serve_react(path):
    """Serve React frontend"""
    if path != "" and os.path.exists(os.path.join(app.static_folder, path)):
        return send_from_directory(app.static_folder, path)
    else:
        return send_from_directory(app.static_folder, 'index.html')

@app.route('/similar-images', methods=['GET'])
def similar_images_page():
    """Serve similar images search page"""
    return send_from_directory(app.static_folder, 'similar-images.html')

@app.route('/object-detection-search', methods=['GET'])
def object_detection_search_page():
    """Serve object detection search page"""
    return send_from_directory(app.static_folder, 'object-detection-search.html')

@app.route('/model-vector-db', methods=['GET'])
def model_vector_db_page():
    """Serve model vector DB UI page"""
    return send_from_directory(app.static_folder, 'model-vector-db.html')

@app.route('/api/status', methods=['GET'])
def status():
    return jsonify({
        "status": "online",
        "models": {
            "yolo": yolo_model is not None,
            "detr": detr_model is not None and detr_processor is not None,
            "vit": vit_model is not None and vit_processor is not None
        },
        "device": "GPU" if torch.cuda.is_available() else "CPU"
    })

def index():
    return send_from_directory('static', 'index.html')

if __name__ == "__main__":
    # Hugging Face Spaces provides the PORT environment variable
    port = int(os.environ.get("PORT", 7860))
    app.run(debug=False, host='0.0.0.0', port=port)
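Beyond the endpoints listed in the README, api.py also exposes `/api/analyze`, which expects a JSON body with `visionResults` and `userQuery` and returns the LLM's answer. A minimal client sketch, assuming a local server on port 7860 (the detection list below is illustrative):

```python
# Sketch of calling the LLM analysis endpoint defined in api.py above.
import requests

payload = {
    "visionResults": [{"label": "dog", "confidence": 0.91}],  # example data
    "userQuery": "What animals are in the image?",
}
resp = requests.post("http://localhost:7860/api/analyze", json=payload, timeout=120)
print(resp.json().get("response"))
```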
app.py
ADDED
@@ -0,0 +1,560 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
import gradio as gr
|
2 |
+
import torch
|
3 |
+
from PIL import Image
|
4 |
+
import numpy as np
|
5 |
+
import os
|
6 |
+
import requests
|
7 |
+
import json
|
8 |
+
import base64
|
9 |
+
from io import BytesIO
|
10 |
+
import uuid
|
11 |
+
|
12 |
+
# Model initialization
|
13 |
+
print("Loading models... This may take a moment.")
|
14 |
+
|
15 |
+
# YOLOv8 model
|
16 |
+
yolo_model = None
|
17 |
+
try:
|
18 |
+
from ultralytics import YOLO
|
19 |
+
yolo_model = YOLO("yolov8n.pt") # Using the nano model for faster inference
|
20 |
+
print("YOLOv8 model loaded successfully")
|
21 |
+
except Exception as e:
|
22 |
+
print("Error loading YOLOv8 model:", e)
|
23 |
+
yolo_model = None
|
24 |
+
|
25 |
+
# DETR model (DEtection TRansformer)
|
26 |
+
detr_processor = None
|
27 |
+
detr_model = None
|
28 |
+
try:
|
29 |
+
from transformers import DetrImageProcessor, DetrForObjectDetection
|
30 |
+
|
31 |
+
# Load the DETR image processor
|
32 |
+
# DetrImageProcessor: Handles preprocessing of images for DETR model
|
33 |
+
# - Resizes images to appropriate dimensions
|
34 |
+
# - Normalizes pixel values
|
35 |
+
# - Converts images to tensors
|
36 |
+
# - Handles batch processing
|
37 |
+
detr_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
|
38 |
+
|
39 |
+
# Load the DETR object detection model
|
40 |
+
# DetrForObjectDetection: The actual object detection model
|
41 |
+
# - Uses ResNet-50 as backbone
|
42 |
+
# - Transformer-based architecture for object detection
|
43 |
+
# - Predicts bounding boxes and object classes
|
44 |
+
# - Pre-trained on COCO dataset by Facebook AI Research
|
45 |
+
detr_model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
|
46 |
+
|
47 |
+
print("DETR model loaded successfully")
|
48 |
+
except Exception as e:
|
49 |
+
print("Error loading DETR model:", e)
|
50 |
+
detr_processor = None
|
51 |
+
detr_model = None
|
52 |
+
|
53 |
+
# ViT model
|
54 |
+
vit_processor = None
|
55 |
+
vit_model = None
|
56 |
+
try:
|
57 |
+
from transformers import ViTImageProcessor, ViTForImageClassification
|
58 |
+
vit_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
|
59 |
+
vit_model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
|
60 |
+
print("ViT model loaded successfully")
|
61 |
+
except Exception as e:
|
62 |
+
print("Error loading ViT model:", e)
|
63 |
+
vit_processor = None
|
64 |
+
vit_model = None
|
65 |
+
|
66 |
+
# Get device information
|
67 |
+
import torch
|
68 |
+
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
|
69 |
+
print(f"Using device: {device}")
|
70 |
+
|
71 |
+
# Save detected objects to the vector DB
def save_objects_to_vector_db(image, detection_results, model_type='yolo'):
    if image is None or detection_results is None:
        return "No image or detection results available."

    try:
        # Encode the image as base64
        buffered = BytesIO()
        image.save(buffered, format="JPEG")
        img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')

        # Call a different API endpoint depending on the model type
        if model_type in ['yolo', 'detr']:
            # Extract object information
            objects = []
            for obj in detection_results['objects']:
                objects.append({
                    "class": obj['class'],
                    "confidence": obj['confidence'],
                    "bbox": obj['bbox']
                })

            # Build the API request payload
            data = {
                "image": img_str,
                "objects": objects,
                "image_id": str(uuid.uuid4())
            }

            # Call the API
            response = requests.post(
                "http://localhost:7860/api/add-detected-objects",
                json=data
            )

            if response.status_code == 200:
                result = response.json()
                if 'error' in result:
                    return f"Error: {result['error']}"
                return f"Saved {len(objects)} objects to the vector DB! IDs: {result.get('ids', 'unknown')}"

        elif model_type == 'vit':
            # Save ViT classification results
            data = {
                "image": img_str,
                "metadata": {
                    "model": "vit",
                    "classifications": detection_results.get('classifications', [])
                }
            }

            # Call the API
            response = requests.post(
                "http://localhost:7860/api/add-image",
                json=data
            )

            if response.status_code == 200:
                result = response.json()
                if 'error' in result:
                    return f"Error: {result['error']}"
                return f"Saved the image and classification results to the vector DB! ID: {result.get('id', 'unknown')}"

        else:
            return "Unsupported model type."

        if response.status_code != 200:
            return f"API error: {response.status_code}"
    except Exception as e:
        return f"Error: {str(e)}"
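
# Example usage (sketch): given a PIL image `img` and the detection dict
# produced by process_yolo below, a direct call would look like this
# (the values are illustrative, not from a real run):
#
#     status = save_objects_to_vector_db(img, {"objects": [
#         {"class": "person", "confidence": 0.92, "bbox": [10, 20, 200, 380]}
#     ]}, model_type='yolo')
#     print(status)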
# Search the vector DB for similar objects
def search_similar_objects(image=None, class_name=None):
    try:
        data = {}

        if image is not None:
            # Encode the image as base64
            buffered = BytesIO()
            image.save(buffered, format="JPEG")
            img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')
            data["image"] = img_str
            data["n_results"] = 5
        elif class_name is not None and class_name.strip():
            data["class_name"] = class_name.strip()
            data["n_results"] = 5
        else:
            return "Either an image or a class name must be provided.", []

        # Call the API
        response = requests.post(
            "http://localhost:7860/api/search-similar-objects",
            json=data
        )

        if response.status_code == 200:
            results = response.json()
            if isinstance(results, dict) and 'error' in results:
                return f"Error: {results['error']}", []

            # Format the results
            formatted_results = []
            for i, result in enumerate(results):
                similarity = (1 - result.get('distance', 0)) * 100
                img_data = result.get('image', '')

                # Convert the image data back to a PIL image
                if img_data:
                    try:
                        img_bytes = base64.b64decode(img_data)
                        img = Image.open(BytesIO(img_bytes))
                    except Exception:
                        img = None
                else:
                    img = None

                # Extract metadata
                metadata = result.get('metadata', {})
                class_name = metadata.get('class', 'N/A')
                confidence = metadata.get('confidence', 0) * 100 if metadata.get('confidence') else 'N/A'

                formatted_results.append({
                    'image': img,
                    'info': f"Result #{i+1} | Similarity: {similarity:.2f}% | Class: {class_name} | Confidence: {confidence if isinstance(confidence, str) else f'{confidence:.2f}%'} | ID: {result.get('id', 'N/A')}"
                })

            return f"Found {len(formatted_results)} similar objects.", formatted_results
        else:
            return f"API error: {response.status_code}", []
    except Exception as e:
        return f"Error: {str(e)}", []
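
# The parsing above assumes /api/search-similar-objects returns a JSON list
# of result objects roughly shaped as follows (inferred from this client
# code, not a documented contract; the values are illustrative). `distance`
# is the vector-store distance that becomes a similarity percentage via
# (1 - distance) * 100:
#
#     [{"id": "obj-001", "distance": 0.13, "image": "<base64 JPEG>",
#       "metadata": {"class": "person", "confidence": 0.92}}]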
# Define model inference functions
def process_yolo(image):
    if yolo_model is None:
        return None, "YOLOv8 model not loaded", None

    # Measure inference time
    import time
    start_time = time.time()

    # Convert to numpy if it's a PIL image
    if isinstance(image, Image.Image):
        image_np = np.array(image)
    else:
        image_np = image

    # Run inference
    results = yolo_model(image_np)

    # Process results
    result_image = results[0].plot()
    result_image = Image.fromarray(result_image)

    # Get detection information
    boxes = results[0].boxes
    class_names = results[0].names

    # Format detection results
    detections = []
    detection_objects = {'objects': []}

    for box in boxes:
        class_id = int(box.cls[0].item())
        class_name = class_names[class_id]
        confidence = round(box.conf[0].item(), 2)
        bbox = box.xyxy[0].tolist()
        bbox = [round(x) for x in bbox]

        detections.append("{}: {} at {}".format(class_name, confidence, bbox))

        # Add object info for the vector DB
        detection_objects['objects'].append({
            'class': class_name,
            'confidence': confidence,
            'bbox': bbox
        })

    # Calculate inference time
    inference_time = time.time() - start_time

    # Add inference time and device info to detection text
    device_info = "GPU" if torch.cuda.is_available() else "CPU"
    performance_info = f"\n\nInference time: {inference_time:.3f} seconds on {device_info}"
    detection_text = "\n".join(detections) if detections else "No objects detected"
    detection_text += performance_info

    return result_image, detection_text, detection_objects
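
# Example (sketch): running YOLO detection from a Python shell, assuming a
# local file "test.jpg" exists (the filename is illustrative):
#
#     from PIL import Image
#     annotated, text, objects = process_yolo(Image.open("test.jpg"))
#     print(text)   # one "class: confidence at [x1, y1, x2, y2]" line per object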
def process_detr(image):
    if detr_model is None or detr_processor is None:
        return None, "DETR model not loaded"

    # Measure inference time
    import time
    start_time = time.time()

    # Prepare image for the model
    inputs = detr_processor(images=image, return_tensors="pt")

    # Run inference
    with torch.no_grad():
        outputs = detr_model(**inputs)

    # Convert outputs to an image with bounding boxes.
    # Create a tensor with the original image dimensions (height, width);
    # image.size[::-1] reverses (width, height) to (height, width) as required by DETR
    target_sizes = torch.tensor([image.size[::-1]])

    # Process raw model outputs into usable detection results:
    # - Maps predictions back to the original image size
    # - Filters detections using a confidence threshold (0.9)
    # - Returns a dictionary with 'scores', 'labels', and 'boxes' keys
    # - [0] extracts results for the first (and only) image in the batch
    results = detr_processor.post_process_object_detection(
        outputs, target_sizes=target_sizes, threshold=0.9
    )[0]

    # Create a copy of the image to draw on
    result_image = image.copy()
    import matplotlib.pyplot as plt
    from matplotlib.patches import Rectangle
    import io

    # Create figure and axes
    fig, ax = plt.subplots(1)
    ax.imshow(result_image)

    # Format detection results
    detections = []
    for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
        box = [round(i) for i in box.tolist()]
        class_name = detr_model.config.id2label[label.item()]
        confidence = round(score.item(), 2)

        # Draw rectangle
        rect = Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1],
                         linewidth=2, edgecolor='r', facecolor='none')
        ax.add_patch(rect)

        # Add label
        plt.text(box[0], box[1], "{}: {}".format(class_name, confidence),
                 bbox=dict(facecolor='white', alpha=0.8))

        detections.append("{}: {} at {}".format(class_name, confidence, box))

    # Save figure to image
    buf = io.BytesIO()
    plt.tight_layout()
    plt.axis('off')
    plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0)
    buf.seek(0)
    result_image = Image.open(buf)
    plt.close(fig)

    # Calculate inference time
    inference_time = time.time() - start_time

    # Add inference time and device info to detection text
    device_info = "GPU" if torch.cuda.is_available() else "CPU"
    performance_info = f"\n\nInference time: {inference_time:.3f} seconds on {device_info}"
    detection_text = "\n".join(detections) if detections else "No objects detected"
    detection_text += performance_info

    return result_image, detection_text
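
# Deployment note (assumption about the environment, not in the original):
# in a headless container, matplotlib may need the non-interactive "Agg"
# backend selected before pyplot is first imported:
#
#     import matplotlib
#     matplotlib.use("Agg")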
def process_vit(image):
    if vit_model is None or vit_processor is None:
        return "ViT model not loaded"

    # Measure inference time
    import time
    start_time = time.time()

    # Prepare image for the model
    inputs = vit_processor(images=image, return_tensors="pt")

    # Run inference
    with torch.no_grad():
        outputs = vit_model(**inputs)
        # Extract raw logits (unnormalized scores) from the model output;
        # Hugging Face models return logits directly, not probabilities
        logits = outputs.logits

    # Get the predicted class.
    # argmax(-1) finds the index with the highest score across the last (class)
    # dimension; item() converts the tensor value to a Python scalar
    predicted_class_idx = logits.argmax(-1).item()
    # Map the class index to a human-readable label using the model's configuration
    prediction = vit_model.config.id2label[predicted_class_idx]

    # Get the top 5 predictions.
    # Apply softmax to convert raw logits to probabilities: softmax normalizes
    # the exponentials of the logits so they sum to 1.0, and dim=-1 applies it
    # along the class dimension.
    # Shape before softmax: [1, num_classes] (batch_size=1, num_classes=1000);
    # [0] extracts the first (and only) item from the batch dimension, leaving
    # a 1D tensor with 1000 class probabilities
    probs = torch.nn.functional.softmax(logits, dim=-1)[0]
    # Get the values and indices of the 5 highest probabilities
    top5_prob, top5_indices = torch.topk(probs, 5)

    results = []
    for i, (prob, idx) in enumerate(zip(top5_prob, top5_indices)):
        class_name = vit_model.config.id2label[idx.item()]
        results.append("{}. {}: {:.3f}".format(i + 1, class_name, prob.item()))

    # Calculate inference time
    inference_time = time.time() - start_time

    # Add inference time and device info to results
    device_info = "GPU" if torch.cuda.is_available() else "CPU"
    performance_info = f"\n\nInference time: {inference_time:.3f} seconds on {device_info}"
    result_text = "\n".join(results)
    result_text += performance_info

    return result_text
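
# Worked example (sketch) of the softmax/top-k step above with a 3-class
# logit tensor, to make the shapes concrete:
#
#     import torch
#     logits = torch.tensor([[2.0, 0.5, 0.1]])                  # shape [1, 3]
#     probs = torch.nn.functional.softmax(logits, dim=-1)[0]    # shape [3], sums to 1.0
#     top2_prob, top2_idx = torch.topk(probs, 2)                # two highest probabilities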
# Define Gradio interface
with gr.Blocks(title="Object Detection Demo") as demo:
    gr.Markdown("""
    # Multi-Model Object Detection Demo

    This demo showcases three different object detection and image classification models:
    - **YOLOv8**: Fast and accurate object detection
    - **DETR**: DEtection TRansformer for object detection
    - **ViT**: Vision Transformer for image classification

    Upload an image to see how each model performs!
    """)

    with gr.Row():
        input_image = gr.Image(type="pil", label="Input Image")

    with gr.Row():
        yolo_button = gr.Button("Detect with YOLOv8")
        detr_button = gr.Button("Detect with DETR")
        vit_button = gr.Button("Classify with ViT")

    with gr.Row():
        with gr.Column():
            yolo_output = gr.Image(type="pil", label="YOLOv8 Detection")
            yolo_text = gr.Textbox(label="YOLOv8 Results")

        with gr.Column():
            detr_output = gr.Image(type="pil", label="DETR Detection")
            detr_text = gr.Textbox(label="DETR Results")

        with gr.Column():
            vit_text = gr.Textbox(label="ViT Classification Results")

    # Vector DB save buttons and result display
    with gr.Row():
        with gr.Column():
            gr.Markdown("### Save to Vector DB")
            save_yolo_button = gr.Button("Save YOLOv8 Detections", variant="primary")
            save_detr_button = gr.Button("Save DETR Detections", variant="primary")
            save_vit_button = gr.Button("Save ViT Classifications", variant="primary")
            save_result = gr.Textbox(label="Vector DB Save Result")

        with gr.Column():
            gr.Markdown("### Search Vector DB")
            search_class = gr.Textbox(label="Search by Class Name")
            search_button = gr.Button("Search", variant="secondary")
            search_result_text = gr.Textbox(label="Search Result Info")
            search_result_gallery = gr.Gallery(label="Search Results", columns=5, height=300)

    # State variables holding the latest detection results
    yolo_detection_state = gr.State(None)
    detr_detection_state = gr.State(None)
    vit_classification_state = gr.State(None)

    # Set up event handlers
    yolo_button.click(
        fn=process_yolo,
        inputs=input_image,
        outputs=[yolo_output, yolo_text, yolo_detection_state]
    )
    # Wrapper around process_detr that also captures state for the vector DB
    def process_detr_with_state(image):
        result_image, result_text = process_detr(image)

        # Extract detection results
        detection_results = {"objects": []}

        # Parse object information back out of the result text
        lines = result_text.split('\n')
        for line in lines:
            if ': ' in line and ' at ' in line:
                try:
                    class_conf, location = line.split(' at ')
                    class_name, confidence = class_conf.split(': ')
                    confidence = float(confidence)

                    # Extract bounding box information
                    bbox_str = location.strip('[]').split(', ')
                    bbox = [int(coord) for coord in bbox_str]

                    detection_results["objects"].append({
                        "class": class_name,
                        "confidence": confidence,
                        "bbox": bbox
                    })
                except Exception:
                    pass

        return result_image, result_text, detection_results

    # Wrapper around process_vit that also captures state for the vector DB
    def process_vit_with_state(image):
        result_text = process_vit(image)

        # Extract classification results
        classifications = []

        # Parse classification info back out of the result text
        lines = result_text.split('\n')
        for line in lines:
            if '. ' in line and ': ' in line:
                try:
                    rank_class, confidence = line.split(': ')
                    _, class_name = rank_class.split('. ')
                    confidence = float(confidence)

                    classifications.append({
                        "class": class_name,
                        "confidence": confidence
                    })
                except Exception:
                    pass

        return result_text, {"classifications": classifications}
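
    # Design note: the two wrappers above recover structured data by re-parsing
    # the human-readable result text, which is fragile. A sturdier variant
    # (a sketch, not the original design) would collect the dicts inside
    # process_detr itself, the way process_yolo already builds detection_objects:
    #
    #     detection_objects = {"objects": []}
    #     # inside the detection loop of process_detr:
    #     detection_objects["objects"].append(
    #         {"class": class_name, "confidence": confidence, "bbox": box})
    #     # and finally:
    #     return result_image, detection_text, detection_objects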
    detr_button.click(
        fn=process_detr_with_state,
        inputs=input_image,
        outputs=[detr_output, detr_text, detr_detection_state]
    )

    vit_button.click(
        fn=process_vit_with_state,
        inputs=input_image,
        outputs=[vit_text, vit_classification_state]
    )

    # Vector DB save button event handlers
    save_yolo_button.click(
        fn=lambda img, det: save_objects_to_vector_db(img, det, 'yolo'),
        inputs=[input_image, yolo_detection_state],
        outputs=save_result
    )

    save_detr_button.click(
        fn=lambda img, det: save_objects_to_vector_db(img, det, 'detr'),
        inputs=[input_image, detr_detection_state],
        outputs=save_result
    )

    save_vit_button.click(
        fn=lambda img, det: save_objects_to_vector_db(img, det, 'vit'),
        inputs=[input_image, vit_classification_state],
        outputs=save_result
    )

    # Search button event handler
    def format_search_results(result_text, results):
        images = []
        captions = []

        for result in results:
            if result.get('image'):
                images.append(result['image'])
                captions.append(result['info'])

        return result_text, [(img, cap) for img, cap in zip(images, captions)]

    # Compose the search with the gallery formatter so the gallery receives
    # (image, caption) pairs rather than raw result dicts
    search_button.click(
        fn=lambda class_name: format_search_results(*search_similar_objects(class_name=class_name)),
        inputs=search_class,
        outputs=[search_result_text, search_result_gallery]
    )


# Launch the app
if __name__ == "__main__":
    demo.launch()
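# Deployment note (assumption): inside a Docker container that exposes
# port 7860, Gradio would typically be launched with an explicit host and
# port rather than the defaults, e.g.:
#
#     demo.launch(server_name="0.0.0.0", server_port=7860)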
requirements.txt
ADDED
# Core dependencies
gradio>=4.0.0
torch>=2.0.0
transformers>=4.30.0
Pillow>=9.0.0

# Object detection models
ultralytics>=8.0.0  # YOLOv8
timm>=0.9.0  # Vision Transformer support

# API dependencies
flask>=2.0.0
flask-cors>=3.0.0
matplotlib>=3.5.0
numpy>=1.20.0

# For future phases
fastapi>=0.100.0
uvicorn[standard]>=0.22.0
python-multipart>=0.0.5

# Llama 4 integration
accelerate>=0.20.0
bitsandbytes>=0.41.0
sentencepiece>=0.1.99
protobuf>=4.23.0

# Vector DB and image similarity search
chroma-hnswlib>=0.7.3
chromadb>=0.4.18
scipy>=1.11.0
open-clip-torch>=2.20.0
pysqlite3-binary>=0.5.0
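# Note (assumption about why pysqlite3-binary is pinned): chromadb requires
# sqlite3 >= 3.35, and the common workaround on images with an older system
# sqlite is to alias pysqlite3 before chromadb is imported, e.g. at the top
# of the API module:
#
#     __import__("pysqlite3")
#     import sys
#     sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")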
static/asset-manifest.json
ADDED
{
  "files": {
    "main.css": "/static/css/main.59c2a54e.chunk.css",
    "main.js": "/static/js/main.ad7f086c.chunk.js",
    "main.js.map": "/static/js/main.ad7f086c.chunk.js.map",
    "runtime-main.js": "/static/js/runtime-main.25710301.js",
    "runtime-main.js.map": "/static/js/runtime-main.25710301.js.map",
    "static/js/2.252de3c4.chunk.js": "/static/js/2.252de3c4.chunk.js",
    "static/js/2.252de3c4.chunk.js.map": "/static/js/2.252de3c4.chunk.js.map",
    "static/js/3.9013e23f.chunk.js": "/static/js/3.9013e23f.chunk.js",
    "static/js/3.9013e23f.chunk.js.map": "/static/js/3.9013e23f.chunk.js.map",
    "index.html": "/index.html",
    "precache-manifest.053b14ee2ebd7996a78e6e055f2144fe.js": "/precache-manifest.053b14ee2ebd7996a78e6e055f2144fe.js",
    "service-worker.js": "/service-worker.js",
    "static/css/main.59c2a54e.chunk.css.map": "/static/css/main.59c2a54e.chunk.css.map",
    "static/js/2.252de3c4.chunk.js.LICENSE.txt": "/static/js/2.252de3c4.chunk.js.LICENSE.txt"
  },
  "entrypoints": [
    "static/js/runtime-main.25710301.js",
    "static/js/2.252de3c4.chunk.js",
    "static/css/main.59c2a54e.chunk.css",
    "static/js/main.ad7f086c.chunk.js"
  ]
}
static/css/main.59c2a54e.chunk.css
ADDED
body{margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI","Roboto","Oxygen","Ubuntu","Cantarell","Fira Sans","Droid Sans","Helvetica Neue",sans-serif;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;background-color:#f5f5f5}code{font-family:source-code-pro,Menlo,Monaco,Consolas,"Courier New",monospace}.App{text-align:center}.preview-image{max-width:100%;max-height:300px;margin-top:16px}.result-image{max-width:100%;border:1px solid #ddd;border-radius:4px;padding:4px}.detection-list{margin-top:16px;text-align:left}.model-card{cursor:pointer;transition:all .3s}.model-card:hover{transform:translateY(-5px);box-shadow:0 10px 20px rgba(0,0,0,.1)}.model-card.selected{border:2px solid #3f51b5;background-color:#e8eaf6}.model-card.disabled{opacity:.6;cursor:not-allowed}.performance-info{margin-top:16px;font-size:.9rem;color:#666}
/*# sourceMappingURL=main.59c2a54e.chunk.css.map */
static/css/main.59c2a54e.chunk.css.map
ADDED
The diff for this file is too large to render.
See raw diff
static/index.html
ADDED
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="/favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="Multi-Model Object Detection Demo"/><link rel="apple-touch-icon" href="/logo192.png"/><link rel="manifest" href="/manifest.json"/><title>Vision Web App</title><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap"/><link href="/static/css/main.59c2a54e.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script>!function(e){function r(r){for(var n,i,a=r[0],c=r[1],l=r[2],p=0,s=[];p<a.length;p++)i=a[p],Object.prototype.hasOwnProperty.call(o,i)&&o[i]&&s.push(o[i][0]),o[i]=0;for(n in c)Object.prototype.hasOwnProperty.call(c,n)&&(e[n]=c[n]);for(f&&f(r);s.length;)s.shift()();return u.push.apply(u,l||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,a=1;a<t.length;a++){var c=t[a];0!==o[c]&&(n=!1)}n&&(u.splice(r--,1),e=i(i.s=t[0]))}return e}var n={},o={1:0},u=[];function i(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return e[r].call(t.exports,t,t.exports,i),t.l=!0,t.exports}i.e=function(e){var r=[],t=o[e];if(0!==t)if(t)r.push(t[2]);else{var n=new Promise((function(r,n){t=o[e]=[r,n]}));r.push(t[2]=n);var u,a=document.createElement("script");a.charset="utf-8",a.timeout=120,i.nc&&a.setAttribute("nonce",i.nc),a.src=function(e){return i.p+"static/js/"+({}[e]||e)+"."+{3:"9013e23f"}[e]+".chunk.js"}(e);var c=new Error;u=function(r){a.onerror=a.onload=null,clearTimeout(l);var t=o[e];if(0!==t){if(t){var n=r&&("load"===r.type?"missing":r.type),u=r&&r.target&&r.target.src;c.message="Loading chunk "+e+" failed.\n("+n+": "+u+")",c.name="ChunkLoadError",c.type=n,c.request=u,t[1](c)}o[e]=void 0}};var l=setTimeout((function(){u({type:"timeout",target:a})}),12e4);a.onerror=a.onload=u,document.head.appendChild(a)}return Promise.all(r)},i.m=e,i.c=n,i.d=function(e,r,t){i.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},i.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},i.t=function(e,r){if(1&r&&(e=i(e)),8&r)return e;if(4&r&&"object"==typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(i.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)i.d(t,n,function(r){return e[r]}.bind(null,n));return t},i.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return i.d(r,"a",r),r},i.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},i.p="/",i.oe=function(e){throw console.error(e),e};var a=this["webpackJsonpvision-web-app"]=this["webpackJsonpvision-web-app"]||[],c=a.push.bind(a);a.push=r,a=a.slice();for(var l=0;l<a.length;l++)r(a[l]);var f=c;t()}([])</script><script src="/static/js/2.252de3c4.chunk.js"></script><script src="/static/js/main.ad7f086c.chunk.js"></script></body></html>
|
static/js/2.252de3c4.chunk.js
ADDED
The diff for this file is too large to render.
See raw diff
static/js/2.252de3c4.chunk.js.LICENSE.txt
ADDED
/*
object-assign
(c) Sindre Sorhus
@license MIT
*/

/**
 * A better abstraction over CSS.
 *
 * @copyright Oleg Isonen (Slobodskoi) / Isonen 2014-present
 * @website https://github.com/cssinjs/jss
 * @license MIT
 */

/** @license React v0.19.1
 * scheduler.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v16.13.1
 * react-is.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v16.14.0
 * react-dom.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v16.14.0
 * react.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */

/** @license React v17.0.2
 * react-is.production.min.js
 *
 * Copyright (c) Facebook, Inc. and its affiliates.
 *
 * This source code is licensed under the MIT license found in the
 * LICENSE file in the root directory of this source tree.
 */
static/js/2.252de3c4.chunk.js.map
ADDED
The diff for this file is too large to render.
See raw diff
static/js/3.9013e23f.chunk.js
ADDED
(this["webpackJsonpvision-web-app"]=this["webpackJsonpvision-web-app"]||[]).push([[3],{170:function(t,n,e){"use strict";e.r(n),e.d(n,"getCLS",(function(){return l})),e.d(n,"getFCP",(function(){return g})),e.d(n,"getFID",(function(){return h})),e.d(n,"getLCP",(function(){return y})),e.d(n,"getTTFB",(function(){return F}));var i,a,r=function(){return"".concat(Date.now(),"-").concat(Math.floor(8999999999999*Math.random())+1e12)},o=function(t){var n=arguments.length>1&&void 0!==arguments[1]?arguments[1]:-1;return{name:t,value:n,delta:0,entries:[],id:r(),isFinal:!1}},u=function(t,n){try{if(PerformanceObserver.supportedEntryTypes.includes(t)){var e=new PerformanceObserver((function(t){return t.getEntries().map(n)}));return e.observe({type:t,buffered:!0}),e}}catch(t){}},s=!1,c=!1,p=function(t){s=!t.persisted},d=function(){addEventListener("pagehide",p),addEventListener("beforeunload",(function(){}))},f=function(t){var n=arguments.length>1&&void 0!==arguments[1]&&arguments[1];c||(d(),c=!0),addEventListener("visibilitychange",(function(n){var e=n.timeStamp;"hidden"===document.visibilityState&&t({timeStamp:e,isUnloading:s})}),{capture:!0,once:n})},v=function(t,n,e,i){var a;return function(){e&&n.isFinal&&e.disconnect(),n.value>=0&&(i||n.isFinal||"hidden"===document.visibilityState)&&(n.delta=n.value-(a||0),(n.delta||n.isFinal||void 0===a)&&(t(n),a=n.value))}},l=function(t){var n,e=arguments.length>1&&void 0!==arguments[1]&&arguments[1],i=o("CLS",0),a=function(t){t.hadRecentInput||(i.value+=t.value,i.entries.push(t),n())},r=u("layout-shift",a);r&&(n=v(t,i,r,e),f((function(t){var e=t.isUnloading;r.takeRecords().map(a),e&&(i.isFinal=!0),n()})))},m=function(){return void 0===i&&(i="hidden"===document.visibilityState?0:1/0,f((function(t){var n=t.timeStamp;return i=n}),!0)),{get timeStamp(){return i}}},g=function(t){var n,e=o("FCP"),i=m(),a=u("paint",(function(t){"first-contentful-paint"===t.name&&t.startTime<i.timeStamp&&(e.value=t.startTime,e.isFinal=!0,e.entries.push(t),n())}));a&&(n=v(t,e,a))},h=function(t){var n=o("FID"),e=m(),i=function(t){t.startTime<e.timeStamp&&(n.value=t.processingStart-t.startTime,n.entries.push(t),n.isFinal=!0,r())},a=u("first-input",i),r=v(t,n,a);a?f((function(){a.takeRecords().map(i),a.disconnect()}),!0):window.perfMetrics&&window.perfMetrics.onFirstInputDelay&&window.perfMetrics.onFirstInputDelay((function(t,i){i.timeStamp<e.timeStamp&&(n.value=t,n.isFinal=!0,n.entries=[{entryType:"first-input",name:i.type,target:i.target,cancelable:i.cancelable,startTime:i.timeStamp,processingStart:i.timeStamp+t}],r())}))},S=function(){return a||(a=new Promise((function(t){return["scroll","keydown","pointerdown"].map((function(n){addEventListener(n,t,{once:!0,passive:!0,capture:!0})}))}))),a},y=function(t){var n,e=arguments.length>1&&void 0!==arguments[1]&&arguments[1],i=o("LCP"),a=m(),r=function(t){var e=t.startTime;e<a.timeStamp?(i.value=e,i.entries.push(t)):i.isFinal=!0,n()},s=u("largest-contentful-paint",r);if(s){n=v(t,i,s,e);var c=function(){i.isFinal||(s.takeRecords().map(r),i.isFinal=!0,n())};S().then(c),f(c,!0)}},F=function(t){var n,e=o("TTFB");n=function(){try{var n=performance.getEntriesByType("navigation")[0]||function(){var t=performance.timing,n={entryType:"navigation",startTime:0};for(var e in t)"navigationStart"!==e&&"toJSON"!==e&&(n[e]=Math.max(t[e]-t.navigationStart,0));return n}();e.value=e.delta=n.responseStart,e.entries=[n],e.isFinal=!0,t(e)}catch(t){}},"complete"===document.readyState?setTimeout(n,0):addEventListener("pageshow",n)}}}]);
//# sourceMappingURL=3.9013e23f.chunk.js.map
static/js/3.9013e23f.chunk.js.map
ADDED
The diff for this file is too large to render.
See raw diff
static/js/main.ad7f086c.chunk.js
ADDED
@@ -0,0 +1,2 @@
|
|
|
|
|
|
|
1 |
+
(this["webpackJsonpvision-web-app"]=this["webpackJsonpvision-web-app"]||[]).push([[0],{102:function(e,a,t){},103:function(e,a,t){"use strict";t.r(a);var n=t(0),r=t.n(n),l=t(11),o=t.n(l),c=(t(97),t(78)),i=t(155),s=t(159),m=t(156),d=t(157),g=t(48),u=t(158),p=t(138),E=t(81),b=t(143),h=t(136),f=t(137),y=t(63),v=t.n(y),x=t(75),S=t.n(x),C=t(133);const j=Object(C.a)(e=>({paper:{padding:e.spacing(2),display:"flex",flexDirection:"column",alignItems:"center",height:"100%",minHeight:300,transition:"all 0.3s ease"},dragActive:{border:"2px dashed #3f51b5",backgroundColor:"rgba(63, 81, 181, 0.05)"},dragInactive:{border:"2px dashed #ccc",backgroundColor:"white"},uploadBox:{display:"flex",flexDirection:"column",alignItems:"center",justifyContent:"center",height:"100%",width:"100%",cursor:"pointer"},uploadIcon:{fontSize:60,color:"#3f51b5",marginBottom:e.spacing(2)},supportText:{marginTop:e.spacing(2)},previewBox:{display:"flex",flexDirection:"column",alignItems:"center",width:"100%",height:"100%",position:"relative"},imageContainer:{position:"relative",width:"100%",height:"100%",display:"flex",justifyContent:"center",alignItems:"center",overflow:"hidden",marginTop:e.spacing(2)},deleteButton:{position:"absolute",top:0,right:0,backgroundColor:"rgba(255, 255, 255, 0.7)","&:hover":{backgroundColor:"rgba(255, 255, 255, 0.9)"}}}));var w=e=>{let{onImageUpload:a}=e;const t=j(),[l,o]=Object(n.useState)(null),[c,i]=Object(n.useState)(!1),m=Object(n.useRef)(null),d=e=>{e.preventDefault(),e.stopPropagation(),"dragenter"===e.type||"dragover"===e.type?i(!0):"dragleave"===e.type&&i(!1)},u=e=>{e.type.startsWith("image/")?(o(URL.createObjectURL(e)),a(e)):alert("Please upload an image file")};return r.a.createElement(E.a,{className:"".concat(t.paper," ").concat(c?t.dragActive:t.dragInactive),onDragEnter:d,onDragLeave:d,onDragOver:d,onDrop:e=>{e.preventDefault(),e.stopPropagation(),i(!1),e.dataTransfer.files&&e.dataTransfer.files[0]&&u(e.dataTransfer.files[0])}},r.a.createElement("input",{ref:m,type:"file",accept:"image/*",onChange:e=>{e.preventDefault(),e.target.files&&e.target.files[0]&&u(e.target.files[0])},style:{display:"none"}}),l?r.a.createElement(s.a,{className:t.previewBox},r.a.createElement(g.a,{variant:"h6",gutterBottom:!0},"Preview"),r.a.createElement(s.a,{className:t.imageContainer},r.a.createElement("img",{src:l,alt:"Preview",className:"preview-image"}),r.a.createElement(f.a,{"aria-label":"delete",className:t.deleteButton,onClick:()=>{o(null),a(null),m.current.value=""}},r.a.createElement(S.a,null)))):r.a.createElement(s.a,{className:t.uploadBox,onClick:()=>{m.current.click()}},r.a.createElement(v.a,{className:t.uploadIcon}),r.a.createElement(g.a,{variant:"h6",gutterBottom:!0},"Drag & Drop an image here"),r.a.createElement(g.a,{variant:"body2",color:"textSecondary",gutterBottom:!0},"or"),r.a.createElement(h.a,{variant:"contained",color:"primary",component:"span",startIcon:r.a.createElement(v.a,null)},"Browse Files"),r.a.createElement(g.a,{variant:"body2",color:"textSecondary",className:t.supportText},"Supported formats: JPG, PNG, GIF")))},N=t(139),B=t(140),T=t(166),O=t(141),I=t(64),k=t.n(I),D=t(76),P=t.n(D),F=t(77),R=t.n(F);const A=Object(C.a)(e=>({card:{height:"100%",display:"flex",flexDirection:"column"},selectedCard:{border:"2px solid 
#3f51b5"},unavailableCard:{opacity:.6},cardContent:{flexGrow:1},chipContainer:{marginBottom:e.spacing(1.5)},successChip:{backgroundColor:"#34C759",color:"#fff"},errorChip:{backgroundColor:"#FF3B3F",color:"#fff"},modelType:{marginTop:e.spacing(1)},processButton:{marginTop:e.spacing(3),textAlign:"center"}}));var L=e=>{let{onModelSelect:a,onProcess:t,isProcessing:n,modelsStatus:l,selectedModel:o,imageSelected:c}=e;const i=A(),m=[{id:"yolo",name:"YOLOv8",description:"Fast and accurate object detection",icon:r.a.createElement(k.a,null),available:l.yolo},{id:"detr",name:"DETR",description:"DEtection TRansformer for object detection",icon:r.a.createElement(k.a,null),available:l.detr},{id:"vit",name:"ViT",description:"Vision Transformer for image classification",icon:r.a.createElement(P.a,null),available:l.vit}],d=e=>{m.find(a=>a.id===e).available&&a(e)};return r.a.createElement(s.a,{sx:{p:2,height:"100%"}},r.a.createElement(g.a,{variant:"h6",gutterBottom:!0},"Select Model"),r.a.createElement(p.a,{container:!0,spacing:2},m.map(e=>r.a.createElement(p.a,{item:!0,xs:12,sm:4,key:e.id},r.a.createElement(N.a,{className:"\n ".concat(i.card," \n ").concat(o===e.id?i.selectedCard:""," \n ").concat(e.available?"":i.unavailableCard,"\n "),onClick:()=>d(e.id)},r.a.createElement(B.a,{className:i.cardContent},r.a.createElement(s.a,{sx:{mb:2,color:"primary"}},e.icon),r.a.createElement(g.a,{variant:"h5",component:"div",gutterBottom:!0},e.name),r.a.createElement("div",{className:i.chipContainer},e.available?r.a.createElement(T.a,{label:"Available",className:i.successChip,size:"small"}):r.a.createElement(T.a,{label:"Not Available",className:i.errorChip,size:"small"})),r.a.createElement(g.a,{variant:"body2",color:"textSecondary"},e.description)),r.a.createElement(O.a,null,r.a.createElement(h.a,{size:"small",onClick:()=>d(e.id),disabled:!e.available,color:o===e.id?"primary":"default",variant:o===e.id?"contained":"outlined",fullWidth:!0},o===e.id?"Selected":"Select")))))),r.a.createElement("div",{className:i.processButton},r.a.createElement(h.a,{variant:"contained",color:"primary",size:"large",startIcon:r.a.createElement(R.a,null),onClick:t,disabled:!o||!c||n},n?"Processing...":"Process Image")))},M=t(153),z=t(150),W=t(105),_=t(154),H=t(142),V=t(164),J=t(163),U=t(145),G=t(146),Y=t(147),q=t(167),Q=t(160),K=t(151),X=t(169),Z=t(152),$=t(161);const ee=Object(C.a)(e=>({root:{marginTop:e.spacing(2),marginBottom:e.spacing(2),padding:e.spacing(2),backgroundColor:"#f5f5f5",borderRadius:e.shape.borderRadius},button:{marginRight:e.spacing(2)},searchDialog:{minWidth:"500px"},formControl:{marginBottom:e.spacing(2),minWidth:"100%"},searchResults:{marginTop:e.spacing(2)},resultCard:{marginBottom:e.spacing(2)},resultImage:{height:140,objectFit:"contain"},chip:{margin:e.spacing(.5)},similarityChip:{backgroundColor:e.palette.primary.main,color:"white"}}));var ae=e=>{let{results:a}=e;const t=ee(),[l,o]=Object(n.useState)(!1),[c,i]=Object(n.useState)(!1),[m,d]=Object(n.useState)(null),[u,E]=Object(n.useState)(!1),[f,y]=Object(n.useState)("image"),[v,x]=Object(n.useState)(""),[S,C]=Object(n.useState)([]),[j,w]=Object(n.useState)(!1),[O,I]=Object(n.useState)(null),{model:k,data:D}=a,P=()=>{E(!1)},F=()=>"xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g,(function(e){const a=16*Math.random()|0;return("x"===e?a:3&a|8).toString(16)}));return r.a.createElement(s.a,{className:t.root},r.a.createElement(g.a,{variant:"h6",gutterBottom:!0},"Vector Database 
Actions"),r.a.createElement(s.a,{display:"flex",alignItems:"center",mb:2},r.a.createElement(h.a,{variant:"contained",color:"primary",onClick:async()=>{o(!0),d(null);try{let e;if(e="vit"===k?await fetch("/api/add-to-collection",{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({image:D.image,metadata:{model:"vit",classifications:D.classifications}})}):await fetch("/api/add-detected-objects",{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({image:D.image,objects:D.detections,imageId:F()})}),!e.ok)throw new Error("HTTP error! Status: ".concat(e.status));const a=await e.json();if(a.error)throw new Error(a.error);i(!0),setTimeout(()=>i(!1),5e3)}catch(e){console.error("Error saving to vector DB:",e),d("Error saving to vector DB: ".concat(e.message))}finally{o(!1)}},disabled:l,className:t.button},l?r.a.createElement(r.a.Fragment,null,r.a.createElement(b.a,{size:20,color:"inherit",style:{marginRight:8}}),"Saving..."):"Save to Vector DB"),r.a.createElement(h.a,{variant:"outlined",color:"primary",onClick:()=>{E(!0),C([]),I(null)},className:t.button},"Search Similar")),m&&r.a.createElement($.a,{severity:"error",style:{marginTop:8}},m),r.a.createElement(V.a,{open:c,autoHideDuration:5e3,onClose:()=>i(!1)},r.a.createElement($.a,{severity:"success"},"vit"===k?"Image and classifications successfully saved to vector DB!":"Detected objects successfully saved to vector DB!")),r.a.createElement(J.a,{open:u,onClose:P,maxWidth:"md",fullWidth:!0},r.a.createElement(U.a,null,"Search Vector Database"),r.a.createElement(G.a,null,r.a.createElement(Y.a,{className:t.formControl},r.a.createElement(q.a,{id:"search-type-label"},"Search Type"),r.a.createElement(Q.a,{labelId:"search-type-label",id:"search-type",value:f,onChange:e=>{y(e.target.value),C([]),I(null)}},r.a.createElement(K.a,{value:"image"},"Search by Current Image"),r.a.createElement(K.a,{value:"class"},"Search by Class Name"))),"class"===f&&r.a.createElement(Y.a,{className:t.formControl},r.a.createElement(X.a,{label:"Class Name",value:v,onChange:e=>{x(e.target.value)},placeholder:"e.g. 
person, car, dog...",fullWidth:!0})),O&&r.a.createElement($.a,{severity:"error",style:{marginBottom:16}},O),r.a.createElement(s.a,{className:t.searchResults},j?r.a.createElement(s.a,{display:"flex",justifyContent:"center",alignItems:"center",p:4},r.a.createElement(b.a,null),r.a.createElement(g.a,{variant:"body1",style:{marginLeft:16}},"Searching...")):r.a.createElement(r.a.Fragment,null,console.log("Search dialog render - searchResults:",S),S.length>0?(console.log("Rendering search results:",S),console.log("Search results length:",S.length),0===S.length?(console.log("No results to render"),r.a.createElement(g.a,{variant:"body1"},"No results found.")):r.a.createElement(p.a,{container:!0,spacing:2},S.map((e,a)=>{const n=100*(1-e.distance);return r.a.createElement(p.a,{item:!0,xs:12,sm:6,key:a},r.a.createElement(N.a,{className:t.resultCard},e.metadata&&e.metadata.image_data?r.a.createElement(H.a,{className:t.resultImage,component:"img",height:"200",image:"data:image/jpeg;base64,".concat(e.metadata.image_data),alt:e.metadata&&e.metadata.class?e.metadata.class:"Object"}):r.a.createElement(s.a,{className:t.resultImage,style:{backgroundColor:"#f0f0f0",display:"flex",alignItems:"center",justifyContent:"center",height:200}},r.a.createElement(g.a,{variant:"body2",color:"textSecondary"},e.metadata&&e.metadata.class?e.metadata.class:"Object"," Image")),r.a.createElement(B.a,null,r.a.createElement(s.a,{display:"flex",justifyContent:"space-between",alignItems:"center",mb:1},r.a.createElement(g.a,{variant:"subtitle1"},"Result #",a+1),r.a.createElement(T.a,{label:"Similarity: ".concat(n.toFixed(2),"%"),className:t.similarityChip,size:"small"})),r.a.createElement(g.a,{variant:"body2",color:"textSecondary"},r.a.createElement("strong",null,"Class:")," ",e.metadata.class||"N/A"),e.metadata.confidence&&r.a.createElement(g.a,{variant:"body2",color:"textSecondary"},r.a.createElement("strong",null,"Confidence:")," ",(100*e.metadata.confidence).toFixed(2),"%"),r.a.createElement(g.a,{variant:"body2",color:"textSecondary"},r.a.createElement("strong",null,"Object ID:")," ",e.id))))}))):r.a.createElement(g.a,{variant:"body1"},"No results found. Please try another search.")))),r.a.createElement(Z.a,null,r.a.createElement(h.a,{onClick:P,color:"default"},"Close"),r.a.createElement(h.a,{onClick:async()=>{w(!0),I(null);try{let e={};if("image"===f)e={searchType:"image",image:D.image,n_results:5};else{if(!v.trim())throw new Error("Please enter a class name");e={searchType:"class",class_name:v.trim(),n_results:5}}const a=await fetch("/api/search-similar-objects",{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify(e)});if(!a.ok)throw new Error("HTTP error! 
Status: ".concat(a.status));const t=await a.json();if(t.error)throw new Error(t.error);if(console.log("Search API response:",t),!t.success||!Array.isArray(t.results))throw console.error("Unexpected API response format:",t),new Error("Unexpected API response format");console.log("Setting search results array:",t.results),console.log("Results array length:",t.results.length),console.log("First result item:",t.results[0]),C(t.results)}catch(e){console.error("Error searching vector DB:",e),I("Error searching vector DB: ".concat(e.message))}finally{w(!1)}},color:"primary",variant:"contained",disabled:j||"class"===f&&!v.trim()},"Search"))))};const te=Object(C.a)(e=>({paper:{padding:e.spacing(2)},marginBottom:{marginBottom:e.spacing(2)},resultImage:{maxWidth:"100%",maxHeight:"400px",objectFit:"contain"},dividerMargin:{margin:"".concat(e.spacing(2),"px 0")},chipContainer:{display:"flex",gap:e.spacing(1),flexWrap:"wrap"}}));var ne=e=>{let{results:a}=e;const t=te();if(!a)return null;const{model:n,data:l}=a;if(l.error)return r.a.createElement(E.a,{sx:{p:2,bgcolor:"#ffebee"}},r.a.createElement(g.a,{color:"error"},l.error));const o=()=>l.performance?r.a.createElement(s.a,{className:"performance-info"},r.a.createElement(M.a,{className:t.dividerMargin}),r.a.createElement(g.a,{variant:"body2"},"Inference time: ",(e=>{if(void 0===e||null===e||isNaN(e))return"-";const a=Number(e);return a<1e3?"".concat(a.toFixed(2)," ms"):"".concat((a/1e3).toFixed(2)," s")})(l.performance.inference_time)," on ",l.performance.device)):null;return"yolo"===n||"detr"===n?r.a.createElement(E.a,{className:t.paper},r.a.createElement(g.a,{variant:"h6",gutterBottom:!0},"yolo"===n?"YOLOv8":"DETR"," Detection Results"),r.a.createElement(p.a,{container:!0,spacing:3},r.a.createElement(p.a,{item:!0,xs:12,md:6},l.image&&r.a.createElement(s.a,{className:t.marginBottom},r.a.createElement(g.a,{variant:"subtitle1",gutterBottom:!0},"Detection Result"),r.a.createElement("img",{src:"data:image/png;base64,".concat(l.image),alt:"Detection Result",className:t.resultImage}))),r.a.createElement(p.a,{item:!0,xs:12,md:6},r.a.createElement(s.a,{className:t.marginBottom},r.a.createElement(g.a,{variant:"subtitle1",gutterBottom:!0},"Detected Objects:"),l.detections&&l.detections.length>0?r.a.createElement(z.a,null,l.detections.map((e,a)=>r.a.createElement(r.a.Fragment,{key:a},r.a.createElement(W.a,null,r.a.createElement(_.a,{primary:r.a.createElement(s.a,{style:{display:"flex",alignItems:"center"}},r.a.createElement(g.a,{variant:"body1",component:"span"},e.class),r.a.createElement(T.a,{label:"".concat((100*e.confidence).toFixed(0),"%"),size:"small",color:"primary",style:{marginLeft:8}})),secondary:"Bounding Box: [".concat(e.bbox.join(", "),"]")})),a<l.detections.length-1&&r.a.createElement(M.a,null)))):r.a.createElement(g.a,{variant:"body1"},"No objects detected")))),o(),r.a.createElement(ae,{results:a})):"vit"===n?r.a.createElement(E.a,{className:t.paper},r.a.createElement(g.a,{variant:"h6",gutterBottom:!0},"ViT Classification Results"),r.a.createElement(g.a,{variant:"subtitle1",gutterBottom:!0},"Top Predictions:"),l.top_predictions&&l.top_predictions.length>0?r.a.createElement(z.a,null,l.top_predictions.map((e,a)=>r.a.createElement(r.a.Fragment,{key:a},r.a.createElement(W.a,null,r.a.createElement(_.a,{primary:r.a.createElement(s.a,{style:{display:"flex",alignItems:"center"}},r.a.createElement(g.a,{variant:"body1",component:"span"},e.rank,". 
",e.class),r.a.createElement(T.a,{label:"".concat((100*e.probability).toFixed(1),"%"),size:"small",color:0===a?"primary":"default",style:{marginLeft:8}}))})),a<l.top_predictions.length-1&&r.a.createElement(M.a,null)))):r.a.createElement(g.a,{variant:"body1"},"No classifications available"),o(),r.a.createElement(ae,{results:a})):null};const re=Object(C.a)(e=>({paper:{padding:e.spacing(2),marginTop:e.spacing(2)},marginBottom:{marginBottom:e.spacing(2)},dividerMargin:{margin:"".concat(e.spacing(2),"px 0")},responseBox:{padding:e.spacing(2),backgroundColor:"#f5f5f5",borderRadius:e.shape.borderRadius,marginTop:e.spacing(2),whiteSpace:"pre-wrap"},buttonProgress:{marginLeft:e.spacing(1)}}));var le=e=>{let{visionResults:a,model:t}=e;const l=re(),[o,c]=Object(n.useState)(""),[i,m]=Object(n.useState)(!1),[d,u]=Object(n.useState)(null),[p,f]=Object(n.useState)(null);return a?r.a.createElement(E.a,{className:l.paper},r.a.createElement(g.a,{variant:"h6",gutterBottom:!0},"Ask AI about the ","vit"===t?"Classification":"Detection"," Results"),r.a.createElement(g.a,{variant:"body2",className:l.marginBottom},"Ask a question about the detected objects or classifications to get an AI-powered analysis."),r.a.createElement(X.a,{fullWidth:!0,label:"Your question about the image",variant:"outlined",value:o,onChange:e=>c(e.target.value),disabled:i,className:l.marginBottom,placeholder:"vit"===t?"E.g., What category does this image belong to?":"E.g., How many people are in this image?"}),r.a.createElement(h.a,{variant:"contained",color:"primary",onClick:async()=>{if(o.trim()){m(!0),f(null);try{const e=await fetch("/api/analyze",{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({visionResults:a,userQuery:o})});if(!e.ok)throw new Error("HTTP error! Status: ".concat(e.status));const t=await e.json();t.error?f(t.error):u(t)}catch(e){console.error("Error analyzing with LLM:",e),f("Error analyzing with LLM: ".concat(e.message))}finally{m(!1)}}},disabled:i||!o.trim()},"Analyze with AI",i&&r.a.createElement(b.a,{size:24,className:l.buttonProgress})),p&&r.a.createElement(s.a,{mt:2},r.a.createElement(g.a,{color:"error"},p)),d&&r.a.createElement(r.a.Fragment,null,r.a.createElement(M.a,{className:l.dividerMargin}),r.a.createElement(g.a,{variant:"subtitle1",gutterBottom:!0},"AI Analysis:"),r.a.createElement(s.a,{className:l.responseBox},r.a.createElement(g.a,{variant:"body1"},d.response)),d.performance&&r.a.createElement(s.a,{mt:1},r.a.createElement(g.a,{variant:"body2",color:"textSecondary"},"Analysis time: ",(e=>{if(void 0===e||null===e||isNaN(e))return"-";const a=Number(e);return a<1e3?"".concat(a.toFixed(2)," ms"):"".concat((a/1e3).toFixed(2)," s")})(d.performance.inference_time)," on ",d.performance.device)))):null};t(102);const oe=Object(c.a)({palette:{primary:{main:"#3f51b5"},secondary:{main:"#f50057"}},typography:{fontFamily:"Roboto, Arial, sans-serif"}});var ce=function(){const[e,a]=Object(n.useState)(null),[t,l]=Object(n.useState)(""),[o,c]=Object(n.useState)(!1),[h,f]=Object(n.useState)(null),[y,v]=Object(n.useState)(null),[x,S]=Object(n.useState)({yolo:!1,detr:!1,vit:!1});return Object(n.useEffect)(()=>{fetch("/api/status").then(e=>e.json()).then(e=>{S(e.models)}).catch(e=>{console.error("Error checking API status:",e),v("Error connecting to the backend API. 
Please make sure the server is running.")})},[]),r.a.createElement(i.a,{theme:oe},r.a.createElement(s.a,{style:{flexGrow:1}},r.a.createElement(m.a,{position:"static"},r.a.createElement(d.a,null,r.a.createElement(g.a,{variant:"h6",style:{flexGrow:1}},"Multi-Model Object Detection Demo"))),r.a.createElement(u.a,{maxWidth:"lg",style:{marginTop:oe.spacing(4),marginBottom:oe.spacing(4)}},r.a.createElement(p.a,{container:!0,spacing:3},r.a.createElement(p.a,{item:!0,xs:12},r.a.createElement(E.a,{style:{padding:oe.spacing(2)}},r.a.createElement(g.a,{variant:"h5",gutterBottom:!0},"Upload an image to see how each model performs!"),r.a.createElement(g.a,{variant:"body1",paragraph:!0},"This demo showcases three different object detection and image classification models:"),r.a.createElement(g.a,{variant:"body1",component:"div"},r.a.createElement("ul",null,r.a.createElement("li",null,r.a.createElement("strong",null,"YOLOv8"),": Fast and accurate object detection"),r.a.createElement("li",null,r.a.createElement("strong",null,"DETR"),": DEtection TRansformer for object detection"),r.a.createElement("li",null,r.a.createElement("strong",null,"ViT"),": Vision Transformer for image classification"))))),r.a.createElement(p.a,{item:!0,xs:12,md:6},r.a.createElement(w,{onImageUpload:e=>{a(e),f(null),v(null)}})),r.a.createElement(p.a,{item:!0,xs:12,md:6},r.a.createElement(L,{onModelSelect:e=>{l(e),f(null),v(null)},onProcess:async()=>{if(!e||!t)return void v("Please select both an image and a model");c(!0),v(null);const a=new FormData;a.append("image",e);let n="";switch(t){case"yolo":n="/api/detect/yolo";break;case"detr":n="/api/detect/detr";break;case"vit":n="/api/classify/vit";break;default:return v("Invalid model selection"),void c(!1)}try{const e=await fetch(n,{method:"POST",body:a});if(!e.ok)throw new Error("HTTP error! Status: ".concat(e.status));const r=await e.json();f({model:t,data:r})}catch(r){console.error("Error processing image:",r),v("Error processing image: ".concat(r.message))}finally{c(!1)}},isProcessing:o,modelsStatus:x,selectedModel:t,imageSelected:!!e})),y&&r.a.createElement(p.a,{item:!0,xs:12},r.a.createElement(E.a,{style:{padding:oe.spacing(2),backgroundColor:"#ffebee"}},r.a.createElement(g.a,{color:"error"},y))),o&&r.a.createElement(p.a,{item:!0,xs:12,style:{textAlign:"center",margin:"".concat(oe.spacing(4),"px 0")}},r.a.createElement(b.a,null),r.a.createElement(g.a,{variant:"h6",style:{marginTop:oe.spacing(2)}},"Processing image...")),h&&r.a.createElement(r.a.Fragment,null,r.a.createElement(p.a,{item:!0,xs:12},r.a.createElement(ne,{results:h})),r.a.createElement(p.a,{item:!0,xs:12},r.a.createElement(le,{visionResults:h.data,model:h.model})))))))};var ie=e=>{e&&e instanceof Function&&t.e(3).then(t.bind(null,170)).then(a=>{let{getCLS:t,getFID:n,getFCP:r,getLCP:l,getTTFB:o}=a;t(e),n(e),r(e),l(e),o(e)})};o.a.render(r.a.createElement(r.a.StrictMode,null,r.a.createElement(ce,null)),document.getElementById("root")),ie()},92:function(e,a,t){e.exports=t(103)},97:function(e,a,t){}},[[92,1,2]]]);
//# sourceMappingURL=main.ad7f086c.chunk.js.map
static/js/main.ad7f086c.chunk.js.map
ADDED
@@ -0,0 +1 @@
{"version":3,"sources":["components/ImageUploader.js","components/ModelSelector.js","components/VectorDBActions.js","components/ResultDisplay.js","components/LlmAnalysis.js","App.js","reportWebVitals.js","index.js"],"names":["useStyles","makeStyles","theme","paper","padding","spacing","display","flexDirection","alignItems","height","minHeight","transition","dragActive","border","backgroundColor","dragInactive","uploadBox","justifyContent","width","cursor","uploadIcon","fontSize","color","marginBottom","supportText","marginTop","previewBox","position","imageContainer","overflow","deleteButton","top","right","ImageUploader","_ref","onImageUpload","classes","previewUrl","setPreviewUrl","useState","setDragActive","fileInputRef","useRef","handleDrag","e","preventDefault","stopPropagation","type","handleFiles","file","startsWith","URL","createObjectURL","alert","React","createElement","Paper","className","concat","onDragEnter","onDragLeave","onDragOver","onDrop","dataTransfer","files","ref","accept","onChange","target","style","Box","Typography","variant","gutterBottom","src","alt","IconButton","aria-label","onClick","handleRemoveImage","current","value","DeleteIcon","onButtonClick","click","CloudUploadIcon","Button","component","startIcon","card","selectedCard","unavailableCard","opacity","cardContent","flexGrow","chipContainer","successChip","errorChip","modelType","processButton","textAlign","ModelSelector","onModelSelect","onProcess","isProcessing","modelsStatus","selectedModel","imageSelected","models","id","name","description","icon","VisibilityIcon","available","yolo","detr","CategoryIcon","vit","handleModelClick","modelId","find","m","sx","p","Grid","container","map","model","item","xs","sm","key","Card","CardContent","mb","Chip","label","size","CardActions","disabled","fullWidth","PlayArrowIcon","root","borderRadius","shape","button","marginRight","searchDialog","minWidth","formControl","searchResults","resultCard","resultImage","objectFit","chip","margin","similarityChip","palette","primary","main","VectorDBActions","results","isSaving","setIsSaving","saveSuccess","setSaveSuccess","saveError","setSaveError","openSearchDialog","setOpenSearchDialog","searchType","setSearchType","searchClass","setSearchClass","setSearchResults","isSearching","setIsSearching","searchError","setSearchError","data","handleCloseSearchDialog","generateUUID","replace","c","r","Math","random","toString","async","response","fetch","method","headers","body","JSON","stringify","image","metadata","classifications","objects","detections","imageId","ok","Error","status","result","json","error","setTimeout","err","console","message","Fragment","CircularProgress","handleOpenSearchDialog","Alert","severity","Snackbar","open","autoHideDuration","onClose","Dialog","maxWidth","DialogTitle","DialogContent","FormControl","InputLabel","Select","labelId","event","MenuItem","TextField","placeholder","marginLeft","log","length","index","similarity","distance","image_data","CardMedia","class","toFixed","confidence","DialogActions","requestBody","n_results","trim","class_name","success","Array","isArray","maxHeight","dividerMargin","gap","flexWrap","ResultDisplay","bgcolor","renderPerformanceInfo","performance","Divider","ms","undefined","isNaN","num","Number","formatTime","inference_time","device","md","List","detection","ListItem","ListItemText","secondary","bbox","join","top_predictions","prediction","rank","probability","responseBox","whiteSpace","buttonProgress","LlmAnalysis","visionResults","userQuery","setUserQuery","isAnalyz
ing","setIsAnalyzing","analysisResult","setAnalysisResult","setError","mt","createMuiTheme","typography","fontFamily","App","selectedImage","setSelectedImage","setSelectedModel","setIsProcessing","setResults","setModelsStatus","useEffect","then","catch","ThemeProvider","AppBar","Toolbar","Container","paragraph","formData","FormData","append","endpoint","reportWebVitals","onPerfEntry","Function","getCLS","getFID","getFCP","getLCP","getTTFB","ReactDOM","render","StrictMode","document","getElementById"],"mappings":"sVAYA,MAAMA,EAAYC,YAAYC,IAAK,CACjCC,MAAO,CACLC,QAASF,EAAMG,QAAQ,GACvBC,QAAS,OACTC,cAAe,SACfC,WAAY,SACZC,OAAQ,OACRC,UAAW,IACXC,WAAY,iBAEdC,WAAY,CACVC,OAAQ,qBACRC,gBAAiB,2BAEnBC,aAAc,CACZF,OAAQ,kBACRC,gBAAiB,SAEnBE,UAAW,CACTV,QAAS,OACTC,cAAe,SACfC,WAAY,SACZS,eAAgB,SAChBR,OAAQ,OACRS,MAAO,OACPC,OAAQ,WAEVC,WAAY,CACVC,SAAU,GACVC,MAAO,UACPC,aAAcrB,EAAMG,QAAQ,IAE9BmB,YAAa,CACXC,UAAWvB,EAAMG,QAAQ,IAE3BqB,WAAY,CACVpB,QAAS,OACTC,cAAe,SACfC,WAAY,SACZU,MAAO,OACPT,OAAQ,OACRkB,SAAU,YAEZC,eAAgB,CACdD,SAAU,WACVT,MAAO,OACPT,OAAQ,OACRH,QAAS,OACTW,eAAgB,SAChBT,WAAY,SACZqB,SAAU,SACVJ,UAAWvB,EAAMG,QAAQ,IAE3ByB,aAAc,CACZH,SAAU,WACVI,IAAK,EACLC,MAAO,EACPlB,gBAAiB,2BACjB,UAAW,CACTA,gBAAiB,gCAyHRmB,MApHOC,IAAwB,IAAvB,cAAEC,GAAeD,EACtC,MAAME,EAAUpC,KACTqC,EAAYC,GAAiBC,mBAAS,OACtC3B,EAAY4B,GAAiBD,oBAAS,GACvCE,EAAeC,iBAAO,MAEtBC,EAAcC,IAClBA,EAAEC,iBACFD,EAAEE,kBACa,cAAXF,EAAEG,MAAmC,aAAXH,EAAEG,KAC9BP,GAAc,GACM,cAAXI,EAAEG,MACXP,GAAc,IAoBZQ,EAAeC,IACfA,EAAKF,KAAKG,WAAW,WACvBZ,EAAca,IAAIC,gBAAgBH,IAClCd,EAAcc,IAEdI,MAAM,gCAcV,OACEC,IAAAC,cAACC,IAAK,CACJC,UAAS,GAAAC,OAAKtB,EAAQjC,MAAK,KAAAuD,OAAI9C,EAAawB,EAAQxB,WAAawB,EAAQrB,cACzE4C,YAAahB,EACbiB,YAAajB,EACbkB,WAAYlB,EACZmB,OAzCgBlB,IAClBA,EAAEC,iBACFD,EAAEE,kBACFN,GAAc,GACVI,EAAEmB,aAAaC,OAASpB,EAAEmB,aAAaC,MAAM,IAC/ChB,EAAYJ,EAAEmB,aAAaC,MAAM,MAsCjCV,IAAAC,cAAA,SACEU,IAAKxB,EACLM,KAAK,OACLmB,OAAO,UACPC,SAtCgBvB,IACpBA,EAAEC,iBACED,EAAEwB,OAAOJ,OAASpB,EAAEwB,OAAOJ,MAAM,IACnChB,EAAYJ,EAAEwB,OAAOJ,MAAM,KAoCzBK,MAAO,CAAE/D,QAAS,UAGlB+B,EAyBAiB,IAAAC,cAACe,IAAG,CAACb,UAAWrB,EAAQV,YACtB4B,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GAAC,WAGtCnB,IAAAC,cAACe,IAAG,CAACb,UAAWrB,EAAQR,gBACtB0B,IAAAC,cAAA,OACEmB,IAAKrC,EACLsC,IAAI,UACJlB,UAAU,kBAEZH,IAAAC,cAACqB,IAAU,CACTC,aAAW,SACXpB,UAAWrB,EAAQN,aACnBgD,QA5DcC,KACxBzC,EAAc,MACdH,EAAc,MACdM,EAAauC,QAAQC,MAAQ,KA2DnB3B,IAAAC,cAAC2B,IAAU,SAvCjB5B,IAAAC,cAACe,IAAG,CACFb,UAAWrB,EAAQpB,UACnB8D,QA7BcK,KACpB1C,EAAauC,QAAQI,UA8Bf9B,IAAAC,cAAC8B,IAAe,CAAC5B,UAAWrB,EAAQhB,aACpCkC,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GAAC,6BAGtCnB,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,gBAAgBmD,cAAY,GAAC,MAG/DnB,IAAAC,cAAC+B,IAAM,CACLd,QAAQ,YACRlD,MAAM,UACNiE,UAAU,OACVC,UAAWlC,IAAAC,cAAC8B,IAAe,OAC5B,gBAGD/B,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,gBAAgBmC,UAAWrB,EAAQZ,aAAa,uC,uFCnJ5F,MAAMxB,EAAYC,YAAYC,IAAK,CACjCuF,KAAM,CACJhF,OAAQ,OACRH,QAAS,OACTC,cAAe,UAEjBmF,aAAc,CACZ7E,OAAQ,qBAEV8E,gBAAiB,CACfC,QAAS,IAEXC,YAAa,CACXC,SAAU,GAEZC,cAAe,CACbxE,aAAcrB,EAAMG,QAAQ,MAE9B2F,YAAa,CACXlF,gBAAiB,UACjBQ,MAAO,QAET2E,UAAW,CACTnF,gBAAiB,UACjBQ,MAAO,QAET4E,UAAW,CACTzE,UAAWvB,EAAMG,QAAQ,IAE3B8F,cAAe,CACb1E,UAAWvB,EAAMG,QAAQ,GACzB+F,UAAW,aAwHAC,MApHOnE,IAOf,IAPgB,cACrBoE,EAAa,UACbC,EAAS,aACTC,EAAY,aACZC,EAAY,cACZC,EAAa,cACbC,GACDzE,EACC,MAAME,EAAUpC,IAEV4G,EAAS,CACb,CACEC,GAAI,OACJC,KAAM,SACNC,YAAa,qCACbC,KAAM1D,IAAAC,cAAC0D,IAAc,MACrBC,UAAWT,EAAaU,MAE1B,CACEN,GAAI,OACJC,KAAM,OACNC,YAAa,6CACbC,KAAM1D,IAAAC,cAAC0D,IAAc,MACrBC,UAAWT,EAAaW,MAE1B,CACEP,GAAI,MACJC,KAAM,MACNC,YAAa,8CACbC,KAAM1D,IAAAC,cAAC8D,IAAY,MACnBH,UAAWT,EAAaa,MAItBC,EAAoBC,IACpBZ,EAAOa,KAAKC,GAA
KA,EAAEb,KAAOW,GAASN,WACrCZ,EAAckB,IAIlB,OACElE,IAAAC,cAACe,IAAG,CAACqD,GAAI,CAAEC,EAAG,EAAGnH,OAAQ,SACvB6C,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GAAC,gBAItCnB,IAAAC,cAACsE,IAAI,CAACC,WAAS,EAACzH,QAAS,GACtBuG,EAAOmB,IAAKC,GACX1E,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,GAAIC,GAAI,EAAGC,IAAKJ,EAAMnB,IACnCvD,IAAAC,cAAC8E,IAAI,CACH5E,UAAS,qBAAAC,OACLtB,EAAQqD,KAAI,uBAAA/B,OACZgD,IAAkBsB,EAAMnB,GAAKzE,EAAQsD,aAAe,GAAE,uBAAAhC,OACrDsE,EAAMd,UAAsC,GAA1B9E,EAAQuD,gBAAoB,oBAEnDb,QAASA,IAAMyC,EAAiBS,EAAMnB,KAEtCvD,IAAAC,cAAC+E,IAAW,CAAC7E,UAAWrB,EAAQyD,aAC9BvC,IAAAC,cAACe,IAAG,CAACqD,GAAI,CAAEY,GAAI,EAAGjH,MAAO,YACtB0G,EAAMhB,MAET1D,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKe,UAAU,MAAMd,cAAY,GAClDuD,EAAMlB,MAETxD,IAAAC,cAAA,OAAKE,UAAWrB,EAAQ2D,eACrBiC,EAAMd,UACL5D,IAAAC,cAACiF,IAAI,CACHC,MAAM,YACNhF,UAAWrB,EAAQ4D,YACnB0C,KAAK,UAGPpF,IAAAC,cAACiF,IAAI,CACHC,MAAM,gBACNhF,UAAWrB,EAAQ6D,UACnByC,KAAK,WAIXpF,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,iBAC/B0G,EAAMjB,cAGXzD,IAAAC,cAACoF,IAAW,KACVrF,IAAAC,cAAC+B,IAAM,CACLoD,KAAK,QACL5D,QAASA,IAAMyC,EAAiBS,EAAMnB,IACtC+B,UAAWZ,EAAMd,UACjB5F,MAAOoF,IAAkBsB,EAAMnB,GAAK,UAAY,UAChDrC,QAASkC,IAAkBsB,EAAMnB,GAAK,YAAc,WACpDgC,WAAS,GAERnC,IAAkBsB,EAAMnB,GAAK,WAAa,eAQvDvD,IAAAC,cAAA,OAAKE,UAAWrB,EAAQ+D,eACtB7C,IAAAC,cAAC+B,IAAM,CACLd,QAAQ,YACRlD,MAAM,UACNoH,KAAK,QACLlD,UAAWlC,IAAAC,cAACuF,IAAa,MACzBhE,QAASyB,EACTqC,UAAWlC,IAAkBC,GAAiBH,GAE7CA,EAAe,gBAAkB,oB,gJCvI5C,MAAMxG,GAAYC,YAAYC,IAAK,CACjC6I,KAAM,CACJtH,UAAWvB,EAAMG,QAAQ,GACzBkB,aAAcrB,EAAMG,QAAQ,GAC5BD,QAASF,EAAMG,QAAQ,GACvBS,gBAAiB,UACjBkI,aAAc9I,EAAM+I,MAAMD,cAE5BE,OAAQ,CACNC,YAAajJ,EAAMG,QAAQ,IAE7B+I,aAAc,CACZC,SAAU,SAEZC,YAAa,CACX/H,aAAcrB,EAAMG,QAAQ,GAC5BgJ,SAAU,QAEZE,cAAe,CACb9H,UAAWvB,EAAMG,QAAQ,IAE3BmJ,WAAY,CACVjI,aAAcrB,EAAMG,QAAQ,IAE9BoJ,YAAa,CACXhJ,OAAQ,IACRiJ,UAAW,WAEbC,KAAM,CACJC,OAAQ1J,EAAMG,QAAQ,KAExBwJ,eAAgB,CACd/I,gBAAiBZ,EAAM4J,QAAQC,QAAQC,KACvC1I,MAAO,YAuXI2I,OAnXS/H,IAAkB,IAAjB,QAAEgI,GAAShI,EAClC,MAAME,EAAUpC,MACTmK,EAAUC,GAAe7H,oBAAS,IAClC8H,EAAaC,GAAkB/H,oBAAS,IACxCgI,EAAWC,GAAgBjI,mBAAS,OACpCkI,EAAkBC,GAAuBnI,oBAAS,IAClDoI,EAAYC,GAAiBrI,mBAAS,UACtCsI,EAAaC,GAAkBvI,mBAAS,KACxCgH,EAAewB,GAAoBxI,mBAAS,KAC5CyI,EAAaC,GAAkB1I,oBAAS,IACxC2I,EAAaC,GAAkB5I,mBAAS,OAGzC,MAAEyF,EAAK,KAAEoD,GAASlB,EAoElBmB,EAA0BA,KAC9BX,GAAoB,IAkFhBY,EAAeA,IACZ,uCAAuCC,QAAQ,SAAS,SAASC,GACtE,MAAMC,EAAoB,GAAhBC,KAAKC,SAAgB,EAE/B,OADgB,MAANH,EAAYC,EAAS,EAAJA,EAAU,GAC5BG,SAAS,OA6EtB,OACEtI,IAAAC,cAACe,IAAG,CAACb,UAAWrB,EAAQ2G,MACtBzF,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GAAC,2BAItCnB,IAAAC,cAACe,IAAG,CAAChE,QAAQ,OAAOE,WAAW,SAAS+H,GAAI,GAC1CjF,IAAAC,cAAC+B,IAAM,CACLd,QAAQ,YACRlD,MAAM,UACNwD,QA/OqB+G,UAC3BzB,GAAY,GACZI,EAAa,MAEb,IACE,IAAIsB,EAgCJ,GA5BEA,EAFY,QAAV9D,QAEe+D,MAAM,yBAA0B,CAC/CC,OAAQ,OACRC,QAAS,CACP,eAAgB,oBAElBC,KAAMC,KAAKC,UAAU,CACnBC,MAAOjB,EAAKiB,MACZC,SAAU,CACRtE,MAAO,MACPuE,gBAAiBnB,EAAKmB,2BAMXR,MAAM,4BAA6B,CAClDC,OAAQ,OACRC,QAAS,CACP,eAAgB,oBAElBC,KAAMC,KAAKC,UAAU,CACnBC,MAAOjB,EAAKiB,MACZG,QAASpB,EAAKqB,WACdC,QAASpB,SAKVQ,EAASa,GACZ,MAAM,IAAIC,MAAM,uBAADlJ,OAAwBoI,EAASe,SAGlD,MAAMC,QAAehB,EAASiB,OAE9B,GAAID,EAAOE,MACT,MAAM,IAAIJ,MAAME,EAAOE,OAGzB1C,GAAe,GACf2C,WAAW,IAAM3C,GAAe,GAAQ,KACxC,MAAO4C,GACPC,QAAQH,MAAM,6BAA8BE,GAC5C1C,EAAa,8BAAD9G,OAA+BwJ,EAAIE,UAChD,QACChD,GAAY,KA2LRxB,SAAUuB,EACV1G,UAAWrB,EAAQ8G,QAElBiB,EACC7G,IAAAC,cAAAD,IAAA+J,SAAA,KACE/J,IAAAC,cAAC+J,IAAgB,CAAC5E,KAAM,GAAIpH,MAAM,UAAU+C,MAAO,CAAE8E,YAAa,KAAO,aAI3E,qBAIJ7F,IAAAC,cAAC+B,IAAM,CACLd,QAAQ,WACRlD,MAAM,UACNwD,QAtMuByI,KAC7B7C,GAAoB,GACpBK,EAAiB,IACjBI,EAAe,OAoMT1H,UAAWrB,EAAQ8G,QACpB,mBAKFqB,GACCjH,IAAAC,cAACiK,IAAK,CAACC,SAAS,QAAQpJ,M
AAO,CAAE5C,UAAW,IACzC8I,GAILjH,IAAAC,cAACmK,IAAQ,CAACC,KAAMtD,EAAauD,iBAAkB,IAAMC,QAASA,IAAMvD,GAAe,IACjFhH,IAAAC,cAACiK,IAAK,CAACC,SAAS,WACH,QAAVzF,EACC,6DAEA,sDAMN1E,IAAAC,cAACuK,IAAM,CACLH,KAAMlD,EACNoD,QAASxC,EACT0C,SAAS,KACTlF,WAAS,GAETvF,IAAAC,cAACyK,IAAW,KAAC,0BACb1K,IAAAC,cAAC0K,IAAa,KACZ3K,IAAAC,cAAC2K,IAAW,CAACzK,UAAWrB,EAAQkH,aAC9BhG,IAAAC,cAAC4K,IAAU,CAACtH,GAAG,qBAAoB,eACnCvD,IAAAC,cAAC6K,IAAM,CACLC,QAAQ,oBACRxH,GAAG,cACH5B,MAAO0F,EACPxG,SAhOoBmK,IAC9B1D,EAAc0D,EAAMlK,OAAOa,OAC3B8F,EAAiB,IACjBI,EAAe,QA+NL7H,IAAAC,cAACgL,IAAQ,CAACtJ,MAAM,SAAQ,2BACxB3B,IAAAC,cAACgL,IAAQ,CAACtJ,MAAM,SAAQ,0BAIZ,UAAf0F,GACCrH,IAAAC,cAAC2K,IAAW,CAACzK,UAAWrB,EAAQkH,aAC9BhG,IAAAC,cAACiL,IAAS,CACR/F,MAAM,aACNxD,MAAO4F,EACP1G,SArOmBmK,IAC/BxD,EAAewD,EAAMlK,OAAOa,QAqOhBwJ,YAAY,2BACZ5F,WAAS,KAKdqC,GACC5H,IAAAC,cAACiK,IAAK,CAACC,SAAS,QAAQpJ,MAAO,CAAE9C,aAAc,KAC5C2J,GAIL5H,IAAAC,cAACe,IAAG,CAACb,UAAWrB,EAAQmH,eACrByB,EACC1H,IAAAC,cAACe,IAAG,CAAChE,QAAQ,OAAOW,eAAe,SAAST,WAAW,SAASoH,EAAG,GACjEtE,IAAAC,cAAC+J,IAAgB,MACjBhK,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQH,MAAO,CAAEqK,WAAY,KAAM,iBAKzDpL,IAAAC,cAAAD,IAAA+J,SAAA,KACGF,QAAQwB,IAAI,wCAAyCpF,GACrDA,EAAcqF,OAAS,GA5KpCzB,QAAQwB,IAAI,4BAA6BpF,GACzC4D,QAAQwB,IAAI,yBAA0BpF,EAAcqF,QAEvB,IAAzBrF,EAAcqF,QAChBzB,QAAQwB,IAAI,wBAEVrL,IAAAC,cAACgB,IAAU,CAACC,QAAQ,SAAQ,sBAK9BlB,IAAAC,cAACsE,IAAI,CAACC,WAAS,EAACzH,QAAS,GACtBkJ,EAAcxB,IAAI,CAAC+E,EAAQ+B,KAC1B,MAAMC,EAAqC,KAAvB,EAAIhC,EAAOiC,UAE/B,OACEzL,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,GAAIC,GAAI,EAAGC,IAAKyG,GAC7BvL,IAAAC,cAAC8E,IAAI,CAAC5E,UAAWrB,EAAQoH,YACtBsD,EAAOR,UAAYQ,EAAOR,SAAS0C,WAClC1L,IAAAC,cAAC0L,IAAS,CACRxL,UAAWrB,EAAQqH,YACnBlE,UAAU,MACV9E,OAAO,MACP4L,MAAK,0BAAA3I,OAA4BoJ,EAAOR,SAAS0C,YACjDrK,IAAKmI,EAAOR,UAAYQ,EAAOR,SAAS4C,MAAQpC,EAAOR,SAAS4C,MAAQ,WAG1E5L,IAAAC,cAACe,IAAG,CACFb,UAAWrB,EAAQqH,YACnBpF,MAAO,CACLvD,gBAAiB,UACjBR,QAAS,OACTE,WAAY,SACZS,eAAgB,SAChBR,OAAQ,MAGV6C,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,iBAC/BwL,EAAOR,UAAYQ,EAAOR,SAAS4C,MAAQpC,EAAOR,SAAS4C,MAAQ,SAAS,WAInF5L,IAAAC,cAAC+E,IAAW,KACVhF,IAAAC,cAACe,IAAG,CAAChE,QAAQ,OAAOW,eAAe,gBAAgBT,WAAW,SAAS+H,GAAI,GACzEjF,IAAAC,cAACgB,IAAU,CAACC,QAAQ,aAAY,WAASqK,EAAQ,GACjDvL,IAAAC,cAACiF,IAAI,CACHC,MAAK,eAAA/E,OAAiBoL,EAAWK,QAAQ,GAAE,KAC3C1L,UAAWrB,EAAQyH,eACnBnB,KAAK,WAGTpF,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,iBAChCgC,IAAAC,cAAA,cAAQ,UAAe,IAAEuJ,EAAOR,SAAS4C,OAAS,OAEnDpC,EAAOR,SAAS8C,YACf9L,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,iBAChCgC,IAAAC,cAAA,cAAQ,eAAoB,KAAgC,IAA7BuJ,EAAOR,SAAS8C,YAAkBD,QAAQ,GAAG,KAGhF7L,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,iBAChCgC,IAAAC,cAAA,cAAQ,cAAmB,IAAEuJ,EAAOjG,WAiHtCvD,IAAAC,cAACgB,IAAU,CAACC,QAAQ,SAAQ,mDAMtClB,IAAAC,cAAC8L,IAAa,KACZ/L,IAAAC,cAAC+B,IAAM,CAACR,QAASuG,EAAyB/J,MAAM,WAAU,SAG1DgC,IAAAC,cAAC+B,IAAM,CACLR,QApQW+G,UACnBZ,GAAe,GACfE,EAAe,MAEf,IACE,IAAImE,EAAc,GAElB,GAAmB,UAAf3E,EAEF2E,EAAc,CACZ3E,WAAY,QACZ0B,MAAOjB,EAAKiB,MACZkD,UAAW,OAER,CAEL,IAAK1E,EAAY2E,OACf,MAAM,IAAI5C,MAAM,6BAGlB0C,EAAc,CACZ3E,WAAY,QACZ8E,WAAY5E,EAAY2E,OACxBD,UAAW,GAIf,MAAMzD,QAAiBC,MAAM,8BAA+B,CAC1DC,OAAQ,OACRC,QAAS,CACP,eAAgB,oBAElBC,KAAMC,KAAKC,UAAUkD,KAGvB,IAAKxD,EAASa,GACZ,MAAM,IAAIC,MAAM,uBAADlJ,OAAwBoI,EAASe,SAGlD,MAAMC,QAAehB,EAASiB,OAE9B,GAAID,EAAOE,MACT,MAAM,IAAIJ,MAAME,EAAOE,OAMzB,GAHAG,QAAQwB,IAAI,uBAAwB7B,IAGhCA,EAAO4C,UAAWC,MAAMC,QAAQ9C,EAAO5C,SAOzC,MADAiD,QAAQH,MAAM,kCAAmCF,GAC3C,IAAIF,MAAM,kCANhBO,QAAQwB,IAAI,gCAAiC7B,EAAO5C,SACpDiD,QAAQwB,IAAI,wBAAyB7B,EAAO5C,QAAQ0E,QACpDzB,QAAQwB,IAAI,qBAAsB7B,EAAO5C,QAAQ,IACjDa,EAAiB+B,EAAO5C,SAK1B,MAAOgD,GACPC,QAAQH,MAAM,6BAA8BE,GAC5C/B,EAAe,8BAADzH,OAA+BwJ,EAAIE,UAClD,QACCnC
,GAAe,KAwMT3J,MAAM,UACNkD,QAAQ,YACRoE,SAAUoC,GAA+B,UAAfL,IAA2BE,EAAY2E,QAClE,cCzZX,MAAMxP,GAAYC,YAAYC,IAAK,CACjCC,MAAO,CACLC,QAASF,EAAMG,QAAQ,IAEzBkB,aAAc,CACZA,aAAcrB,EAAMG,QAAQ,IAE9BoJ,YAAa,CACXsE,SAAU,OACV8B,UAAW,QACXnG,UAAW,WAEboG,cAAe,CACblG,OAAO,GAADlG,OAAKxD,EAAMG,QAAQ,GAAE,SAE7B0F,cAAe,CACbzF,QAAS,OACTyP,IAAK7P,EAAMG,QAAQ,GACnB2P,SAAU,WAoKCC,OAhKO/N,IAAkB,IAAjB,QAAEgI,GAAShI,EAChC,MAAME,EAAUpC,KAChB,IAAKkK,EAAS,OAAO,KAErB,MAAM,MAAElC,EAAK,KAAEoD,GAASlB,EAWxB,GAAIkB,EAAK4B,MACP,OACE1J,IAAAC,cAACC,IAAK,CAACmE,GAAI,CAAEC,EAAG,EAAGsI,QAAS,YAC1B5M,IAAAC,cAACgB,IAAU,CAACjD,MAAM,SAAS8J,EAAK4B,QAMtC,MAAMmD,EAAwBA,IACvB/E,EAAKgF,YAGR9M,IAAAC,cAACe,IAAG,CAACb,UAAU,oBACbH,IAAAC,cAAC8M,IAAO,CAAC5M,UAAWrB,EAAQ0N,gBAC5BxM,IAAAC,cAACgB,IAAU,CAACC,QAAQ,SAAQ,mBAvBd8L,KAClB,QAAWC,IAAPD,GAA2B,OAAPA,GAAeE,MAAMF,GAAK,MAAO,IACzD,MAAMG,EAAMC,OAAOJ,GACnB,OAAIG,EAAM,IAAY,GAAN/M,OAAU+M,EAAItB,QAAQ,GAAE,OAClC,GAANzL,QAAW+M,EAAM,KAAMtB,QAAQ,GAAE,OAoBVwB,CAAWvF,EAAKgF,YAAYQ,gBAAgB,OAAKxF,EAAKgF,YAAYS,SAN3D,KAahC,MAAc,SAAV7I,GAA8B,SAAVA,EAEpB1E,IAAAC,cAACC,IAAK,CAACC,UAAWrB,EAAQjC,OACxBmD,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GACxB,SAAVuD,EAAmB,SAAW,OAAO,sBAGxC1E,IAAAC,cAACsE,IAAI,CAACC,WAAS,EAACzH,QAAS,GACvBiD,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,GAAI4I,GAAI,GACpB1F,EAAKiB,OACJ/I,IAAAC,cAACe,IAAG,CAACb,UAAWrB,EAAQb,cACtB+B,IAAAC,cAACgB,IAAU,CAACC,QAAQ,YAAYC,cAAY,GAAC,oBAG7CnB,IAAAC,cAAA,OACEmB,IAAG,yBAAAhB,OAA2B0H,EAAKiB,OACnC1H,IAAI,mBACJlB,UAAWrB,EAAQqH,gBAM3BnG,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,GAAI4I,GAAI,GACrBxN,IAAAC,cAACe,IAAG,CAACb,UAAWrB,EAAQb,cACtB+B,IAAAC,cAACgB,IAAU,CAACC,QAAQ,YAAYC,cAAY,GAAC,qBAI5C2G,EAAKqB,YAAcrB,EAAKqB,WAAWmC,OAAS,EAC3CtL,IAAAC,cAACwN,IAAI,KACF3F,EAAKqB,WAAW1E,IAAI,CAACiJ,EAAWnC,IAC/BvL,IAAAC,cAACD,IAAM+J,SAAQ,CAACjF,IAAKyG,GACnBvL,IAAAC,cAAC0N,IAAQ,KACP3N,IAAAC,cAAC2N,IAAY,CACXnH,QACEzG,IAAAC,cAACe,IAAG,CAACD,MAAO,CAAE/D,QAAS,OAAQE,WAAY,WACzC8C,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQe,UAAU,QACnCyL,EAAU9B,OAEb5L,IAAAC,cAACiF,IAAI,CACHC,MAAK,GAAA/E,QAA6B,IAAvBsN,EAAU5B,YAAkBD,QAAQ,GAAE,KACjDzG,KAAK,QACLpH,MAAM,UACN+C,MAAO,CAAEqK,WAAY,MAI3ByC,UAAS,kBAAAzN,OAAoBsN,EAAUI,KAAKC,KAAK,MAAK,QAGzDxC,EAAQzD,EAAKqB,WAAWmC,OAAS,GAAKtL,IAAAC,cAAC8M,IAAO,SAKrD/M,IAAAC,cAACgB,IAAU,CAACC,QAAQ,SAAQ,0BAMnC2L,IAGD7M,IAAAC,cAAC0G,GAAe,CAACC,QAASA,KAMlB,QAAVlC,EAEA1E,IAAAC,cAACC,IAAK,CAACC,UAAWrB,EAAQjC,OACxBmD,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GAAC,8BAItCnB,IAAAC,cAACgB,IAAU,CAACC,QAAQ,YAAYC,cAAY,GAAC,oBAI5C2G,EAAKkG,iBAAmBlG,EAAKkG,gBAAgB1C,OAAS,EACrDtL,IAAAC,cAACwN,IAAI,KACF3F,EAAKkG,gBAAgBvJ,IAAI,CAACwJ,EAAY1C,IACrCvL,IAAAC,cAACD,IAAM+J,SAAQ,CAACjF,IAAKyG,GACnBvL,IAAAC,cAAC0N,IAAQ,KACP3N,IAAAC,cAAC2N,IAAY,CACXnH,QACEzG,IAAAC,cAACe,IAAG,CAACD,MAAO,CAAE/D,QAAS,OAAQE,WAAY,WACzC8C,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQe,UAAU,QACnCgM,EAAWC,KAAK,KAAGD,EAAWrC,OAEjC5L,IAAAC,cAACiF,IAAI,CACHC,MAAK,GAAA/E,QAA+B,IAAzB6N,EAAWE,aAAmBtC,QAAQ,GAAE,KACnDzG,KAAK,QACLpH,MAAiB,IAAVuN,EAAc,UAAY,UACjCxK,MAAO,CAAEqK,WAAY,SAM9BG,EAAQzD,EAAKkG,gBAAgB1C,OAAS,GAAKtL,IAAAC,cAAC8M,IAAO,SAK1D/M,IAAAC,cAACgB,IAAU,CAACC,QAAQ,SAAQ,gCAG7B2L,IAGD7M,IAAAC,cAAC0G,GAAe,CAACC,QAASA,KAKzB,MCtLT,MAAMlK,GAAYC,YAAYC,IAAK,CACjCC,MAAO,CACLC,QAASF,EAAMG,QAAQ,GACvBoB,UAAWvB,EAAMG,QAAQ,IAE3BkB,aAAc,CACZA,aAAcrB,EAAMG,QAAQ,IAE9ByP,cAAe,CACblG,OAAO,GAADlG,OAAKxD,EAAMG,QAAQ,GAAE,SAE7BqR,YAAa,CACXtR,QAASF,EAAMG,QAAQ,GACvBS,gBAAiB,UACjBkI,aAAc9I,EAAM+I,MAAMD,aAC1BvH,UAAWvB,EAAMG,QAAQ,GACzBsR,WAAY,YAEdC,eAAgB,CACdlD,WAAYxO,EAAMG,QAAQ,OA4HfwR,OAxHK3P,IAA+B,IAA9B,cAAE4P,EAAa,MAAE9J,GAAO9F,EAC3C,MAAME,EAAUpC,MACT+R,EAAWC,GAAgBzP,mBAAS,KACpC0P,EA
AaC,GAAkB3P,oBAAS,IACxC4P,EAAgBC,GAAqB7P,mBAAS,OAC9CyK,EAAOqF,GAAY9P,mBAAS,MA+CnC,OAAKuP,EAGHxO,IAAAC,cAACC,IAAK,CAACC,UAAWrB,EAAQjC,OACxBmD,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GAAC,oBACR,QAAVuD,EAAkB,iBAAmB,YAAY,YAGrE1E,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQf,UAAWrB,EAAQb,cAAc,+FAI7D+B,IAAAC,cAACiL,IAAS,CACR3F,WAAS,EACTJ,MAAM,gCACNjE,QAAQ,WACRS,MAAO8M,EACP5N,SAAWvB,GAAMoP,EAAapP,EAAEwB,OAAOa,OACvC2D,SAAUqJ,EACVxO,UAAWrB,EAAQb,aACnBkN,YAAuB,QAAVzG,EACT,iDACA,6CAGN1E,IAAAC,cAAC+B,IAAM,CACLd,QAAQ,YACRlD,MAAM,UACNwD,QAjEgB+G,UACpB,GAAKkG,EAAUvC,OAAf,CAEA0C,GAAe,GACfG,EAAS,MAET,IACE,MAAMvG,QAAiBC,MAAM,eAAgB,CAC3CC,OAAQ,OACRC,QAAS,CACP,eAAgB,oBAElBC,KAAMC,KAAKC,UAAU,CACnB0F,cAAeA,EACfC,UAAWA,MAIf,IAAKjG,EAASa,GACZ,MAAM,IAAIC,MAAM,uBAADlJ,OAAwBoI,EAASe,SAGlD,MAAMzB,QAAaU,EAASiB,OAExB3B,EAAK4B,MACPqF,EAASjH,EAAK4B,OAEdoF,EAAkBhH,GAEpB,MAAO8B,GACPC,QAAQH,MAAM,4BAA6BE,GAC3CmF,EAAS,6BAAD3O,OAA8BwJ,EAAIE,UAC3C,QACC8E,GAAe,MAiCbtJ,SAAUqJ,IAAgBF,EAAUvC,QACrC,kBAEEyC,GAAe3O,IAAAC,cAAC+J,IAAgB,CAAC5E,KAAM,GAAIjF,UAAWrB,EAAQwP,kBAGhE5E,GACC1J,IAAAC,cAACe,IAAG,CAACgO,GAAI,GACPhP,IAAAC,cAACgB,IAAU,CAACjD,MAAM,SAAS0L,IAI9BmF,GACC7O,IAAAC,cAAAD,IAAA+J,SAAA,KACE/J,IAAAC,cAAC8M,IAAO,CAAC5M,UAAWrB,EAAQ0N,gBAE5BxM,IAAAC,cAACgB,IAAU,CAACC,QAAQ,YAAYC,cAAY,GAAC,gBAI7CnB,IAAAC,cAACe,IAAG,CAACb,UAAWrB,EAAQsP,aACtBpO,IAAAC,cAACgB,IAAU,CAACC,QAAQ,SACjB2N,EAAerG,WAInBqG,EAAe/B,aACd9M,IAAAC,cAACe,IAAG,CAACgO,GAAI,GACPhP,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQlD,MAAM,iBAAgB,kBArG1CgP,KAClB,QAAWC,IAAPD,GAA2B,OAAPA,GAAeE,MAAMF,GAAK,MAAO,IACzD,MAAMG,EAAMC,OAAOJ,GACnB,OAAIG,EAAM,IAAY,GAAN/M,OAAU+M,EAAItB,QAAQ,GAAE,OAClC,GAANzL,QAAW+M,EAAM,KAAMtB,QAAQ,GAAE,OAkGLwB,CAAWwB,EAAe/B,YAAYQ,gBAAgB,OAAKuB,EAAe/B,YAAYS,WA1DzF,M,OCnE7B,MAAM3Q,GAAQqS,YAAe,CAC3BzI,QAAS,CACPC,QAAS,CACPC,KAAM,WAERmH,UAAW,CACTnH,KAAM,YAGVwI,WAAY,CACVC,WAAY,+BA0KDC,OAtKf,WACE,MAAOC,EAAeC,GAAoBrQ,mBAAS,OAC5CmE,EAAemM,GAAoBtQ,mBAAS,KAC5CiE,EAAcsM,GAAmBvQ,oBAAS,IAC1C2H,EAAS6I,GAAcxQ,mBAAS,OAChCyK,EAAOqF,GAAY9P,mBAAS,OAC5BkE,EAAcuM,GAAmBzQ,mBAAS,CAC/C4E,MAAM,EACNC,MAAM,EACNE,KAAK,IA8EP,OA1EA2L,oBAAU,KACRlH,MAAM,eACHmH,KAAKpH,GAAYA,EAASiB,QAC1BmG,KAAK9H,IACJ4H,EAAgB5H,EAAKxE,UAEtBuM,MAAMjG,IACLC,QAAQH,MAAM,6BAA8BE,GAC5CmF,EAAS,mFAEZ,IAiED/O,IAAAC,cAAC6P,IAAa,CAAClT,MAAOA,IACpBoD,IAAAC,cAACe,IAAG,CAACD,MAAO,CAAEyB,SAAU,IACtBxC,IAAAC,cAAC8P,IAAM,CAAC1R,SAAS,UACf2B,IAAAC,cAAC+P,IAAO,KACNhQ,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKH,MAAO,CAAEyB,SAAU,IAAK,uCAKrDxC,IAAAC,cAACgQ,IAAS,CAACxF,SAAS,KAAK1J,MAAO,CAAE5C,UAAWvB,GAAMG,QAAQ,GAAIkB,aAAcrB,GAAMG,QAAQ,KACzFiD,IAAAC,cAACsE,IAAI,CAACC,WAAS,EAACzH,QAAS,GACvBiD,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,IACb5E,IAAAC,cAACC,IAAK,CAACa,MAAO,CAAEjE,QAASF,GAAMG,QAAQ,KACrCiD,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKC,cAAY,GAAC,mDAGtCnB,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQgP,WAAS,GAAC,yFAGtClQ,IAAAC,cAACgB,IAAU,CAACC,QAAQ,QAAQe,UAAU,OACpCjC,IAAAC,cAAA,UACED,IAAAC,cAAA,UAAID,IAAAC,cAAA,cAAQ,UAAe,wCAC3BD,IAAAC,cAAA,UAAID,IAAAC,cAAA,cAAQ,QAAa,gDACzBD,IAAAC,cAAA,UAAID,IAAAC,cAAA,cAAQ,OAAY,qDAMhCD,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,GAAI4I,GAAI,GACrBxN,IAAAC,cAACtB,EAAa,CAACE,cA7FAkK,IACzBuG,EAAiBvG,GACjB0G,EAAW,MACXV,EAAS,UA6FD/O,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,GAAI4I,GAAI,GACrBxN,IAAAC,cAAC8C,EAAa,CACZC,cA5Fa0B,IACzB6K,EAAiB7K,GACjB+K,EAAW,MACXV,EAAS,OA0FG9L,UAvFOsF,UACnB,IAAK8G,IAAkBjM,EAErB,YADA2L,EAAS,2CAIXS,GAAgB,GAChBT,EAAS,MAGT,MAAMoB,EAAW,IAAIC,SACrBD,EAASE,OAAO,QAAShB,GAEzB,IAAIiB,EAAW,GACf,OAAQlN,GACN,IAAK,OACHkN,EAAW,mBACX,MACF,IAAK,OACHA,EAAW,mBACX,MACF,IAAK,MACHA,EAAW,oBACX,MACF,QAGE,OAFAvB,EAAS,gCACTS,GAAgB,GAIpB,IACE,MAAMhH,QAAiBC,MAA
M6H,EAAU,CACrC5H,OAAQ,OACRE,KAAMuH,IAGR,IAAK3H,EAASa,GACZ,MAAM,IAAIC,MAAM,uBAADlJ,OAAwBoI,EAASe,SAGlD,MAAMzB,QAAaU,EAASiB,OAC5BgG,EAAW,CAAE/K,MAAOtB,EAAe0E,SACnC,MAAO8B,GACPC,QAAQH,MAAM,0BAA2BE,GACzCmF,EAAS,2BAAD3O,OAA4BwJ,EAAIE,UACzC,QACC0F,GAAgB,KA0CNtM,aAAcA,EACdC,aAAcA,EACdC,cAAeA,EACfC,gBAAiBgM,KAIpB3F,GACC1J,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,IACb5E,IAAAC,cAACC,IAAK,CAACa,MAAO,CAAEjE,QAASF,GAAMG,QAAQ,GAAIS,gBAAiB,YAC1DwC,IAAAC,cAACgB,IAAU,CAACjD,MAAM,SAAS0L,KAKhCxG,GACClD,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,GAAI7D,MAAO,CAAE+B,UAAW,SAAUwD,OAAO,GAADlG,OAAKxD,GAAMG,QAAQ,GAAE,UAC1EiD,IAAAC,cAAC+J,IAAgB,MACjBhK,IAAAC,cAACgB,IAAU,CAACC,QAAQ,KAAKH,MAAO,CAAE5C,UAAWvB,GAAMG,QAAQ,KAAM,wBAMpE6J,GACC5G,IAAAC,cAAAD,IAAA+J,SAAA,KACE/J,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,IACb5E,IAAAC,cAAC0M,GAAa,CAAC/F,QAASA,KAE1B5G,IAAAC,cAACsE,IAAI,CAACI,MAAI,EAACC,GAAI,IACb5E,IAAAC,cAACsO,GAAW,CAACC,cAAe5H,EAAQkB,KAAMpD,MAAOkC,EAAQlC,eCjL5D6L,OAZUC,IACnBA,GAAeA,aAAuBC,UACxC,8BAAqBb,KAAKhR,IAAkD,IAAjD,OAAE8R,EAAM,OAAEC,EAAM,OAAEC,EAAM,OAAEC,EAAM,QAAEC,GAASlS,EACpE8R,EAAOF,GACPG,EAAOH,GACPI,EAAOJ,GACPK,EAAOL,GACPM,EAAQN,MCDdO,IAASC,OACPhR,IAAAC,cAACD,IAAMiR,WAAU,KACfjR,IAAAC,cAACmP,GAAG,OAEN8B,SAASC,eAAe,SAM1BZ,M","file":"static/js/main.ad7f086c.chunk.js","sourcesContent":["import React, { useState, useRef } from 'react';\nimport { \n Paper, \n Typography, \n Box, \n Button, \n IconButton \n} from '@material-ui/core';\nimport CloudUploadIcon from '@material-ui/icons/CloudUpload';\nimport DeleteIcon from '@material-ui/icons/Delete';\nimport { makeStyles } from '@material-ui/core/styles';\n\nconst useStyles = makeStyles((theme) => ({\n paper: {\n padding: theme.spacing(2),\n display: 'flex',\n flexDirection: 'column',\n alignItems: 'center',\n height: '100%',\n minHeight: 300,\n transition: 'all 0.3s ease'\n },\n dragActive: {\n border: '2px dashed #3f51b5',\n backgroundColor: 'rgba(63, 81, 181, 0.05)'\n },\n dragInactive: {\n border: '2px dashed #ccc',\n backgroundColor: 'white'\n },\n uploadBox: {\n display: 'flex',\n flexDirection: 'column',\n alignItems: 'center',\n justifyContent: 'center',\n height: '100%',\n width: '100%',\n cursor: 'pointer'\n },\n uploadIcon: {\n fontSize: 60,\n color: '#3f51b5',\n marginBottom: theme.spacing(2)\n },\n supportText: {\n marginTop: theme.spacing(2)\n },\n previewBox: {\n display: 'flex',\n flexDirection: 'column',\n alignItems: 'center',\n width: '100%',\n height: '100%',\n position: 'relative'\n },\n imageContainer: {\n position: 'relative',\n width: '100%',\n height: '100%',\n display: 'flex',\n justifyContent: 'center',\n alignItems: 'center',\n overflow: 'hidden',\n marginTop: theme.spacing(2)\n },\n deleteButton: {\n position: 'absolute',\n top: 0,\n right: 0,\n backgroundColor: 'rgba(255, 255, 255, 0.7)',\n '&:hover': {\n backgroundColor: 'rgba(255, 255, 255, 0.9)',\n }\n }\n}));\n\nconst ImageUploader = ({ onImageUpload }) => {\n const classes = useStyles();\n const [previewUrl, setPreviewUrl] = useState(null);\n const [dragActive, setDragActive] = useState(false);\n const fileInputRef = useRef(null);\n\n const handleDrag = (e) => {\n e.preventDefault();\n e.stopPropagation();\n if (e.type === 'dragenter' || e.type === 'dragover') {\n setDragActive(true);\n } else if (e.type === 'dragleave') {\n setDragActive(false);\n }\n };\n\n const handleDrop = (e) => {\n e.preventDefault();\n e.stopPropagation();\n setDragActive(false);\n if (e.dataTransfer.files && e.dataTransfer.files[0]) {\n handleFiles(e.dataTransfer.files[0]);\n }\n };\n\n const handleChange = 
(e) => {\n e.preventDefault();\n if (e.target.files && e.target.files[0]) {\n handleFiles(e.target.files[0]);\n }\n };\n\n const handleFiles = (file) => {\n if (file.type.startsWith('image/')) {\n setPreviewUrl(URL.createObjectURL(file));\n onImageUpload(file);\n } else {\n alert('Please upload an image file');\n }\n };\n\n const onButtonClick = () => {\n fileInputRef.current.click();\n };\n\n const handleRemoveImage = () => {\n setPreviewUrl(null);\n onImageUpload(null);\n fileInputRef.current.value = \"\";\n };\n\n return (\n <Paper \n className={`${classes.paper} ${dragActive ? classes.dragActive : classes.dragInactive}`}\n onDragEnter={handleDrag}\n onDragLeave={handleDrag}\n onDragOver={handleDrag}\n onDrop={handleDrop}\n >\n <input\n ref={fileInputRef}\n type=\"file\"\n accept=\"image/*\"\n onChange={handleChange}\n style={{ display: 'none' }}\n />\n\n {!previewUrl ? (\n <Box \n className={classes.uploadBox}\n onClick={onButtonClick}\n >\n <CloudUploadIcon className={classes.uploadIcon} />\n <Typography variant=\"h6\" gutterBottom>\n Drag & Drop an image here\n </Typography>\n <Typography variant=\"body2\" color=\"textSecondary\" gutterBottom>\n or\n </Typography>\n <Button\n variant=\"contained\"\n color=\"primary\"\n component=\"span\"\n startIcon={<CloudUploadIcon />}\n >\n Browse Files\n </Button>\n <Typography variant=\"body2\" color=\"textSecondary\" className={classes.supportText}>\n Supported formats: JPG, PNG, GIF\n </Typography>\n </Box>\n ) : (\n <Box className={classes.previewBox}>\n <Typography variant=\"h6\" gutterBottom>\n Preview\n </Typography>\n <Box className={classes.imageContainer}>\n <img\n src={previewUrl}\n alt=\"Preview\"\n className=\"preview-image\"\n />\n <IconButton\n aria-label=\"delete\"\n className={classes.deleteButton}\n onClick={handleRemoveImage}\n >\n <DeleteIcon />\n </IconButton>\n </Box>\n </Box>\n )}\n </Paper>\n );\n};\n\nexport default ImageUploader;\n","import React from 'react';\nimport { \n Grid, \n Card, \n CardContent, \n CardActions, \n Typography, \n Button, \n Chip,\n Box\n} from '@material-ui/core';\nimport VisibilityIcon from '@material-ui/icons/Visibility';\nimport CategoryIcon from '@material-ui/icons/Category';\nimport PlayArrowIcon from '@material-ui/icons/PlayArrow';\nimport { makeStyles } from '@material-ui/core/styles';\n\nconst useStyles = makeStyles((theme) => ({\n card: {\n height: '100%',\n display: 'flex',\n flexDirection: 'column',\n },\n selectedCard: {\n border: '2px solid #3f51b5',\n },\n unavailableCard: {\n opacity: 0.6,\n },\n cardContent: {\n flexGrow: 1,\n },\n chipContainer: {\n marginBottom: theme.spacing(1.5),\n },\n successChip: {\n backgroundColor: '#34C759',\n color: '#fff',\n },\n errorChip: {\n backgroundColor: '#FF3B3F',\n color: '#fff',\n },\n modelType: {\n marginTop: theme.spacing(1),\n },\n processButton: {\n marginTop: theme.spacing(3),\n textAlign: 'center',\n }\n}));\n\nconst ModelSelector = ({ \n onModelSelect, \n onProcess, \n isProcessing, \n modelsStatus, \n selectedModel,\n imageSelected \n}) => {\n const classes = useStyles();\n \n const models = [\n {\n id: 'yolo',\n name: 'YOLOv8',\n description: 'Fast and accurate object detection',\n icon: <VisibilityIcon />,\n available: modelsStatus.yolo\n },\n {\n id: 'detr',\n name: 'DETR',\n description: 'DEtection TRansformer for object detection',\n icon: <VisibilityIcon />,\n available: modelsStatus.detr\n },\n {\n id: 'vit',\n name: 'ViT',\n description: 'Vision Transformer for image classification',\n icon: <CategoryIcon />,\n available: 
modelsStatus.vit\n }\n ];\n\n const handleModelClick = (modelId) => {\n if (models.find(m => m.id === modelId).available) {\n onModelSelect(modelId);\n }\n };\n\n return (\n <Box sx={{ p: 2, height: '100%' }}>\n <Typography variant=\"h6\" gutterBottom>\n Select Model\n </Typography>\n \n <Grid container spacing={2}>\n {models.map((model) => (\n <Grid item xs={12} sm={4} key={model.id}>\n <Card \n className={`\n ${classes.card} \n ${selectedModel === model.id ? classes.selectedCard : ''} \n ${!model.available ? classes.unavailableCard : ''}\n `}\n onClick={() => handleModelClick(model.id)}\n >\n <CardContent className={classes.cardContent}>\n <Box sx={{ mb: 2, color: 'primary' }}>\n {model.icon}\n </Box>\n <Typography variant=\"h5\" component=\"div\" gutterBottom>\n {model.name}\n </Typography>\n <div className={classes.chipContainer}>\n {model.available ? (\n <Chip \n label=\"Available\" \n className={classes.successChip}\n size=\"small\" \n />\n ) : (\n <Chip \n label=\"Not Available\" \n className={classes.errorChip}\n size=\"small\" \n />\n )}\n </div>\n <Typography variant=\"body2\" color=\"textSecondary\">\n {model.description}\n </Typography>\n </CardContent>\n <CardActions>\n <Button \n size=\"small\" \n onClick={() => handleModelClick(model.id)}\n disabled={!model.available}\n color={selectedModel === model.id ? \"primary\" : \"default\"}\n variant={selectedModel === model.id ? \"contained\" : \"outlined\"}\n fullWidth\n >\n {selectedModel === model.id ? 'Selected' : 'Select'}\n </Button>\n </CardActions>\n </Card>\n </Grid>\n ))}\n </Grid>\n\n <div className={classes.processButton}>\n <Button\n variant=\"contained\"\n color=\"primary\"\n size=\"large\"\n startIcon={<PlayArrowIcon />}\n onClick={onProcess}\n disabled={!selectedModel || !imageSelected || isProcessing}\n >\n {isProcessing ? 'Processing...' 
: 'Process Image'}\n </Button>\n </div>\n </Box>\n );\n};\n\nexport default ModelSelector;\n","import React, { useState } from 'react';\nimport { \n Button, \n Box, \n Typography, \n CircularProgress, \n Snackbar,\n Dialog,\n DialogTitle,\n DialogContent,\n DialogActions,\n TextField,\n FormControl,\n InputLabel,\n Select,\n MenuItem,\n Grid,\n Card,\n CardMedia,\n CardContent,\n Chip\n} from '@material-ui/core';\nimport { Alert } from '@material-ui/lab';\nimport { makeStyles } from '@material-ui/core/styles';\n\nconst useStyles = makeStyles((theme) => ({\n root: {\n marginTop: theme.spacing(2),\n marginBottom: theme.spacing(2),\n padding: theme.spacing(2),\n backgroundColor: '#f5f5f5',\n borderRadius: theme.shape.borderRadius,\n },\n button: {\n marginRight: theme.spacing(2),\n },\n searchDialog: {\n minWidth: '500px',\n },\n formControl: {\n marginBottom: theme.spacing(2),\n minWidth: '100%',\n },\n searchResults: {\n marginTop: theme.spacing(2),\n },\n resultCard: {\n marginBottom: theme.spacing(2),\n },\n resultImage: {\n height: 140,\n objectFit: 'contain',\n },\n chip: {\n margin: theme.spacing(0.5),\n },\n similarityChip: {\n backgroundColor: theme.palette.primary.main,\n color: 'white',\n }\n}));\n\nconst VectorDBActions = ({ results }) => {\n const classes = useStyles();\n const [isSaving, setIsSaving] = useState(false);\n const [saveSuccess, setSaveSuccess] = useState(false);\n const [saveError, setSaveError] = useState(null);\n const [openSearchDialog, setOpenSearchDialog] = useState(false);\n const [searchType, setSearchType] = useState('image');\n const [searchClass, setSearchClass] = useState('');\n const [searchResults, setSearchResults] = useState([]);\n const [isSearching, setIsSearching] = useState(false);\n const [searchError, setSearchError] = useState(null);\n \n // Extract model and data from results\n const { model, data } = results;\n \n // Handle saving to vector DB\n const handleSaveToVectorDB = async () => {\n setIsSaving(true);\n setSaveError(null);\n \n try {\n let response;\n \n if (model === 'vit') {\n // For ViT, save the whole image with classifications\n response = await fetch('/api/add-to-collection', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify({\n image: data.image,\n metadata: {\n model: 'vit',\n classifications: data.classifications\n }\n })\n });\n } else {\n // For YOLO and DETR, save detected objects\n response = await fetch('/api/add-detected-objects', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify({\n image: data.image,\n objects: data.detections,\n imageId: generateUUID()\n })\n });\n }\n \n if (!response.ok) {\n throw new Error(`HTTP error! 
Status: ${response.status}`);\n }\n \n const result = await response.json();\n \n if (result.error) {\n throw new Error(result.error);\n }\n \n setSaveSuccess(true);\n setTimeout(() => setSaveSuccess(false), 5000);\n } catch (err) {\n console.error('Error saving to vector DB:', err);\n setSaveError(`Error saving to vector DB: ${err.message}`);\n } finally {\n setIsSaving(false);\n }\n };\n \n // Handle opening search dialog\n const handleOpenSearchDialog = () => {\n setOpenSearchDialog(true);\n setSearchResults([]);\n setSearchError(null);\n };\n \n // Handle closing search dialog\n const handleCloseSearchDialog = () => {\n setOpenSearchDialog(false);\n };\n \n // Handle search type change\n const handleSearchTypeChange = (event) => {\n setSearchType(event.target.value);\n setSearchResults([]);\n setSearchError(null);\n };\n \n // Handle search class change\n const handleSearchClassChange = (event) => {\n setSearchClass(event.target.value);\n };\n \n // Handle search\n const handleSearch = async () => {\n setIsSearching(true);\n setSearchError(null);\n \n try {\n let requestBody = {};\n \n if (searchType === 'image') {\n // Search by current image\n requestBody = {\n searchType: 'image',\n image: data.image,\n n_results: 5\n };\n } else {\n // Search by class name\n if (!searchClass.trim()) {\n throw new Error('Please enter a class name');\n }\n \n requestBody = {\n searchType: 'class',\n class_name: searchClass.trim(),\n n_results: 5\n };\n }\n \n const response = await fetch('/api/search-similar-objects', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify(requestBody)\n });\n \n if (!response.ok) {\n throw new Error(`HTTP error! Status: ${response.status}`);\n }\n \n const result = await response.json();\n \n if (result.error) {\n throw new Error(result.error);\n }\n \n console.log('Search API response:', result);\n \n // 백엔드는 {success, searchType, results} 구조로 응답하므로 results 배열만 추출\n if (result.success && Array.isArray(result.results)) {\n console.log('Setting search results array:', result.results);\n console.log('Results array length:', result.results.length);\n console.log('First result item:', result.results[0]);\n setSearchResults(result.results);\n } else {\n console.error('Unexpected API response format:', result);\n throw new Error('Unexpected API response format');\n }\n } catch (err) {\n console.error('Error searching vector DB:', err);\n setSearchError(`Error searching vector DB: ${err.message}`);\n } finally {\n setIsSearching(false);\n }\n };\n \n // Generate UUID for image ID\n const generateUUID = () => {\n return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {\n const r = Math.random() * 16 | 0;\n const v = c === 'x' ? r : (r & 0x3 | 0x8);\n return v.toString(16);\n });\n };\n \n // Render search results\n const renderSearchResults = () => {\n console.log('Rendering search results:', searchResults);\n console.log('Search results length:', searchResults.length);\n \n if (searchResults.length === 0) {\n console.log('No results to render');\n return (\n <Typography variant=\"body1\">No results found.</Typography>\n );\n }\n \n return (\n <Grid container spacing={2}>\n {searchResults.map((result, index) => {\n const similarity = (1 - result.distance) * 100;\n \n return (\n <Grid item xs={12} sm={6} key={index}>\n <Card className={classes.resultCard}>\n {result.metadata && result.metadata.image_data ? 
(\n <CardMedia\n className={classes.resultImage}\n component=\"img\"\n height=\"200\"\n image={`data:image/jpeg;base64,${result.metadata.image_data}`}\n alt={result.metadata && result.metadata.class ? result.metadata.class : 'Object'}\n />\n ) : (\n <Box \n className={classes.resultImage}\n style={{ \n backgroundColor: '#f0f0f0', \n display: 'flex', \n alignItems: 'center', \n justifyContent: 'center',\n height: 200\n }}\n >\n <Typography variant=\"body2\" color=\"textSecondary\">\n {result.metadata && result.metadata.class ? result.metadata.class : 'Object'} Image\n </Typography>\n </Box>\n )}\n <CardContent>\n <Box display=\"flex\" justifyContent=\"space-between\" alignItems=\"center\" mb={1}>\n <Typography variant=\"subtitle1\">Result #{index + 1}</Typography>\n <Chip \n label={`Similarity: ${similarity.toFixed(2)}%`}\n className={classes.similarityChip}\n size=\"small\"\n />\n </Box>\n <Typography variant=\"body2\" color=\"textSecondary\">\n <strong>Class:</strong> {result.metadata.class || 'N/A'}\n </Typography>\n {result.metadata.confidence && (\n <Typography variant=\"body2\" color=\"textSecondary\">\n <strong>Confidence:</strong> {(result.metadata.confidence * 100).toFixed(2)}%\n </Typography>\n )}\n <Typography variant=\"body2\" color=\"textSecondary\">\n <strong>Object ID:</strong> {result.id}\n </Typography>\n </CardContent>\n </Card>\n </Grid>\n );\n })}\n </Grid>\n );\n };\n \n return (\n <Box className={classes.root}>\n <Typography variant=\"h6\" gutterBottom>\n Vector Database Actions\n </Typography>\n \n <Box display=\"flex\" alignItems=\"center\" mb={2}>\n <Button\n variant=\"contained\"\n color=\"primary\"\n onClick={handleSaveToVectorDB}\n disabled={isSaving}\n className={classes.button}\n >\n {isSaving ? (\n <>\n <CircularProgress size={20} color=\"inherit\" style={{ marginRight: 8 }} />\n Saving...\n </>\n ) : (\n 'Save to Vector DB'\n )}\n </Button>\n \n <Button\n variant=\"outlined\"\n color=\"primary\"\n onClick={handleOpenSearchDialog}\n className={classes.button}\n >\n Search Similar\n </Button>\n </Box>\n \n {saveError && (\n <Alert severity=\"error\" style={{ marginTop: 8 }}>\n {saveError}\n </Alert>\n )}\n \n <Snackbar open={saveSuccess} autoHideDuration={5000} onClose={() => setSaveSuccess(false)}>\n <Alert severity=\"success\">\n {model === 'vit' ? (\n 'Image and classifications successfully saved to vector DB!'\n ) : (\n 'Detected objects successfully saved to vector DB!'\n )}\n </Alert>\n </Snackbar>\n \n {/* Search Dialog */}\n <Dialog\n open={openSearchDialog}\n onClose={handleCloseSearchDialog}\n maxWidth=\"md\"\n fullWidth\n >\n <DialogTitle>Search Vector Database</DialogTitle>\n <DialogContent>\n <FormControl className={classes.formControl}>\n <InputLabel id=\"search-type-label\">Search Type</InputLabel>\n <Select\n labelId=\"search-type-label\"\n id=\"search-type\"\n value={searchType}\n onChange={handleSearchTypeChange}\n >\n <MenuItem value=\"image\">Search by Current Image</MenuItem>\n <MenuItem value=\"class\">Search by Class Name</MenuItem>\n </Select>\n </FormControl>\n \n {searchType === 'class' && (\n <FormControl className={classes.formControl}>\n <TextField\n label=\"Class Name\"\n value={searchClass}\n onChange={handleSearchClassChange}\n placeholder=\"e.g. person, car, dog...\"\n fullWidth\n />\n </FormControl>\n )}\n \n {searchError && (\n <Alert severity=\"error\" style={{ marginBottom: 16 }}>\n {searchError}\n </Alert>\n )}\n \n <Box className={classes.searchResults}>\n {isSearching ? 
(\n <Box display=\"flex\" justifyContent=\"center\" alignItems=\"center\" p={4}>\n <CircularProgress />\n <Typography variant=\"body1\" style={{ marginLeft: 16 }}>\n Searching...\n </Typography>\n </Box>\n ) : (\n <>\n {console.log('Search dialog render - searchResults:', searchResults)}\n {searchResults.length > 0 ? renderSearchResults() : \n <Typography variant=\"body1\">No results found. Please try another search.</Typography>\n }\n </>\n )}\n </Box>\n </DialogContent>\n <DialogActions>\n <Button onClick={handleCloseSearchDialog} color=\"default\">\n Close\n </Button>\n <Button \n onClick={handleSearch} \n color=\"primary\" \n variant=\"contained\"\n disabled={isSearching || (searchType === 'class' && !searchClass.trim())}\n >\n Search\n </Button>\n </DialogActions>\n </Dialog>\n </Box>\n );\n};\n\nexport default VectorDBActions;\n","import React from 'react';\nimport { \n Paper, \n Typography, \n Box, \n List, \n ListItem, \n ListItemText, \n Divider,\n Grid,\n Chip\n} from '@material-ui/core';\nimport VectorDBActions from './VectorDBActions';\nimport { makeStyles } from '@material-ui/core/styles';\n\nconst useStyles = makeStyles((theme) => ({\n paper: {\n padding: theme.spacing(2)\n },\n marginBottom: {\n marginBottom: theme.spacing(2)\n },\n resultImage: {\n maxWidth: '100%',\n maxHeight: '400px',\n objectFit: 'contain'\n },\n dividerMargin: {\n margin: `${theme.spacing(2)}px 0`\n },\n chipContainer: {\n display: 'flex',\n gap: theme.spacing(1),\n flexWrap: 'wrap'\n }\n}));\n\nconst ResultDisplay = ({ results }) => {\n const classes = useStyles();\n if (!results) return null;\n \n const { model, data } = results;\n \n // Helper to format times nicely\n const formatTime = (ms) => {\n if (ms === undefined || ms === null || isNaN(ms)) return '-';\n const num = Number(ms);\n if (num < 1000) return `${num.toFixed(2)} ms`;\n return `${(num / 1000).toFixed(2)} s`;\n };\n \n // Check if there's an error\n if (data.error) {\n return (\n <Paper sx={{ p: 2, bgcolor: '#ffebee' }}>\n <Typography color=\"error\">{data.error}</Typography>\n </Paper>\n );\n }\n\n // Display performance info\n const renderPerformanceInfo = () => {\n if (!data.performance) return null;\n \n return (\n <Box className=\"performance-info\">\n <Divider className={classes.dividerMargin} />\n <Typography variant=\"body2\">\n Inference time: {formatTime(data.performance.inference_time)} on {data.performance.device}\n </Typography>\n </Box>\n );\n };\n\n // Render for YOLO and DETR (object detection)\n if (model === 'yolo' || model === 'detr') {\n return (\n <Paper className={classes.paper}>\n <Typography variant=\"h6\" gutterBottom>\n {model === 'yolo' ? 'YOLOv8' : 'DETR'} Detection Results\n </Typography>\n \n <Grid container spacing={3}>\n <Grid item xs={12} md={6}>\n {data.image && (\n <Box className={classes.marginBottom}>\n <Typography variant=\"subtitle1\" gutterBottom>\n Detection Result\n </Typography>\n <img \n src={`data:image/png;base64,${data.image}`} \n alt=\"Detection Result\" \n className={classes.resultImage}\n />\n </Box>\n )}\n </Grid>\n \n <Grid item xs={12} md={6}>\n <Box className={classes.marginBottom}>\n <Typography variant=\"subtitle1\" gutterBottom>\n Detected Objects:\n </Typography>\n \n {data.detections && data.detections.length > 0 ? 
(\n <List>\n {data.detections.map((detection, index) => (\n <React.Fragment key={index}>\n <ListItem>\n <ListItemText \n primary={\n <Box style={{ display: 'flex', alignItems: 'center' }}>\n <Typography variant=\"body1\" component=\"span\">\n {detection.class}\n </Typography>\n <Chip \n label={`${(detection.confidence * 100).toFixed(0)}%`}\n size=\"small\"\n color=\"primary\"\n style={{ marginLeft: 8 }}\n />\n </Box>\n } \n secondary={`Bounding Box: [${detection.bbox.join(', ')}]`} \n />\n </ListItem>\n {index < data.detections.length - 1 && <Divider />}\n </React.Fragment>\n ))}\n </List>\n ) : (\n <Typography variant=\"body1\">No objects detected</Typography>\n )}\n </Box>\n </Grid>\n </Grid>\n \n {renderPerformanceInfo()}\n \n {/* Vector DB Actions for Object Detection */}\n <VectorDBActions results={results} />\n </Paper>\n );\n }\n \n // Render for ViT (classification)\n if (model === 'vit') {\n return (\n <Paper className={classes.paper}>\n <Typography variant=\"h6\" gutterBottom>\n ViT Classification Results\n </Typography>\n \n <Typography variant=\"subtitle1\" gutterBottom>\n Top Predictions:\n </Typography>\n \n {data.top_predictions && data.top_predictions.length > 0 ? (\n <List>\n {data.top_predictions.map((prediction, index) => (\n <React.Fragment key={index}>\n <ListItem>\n <ListItemText \n primary={\n <Box style={{ display: 'flex', alignItems: 'center' }}>\n <Typography variant=\"body1\" component=\"span\">\n {prediction.rank}. {prediction.class}\n </Typography>\n <Chip \n label={`${(prediction.probability * 100).toFixed(1)}%`}\n size=\"small\"\n color={index === 0 ? \"primary\" : \"default\"}\n style={{ marginLeft: 8 }}\n />\n </Box>\n } \n />\n </ListItem>\n {index < data.top_predictions.length - 1 && <Divider />}\n </React.Fragment>\n ))}\n </List>\n ) : (\n <Typography variant=\"body1\">No classifications available</Typography>\n )}\n \n {renderPerformanceInfo()}\n \n {/* Vector DB Actions for ViT Classification */}\n <VectorDBActions results={results} />\n </Paper>\n );\n }\n \n return null;\n};\n\nexport default ResultDisplay;\n","import React, { useState } from 'react';\nimport { \n Paper, \n Typography, \n Box, \n TextField, \n Button, \n CircularProgress,\n Divider\n} from '@material-ui/core';\nimport { makeStyles } from '@material-ui/core/styles';\n\nconst useStyles = makeStyles((theme) => ({\n paper: {\n padding: theme.spacing(2),\n marginTop: theme.spacing(2)\n },\n marginBottom: {\n marginBottom: theme.spacing(2)\n },\n dividerMargin: {\n margin: `${theme.spacing(2)}px 0`\n },\n responseBox: {\n padding: theme.spacing(2),\n backgroundColor: '#f5f5f5',\n borderRadius: theme.shape.borderRadius,\n marginTop: theme.spacing(2),\n whiteSpace: 'pre-wrap'\n },\n buttonProgress: {\n marginLeft: theme.spacing(1)\n }\n}));\n\nconst LlmAnalysis = ({ visionResults, model }) => {\n const classes = useStyles();\n const [userQuery, setUserQuery] = useState('');\n const [isAnalyzing, setIsAnalyzing] = useState(false);\n const [analysisResult, setAnalysisResult] = useState(null);\n const [error, setError] = useState(null);\n\n // Format time for display\n const formatTime = (ms) => {\n if (ms === undefined || ms === null || isNaN(ms)) return '-';\n const num = Number(ms);\n if (num < 1000) return `${num.toFixed(2)} ms`;\n return `${(num / 1000).toFixed(2)} s`;\n };\n\n const handleAnalyze = async () => {\n if (!userQuery.trim()) return;\n \n setIsAnalyzing(true);\n setError(null);\n \n try {\n const response = await fetch('/api/analyze', {\n method: 'POST',\n headers: {\n 
'Content-Type': 'application/json',\n },\n body: JSON.stringify({\n visionResults: visionResults,\n userQuery: userQuery\n }),\n });\n\n if (!response.ok) {\n throw new Error(`HTTP error! Status: ${response.status}`);\n }\n\n const data = await response.json();\n \n if (data.error) {\n setError(data.error);\n } else {\n setAnalysisResult(data);\n }\n } catch (err) {\n console.error('Error analyzing with LLM:', err);\n setError(`Error analyzing with LLM: ${err.message}`);\n } finally {\n setIsAnalyzing(false);\n }\n };\n\n if (!visionResults) return null;\n\n return (\n <Paper className={classes.paper}>\n <Typography variant=\"h6\" gutterBottom>\n Ask AI about the {model === 'vit' ? 'Classification' : 'Detection'} Results\n </Typography>\n \n <Typography variant=\"body2\" className={classes.marginBottom}>\n Ask a question about the detected objects or classifications to get an AI-powered analysis.\n </Typography>\n \n <TextField\n fullWidth\n label=\"Your question about the image\"\n variant=\"outlined\"\n value={userQuery}\n onChange={(e) => setUserQuery(e.target.value)}\n disabled={isAnalyzing}\n className={classes.marginBottom}\n placeholder={model === 'vit' \n ? \"E.g., What category does this image belong to?\" \n : \"E.g., How many people are in this image?\"}\n />\n \n <Button \n variant=\"contained\" \n color=\"primary\"\n onClick={handleAnalyze}\n disabled={isAnalyzing || !userQuery.trim()}\n >\n Analyze with AI\n {isAnalyzing && <CircularProgress size={24} className={classes.buttonProgress} />}\n </Button>\n \n {error && (\n <Box mt={2}>\n <Typography color=\"error\">{error}</Typography>\n </Box>\n )}\n \n {analysisResult && (\n <>\n <Divider className={classes.dividerMargin} />\n \n <Typography variant=\"subtitle1\" gutterBottom>\n AI Analysis:\n </Typography>\n \n <Box className={classes.responseBox}>\n <Typography variant=\"body1\">\n {analysisResult.response}\n </Typography>\n </Box>\n \n {analysisResult.performance && (\n <Box mt={1}>\n <Typography variant=\"body2\" color=\"textSecondary\">\n Analysis time: {formatTime(analysisResult.performance.inference_time)} on {analysisResult.performance.device}\n </Typography>\n </Box>\n )}\n </>\n )}\n </Paper>\n );\n};\n\nexport default LlmAnalysis;\n","import React, { useState, useEffect } from 'react';\nimport { \n Container, \n Typography, \n Box, \n Paper, \n Grid, \n CircularProgress,\n AppBar,\n Toolbar,\n ThemeProvider,\n createMuiTheme\n} from '@material-ui/core';\nimport ImageUploader from './components/ImageUploader';\nimport ModelSelector from './components/ModelSelector';\nimport ResultDisplay from './components/ResultDisplay';\nimport LlmAnalysis from './components/LlmAnalysis';\nimport './App.css';\n\n// Create a theme\nconst theme = createMuiTheme({\n palette: {\n primary: {\n main: '#3f51b5',\n },\n secondary: {\n main: '#f50057',\n },\n },\n typography: {\n fontFamily: 'Roboto, Arial, sans-serif',\n },\n});\n\nfunction App() {\n const [selectedImage, setSelectedImage] = useState(null);\n const [selectedModel, setSelectedModel] = useState('');\n const [isProcessing, setIsProcessing] = useState(false);\n const [results, setResults] = useState(null);\n const [error, setError] = useState(null);\n const [modelsStatus, setModelsStatus] = useState({\n yolo: false,\n detr: false,\n vit: false\n });\n\n // Check API status on component mount\n useEffect(() => {\n fetch('/api/status')\n .then(response => response.json())\n .then(data => {\n setModelsStatus(data.models);\n })\n .catch(err => {\n console.error('Error checking API 
status:', err);\n setError('Error connecting to the backend API. Please make sure the server is running.');\n });\n }, []);\n\n const handleImageUpload = (image) => {\n setSelectedImage(image);\n setResults(null);\n setError(null);\n };\n\n const handleModelSelect = (model) => {\n setSelectedModel(model);\n setResults(null);\n setError(null);\n };\n\n const processImage = async () => {\n if (!selectedImage || !selectedModel) {\n setError('Please select both an image and a model');\n return;\n }\n\n setIsProcessing(true);\n setError(null);\n\n // Create form data for the image\n const formData = new FormData();\n formData.append('image', selectedImage);\n\n let endpoint = '';\n switch (selectedModel) {\n case 'yolo':\n endpoint = '/api/detect/yolo';\n break;\n case 'detr':\n endpoint = '/api/detect/detr';\n break;\n case 'vit':\n endpoint = '/api/classify/vit';\n break;\n default:\n setError('Invalid model selection');\n setIsProcessing(false);\n return;\n }\n\n try {\n const response = await fetch(endpoint, {\n method: 'POST',\n body: formData,\n });\n\n if (!response.ok) {\n throw new Error(`HTTP error! Status: ${response.status}`);\n }\n\n const data = await response.json();\n setResults({ model: selectedModel, data });\n } catch (err) {\n console.error('Error processing image:', err);\n setError(`Error processing image: ${err.message}`);\n } finally {\n setIsProcessing(false);\n }\n };\n\n return (\n <ThemeProvider theme={theme}>\n <Box style={{ flexGrow: 1 }}>\n <AppBar position=\"static\">\n <Toolbar>\n <Typography variant=\"h6\" style={{ flexGrow: 1 }}>\n Multi-Model Object Detection Demo\n </Typography>\n </Toolbar>\n </AppBar>\n <Container maxWidth=\"lg\" style={{ marginTop: theme.spacing(4), marginBottom: theme.spacing(4) }}>\n <Grid container spacing={3}>\n <Grid item xs={12}>\n <Paper style={{ padding: theme.spacing(2) }}>\n <Typography variant=\"h5\" gutterBottom>\n Upload an image to see how each model performs!\n </Typography>\n <Typography variant=\"body1\" paragraph>\n This demo showcases three different object detection and image classification models:\n </Typography>\n <Typography variant=\"body1\" component=\"div\">\n <ul>\n <li><strong>YOLOv8</strong>: Fast and accurate object detection</li>\n <li><strong>DETR</strong>: DEtection TRansformer for object detection</li>\n <li><strong>ViT</strong>: Vision Transformer for image classification</li>\n </ul>\n </Typography>\n </Paper>\n </Grid>\n \n <Grid item xs={12} md={6}>\n <ImageUploader onImageUpload={handleImageUpload} />\n </Grid>\n \n <Grid item xs={12} md={6}>\n <ModelSelector \n onModelSelect={handleModelSelect} \n onProcess={processImage}\n isProcessing={isProcessing}\n modelsStatus={modelsStatus}\n selectedModel={selectedModel}\n imageSelected={!!selectedImage}\n />\n </Grid>\n \n {error && (\n <Grid item xs={12}>\n <Paper style={{ padding: theme.spacing(2), backgroundColor: '#ffebee' }}>\n <Typography color=\"error\">{error}</Typography>\n </Paper>\n </Grid>\n )}\n \n {isProcessing && (\n <Grid item xs={12} style={{ textAlign: 'center', margin: `${theme.spacing(4)}px 0` }}>\n <CircularProgress />\n <Typography variant=\"h6\" style={{ marginTop: theme.spacing(2) }}>\n Processing image...\n </Typography>\n </Grid>\n )}\n \n {results && (\n <>\n <Grid item xs={12}>\n <ResultDisplay results={results} />\n </Grid>\n <Grid item xs={12}>\n <LlmAnalysis visionResults={results.data} model={results.model} />\n </Grid>\n </>\n )}\n </Grid>\n </Container>\n </Box>\n </ThemeProvider>\n );\n}\n\nexport default App;\n","const 
reportWebVitals = (onPerfEntry) => {\n if (onPerfEntry && onPerfEntry instanceof Function) {\n import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {\n getCLS(onPerfEntry);\n getFID(onPerfEntry);\n getFCP(onPerfEntry);\n getLCP(onPerfEntry);\n getTTFB(onPerfEntry);\n });\n }\n};\n\nexport default reportWebVitals;\n","import React from 'react';\nimport ReactDOM from 'react-dom';\nimport './index.css';\nimport App from './App';\nimport reportWebVitals from './reportWebVitals';\n\nReactDOM.render(\n <React.StrictMode>\n <App />\n </React.StrictMode>,\n document.getElementById('root')\n);\n\n// If you want to start measuring performance in your app, pass a function\n// to log results (for example: reportWebVitals(console.log))\n// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals\nreportWebVitals();\n"],"sourceRoot":""}
static/js/runtime-main.25710301.js
ADDED
@@ -0,0 +1,2 @@
!function(e){function r(r){for(var n,i,a=r[0],c=r[1],l=r[2],p=0,s=[];p<a.length;p++)i=a[p],Object.prototype.hasOwnProperty.call(o,i)&&o[i]&&s.push(o[i][0]),o[i]=0;for(n in c)Object.prototype.hasOwnProperty.call(c,n)&&(e[n]=c[n]);for(f&&f(r);s.length;)s.shift()();return u.push.apply(u,l||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,a=1;a<t.length;a++){var c=t[a];0!==o[c]&&(n=!1)}n&&(u.splice(r--,1),e=i(i.s=t[0]))}return e}var n={},o={1:0},u=[];function i(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return e[r].call(t.exports,t,t.exports,i),t.l=!0,t.exports}i.e=function(e){var r=[],t=o[e];if(0!==t)if(t)r.push(t[2]);else{var n=new Promise((function(r,n){t=o[e]=[r,n]}));r.push(t[2]=n);var u,a=document.createElement("script");a.charset="utf-8",a.timeout=120,i.nc&&a.setAttribute("nonce",i.nc),a.src=function(e){return i.p+"static/js/"+({}[e]||e)+"."+{3:"9013e23f"}[e]+".chunk.js"}(e);var c=new Error;u=function(r){a.onerror=a.onload=null,clearTimeout(l);var t=o[e];if(0!==t){if(t){var n=r&&("load"===r.type?"missing":r.type),u=r&&r.target&&r.target.src;c.message="Loading chunk "+e+" failed.\n("+n+": "+u+")",c.name="ChunkLoadError",c.type=n,c.request=u,t[1](c)}o[e]=void 0}};var l=setTimeout((function(){u({type:"timeout",target:a})}),12e4);a.onerror=a.onload=u,document.head.appendChild(a)}return Promise.all(r)},i.m=e,i.c=n,i.d=function(e,r,t){i.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},i.r=function(e){"undefined"!==typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},i.t=function(e,r){if(1&r&&(e=i(e)),8&r)return e;if(4&r&&"object"===typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(i.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)i.d(t,n,function(r){return e[r]}.bind(null,n));return t},i.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return i.d(r,"a",r),r},i.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},i.p="/",i.oe=function(e){throw console.error(e),e};var a=this["webpackJsonpvision-web-app"]=this["webpackJsonpvision-web-app"]||[],c=a.push.bind(a);a.push=r,a=a.slice();for(var l=0;l<a.length;l++)r(a[l]);var f=c;t()}([]);
//# sourceMappingURL=runtime-main.25710301.js.map
static/js/runtime-main.25710301.js.map
ADDED
@@ -0,0 +1 @@
{"version":3,"sources":["../webpack/bootstrap"],"names":["webpackJsonpCallback","data","moduleId","chunkId","chunkIds","moreModules","executeModules","i","resolves","length","Object","prototype","hasOwnProperty","call","installedChunks","push","modules","parentJsonpFunction","shift","deferredModules","apply","checkDeferredModules","result","deferredModule","fulfilled","j","depId","splice","__webpack_require__","s","installedModules","1","exports","module","l","e","promises","installedChunkData","promise","Promise","resolve","reject","onScriptComplete","script","document","createElement","charset","timeout","nc","setAttribute","src","p","jsonpScriptSrc","error","Error","event","onerror","onload","clearTimeout","chunk","errorType","type","realSrc","target","message","name","request","undefined","setTimeout","head","appendChild","all","m","c","d","getter","o","defineProperty","enumerable","get","r","Symbol","toStringTag","value","t","mode","__esModule","ns","create","key","bind","n","object","property","oe","err","console","jsonpArray","this","oldJsonpFunction","slice"],"mappings":"aACE,SAASA,EAAqBC,GAQ7B,IAPA,IAMIC,EAAUC,EANVC,EAAWH,EAAK,GAChBI,EAAcJ,EAAK,GACnBK,EAAiBL,EAAK,GAIHM,EAAI,EAAGC,EAAW,GACpCD,EAAIH,EAASK,OAAQF,IACzBJ,EAAUC,EAASG,GAChBG,OAAOC,UAAUC,eAAeC,KAAKC,EAAiBX,IAAYW,EAAgBX,IACpFK,EAASO,KAAKD,EAAgBX,GAAS,IAExCW,EAAgBX,GAAW,EAE5B,IAAID,KAAYG,EACZK,OAAOC,UAAUC,eAAeC,KAAKR,EAAaH,KACpDc,EAAQd,GAAYG,EAAYH,IAKlC,IAFGe,GAAqBA,EAAoBhB,GAEtCO,EAASC,QACdD,EAASU,OAATV,GAOD,OAHAW,EAAgBJ,KAAKK,MAAMD,EAAiBb,GAAkB,IAGvDe,IAER,SAASA,IAER,IADA,IAAIC,EACIf,EAAI,EAAGA,EAAIY,EAAgBV,OAAQF,IAAK,CAG/C,IAFA,IAAIgB,EAAiBJ,EAAgBZ,GACjCiB,GAAY,EACRC,EAAI,EAAGA,EAAIF,EAAed,OAAQgB,IAAK,CAC9C,IAAIC,EAAQH,EAAeE,GACG,IAA3BX,EAAgBY,KAAcF,GAAY,GAE3CA,IACFL,EAAgBQ,OAAOpB,IAAK,GAC5Be,EAASM,EAAoBA,EAAoBC,EAAIN,EAAe,KAItE,OAAOD,EAIR,IAAIQ,EAAmB,GAKnBhB,EAAkB,CACrBiB,EAAG,GAGAZ,EAAkB,GAQtB,SAASS,EAAoB1B,GAG5B,GAAG4B,EAAiB5B,GACnB,OAAO4B,EAAiB5B,GAAU8B,QAGnC,IAAIC,EAASH,EAAiB5B,GAAY,CACzCK,EAAGL,EACHgC,GAAG,EACHF,QAAS,IAUV,OANAhB,EAAQd,GAAUW,KAAKoB,EAAOD,QAASC,EAAQA,EAAOD,QAASJ,GAG/DK,EAAOC,GAAI,EAGJD,EAAOD,QAKfJ,EAAoBO,EAAI,SAAuBhC,GAC9C,IAAIiC,EAAW,GAKXC,EAAqBvB,EAAgBX,GACzC,GAA0B,IAAvBkC,EAGF,GAAGA,EACFD,EAASrB,KAAKsB,EAAmB,QAC3B,CAEN,IAAIC,EAAU,IAAIC,SAAQ,SAASC,EAASC,GAC3CJ,EAAqBvB,EAAgBX,GAAW,CAACqC,EAASC,MAE3DL,EAASrB,KAAKsB,EAAmB,GAAKC,GAGtC,IACII,EADAC,EAASC,SAASC,cAAc,UAGpCF,EAAOG,QAAU,QACjBH,EAAOI,QAAU,IACbnB,EAAoBoB,IACvBL,EAAOM,aAAa,QAASrB,EAAoBoB,IAElDL,EAAOO,IA1DV,SAAwB/C,GACvB,OAAOyB,EAAoBuB,EAAI,cAAgB,GAAGhD,IAAUA,GAAW,IAAM,CAAC,EAAI,YAAYA,GAAW,YAyD1FiD,CAAejD,GAG5B,IAAIkD,EAAQ,IAAIC,MAChBZ,EAAmB,SAAUa,GAE5BZ,EAAOa,QAAUb,EAAOc,OAAS,KACjCC,aAAaX,GACb,IAAIY,EAAQ7C,EAAgBX,GAC5B,GAAa,IAAVwD,EAAa,CACf,GAAGA,EAAO,CACT,IAAIC,EAAYL,IAAyB,SAAfA,EAAMM,KAAkB,UAAYN,EAAMM,MAChEC,EAAUP,GAASA,EAAMQ,QAAUR,EAAMQ,OAAOb,IACpDG,EAAMW,QAAU,iBAAmB7D,EAAU,cAAgByD,EAAY,KAAOE,EAAU,IAC1FT,EAAMY,KAAO,iBACbZ,EAAMQ,KAAOD,EACbP,EAAMa,QAAUJ,EAChBH,EAAM,GAAGN,GAEVvC,EAAgBX,QAAWgE,IAG7B,IAAIpB,EAAUqB,YAAW,WACxB1B,EAAiB,CAAEmB,KAAM,UAAWE,OAAQpB,MAC1C,MACHA,EAAOa,QAAUb,EAAOc,OAASf,EACjCE,SAASyB,KAAKC,YAAY3B,GAG5B,OAAOJ,QAAQgC,IAAInC,IAIpBR,EAAoB4C,EAAIxD,EAGxBY,EAAoB6C,EAAI3C,EAGxBF,EAAoB8C,EAAI,SAAS1C,EAASiC,EAAMU,GAC3C/C,EAAoBgD,EAAE5C,EAASiC,IAClCvD,OAAOmE,eAAe7C,EAASiC,EAAM,CAAEa,YAAY,EAAMC,IAAKJ,KAKhE/C,EAAoBoD,EAAI,SAAShD,GACX,qBAAXiD,QAA0BA,OAAOC,aAC1CxE,OAAOmE,eAAe7C,EAASiD,OAAOC,YAAa,CAAEC,MAAO,WAE7DzE,OAAOmE,eAAe7C,EAAS,aAAc,CAAEmD,OAAO,KAQvDvD,EAAoBwD,EAAI,SAASD,EAAOE,GAEvC,GADU,EAAPA,IAAUF,EAAQvD,EAAoBuD,IAC/B,EAAPE,EAAU,OA
AOF,EACpB,GAAW,EAAPE,GAA8B,kBAAVF,GAAsBA,GAASA,EAAMG,WAAY,OAAOH,EAChF,IAAII,EAAK7E,OAAO8E,OAAO,MAGvB,GAFA5D,EAAoBoD,EAAEO,GACtB7E,OAAOmE,eAAeU,EAAI,UAAW,CAAET,YAAY,EAAMK,MAAOA,IACtD,EAAPE,GAA4B,iBAATF,EAAmB,IAAI,IAAIM,KAAON,EAAOvD,EAAoB8C,EAAEa,EAAIE,EAAK,SAASA,GAAO,OAAON,EAAMM,IAAQC,KAAK,KAAMD,IAC9I,OAAOF,GAIR3D,EAAoB+D,EAAI,SAAS1D,GAChC,IAAI0C,EAAS1C,GAAUA,EAAOqD,WAC7B,WAAwB,OAAOrD,EAAgB,SAC/C,WAA8B,OAAOA,GAEtC,OADAL,EAAoB8C,EAAEC,EAAQ,IAAKA,GAC5BA,GAIR/C,EAAoBgD,EAAI,SAASgB,EAAQC,GAAY,OAAOnF,OAAOC,UAAUC,eAAeC,KAAK+E,EAAQC,IAGzGjE,EAAoBuB,EAAI,IAGxBvB,EAAoBkE,GAAK,SAASC,GAA2B,MAApBC,QAAQ3C,MAAM0C,GAAYA,GAEnE,IAAIE,EAAaC,KAAK,8BAAgCA,KAAK,+BAAiC,GACxFC,EAAmBF,EAAWlF,KAAK2E,KAAKO,GAC5CA,EAAWlF,KAAOf,EAClBiG,EAAaA,EAAWG,QACxB,IAAI,IAAI7F,EAAI,EAAGA,EAAI0F,EAAWxF,OAAQF,IAAKP,EAAqBiG,EAAW1F,IAC3E,IAAIU,EAAsBkF,EAI1B9E,I","file":"static/js/runtime-main.25710301.js","sourcesContent":[" \t// install a JSONP callback for chunk loading\n \tfunction webpackJsonpCallback(data) {\n \t\tvar chunkIds = data[0];\n \t\tvar moreModules = data[1];\n \t\tvar executeModules = data[2];\n\n \t\t// add \"moreModules\" to the modules object,\n \t\t// then flag all \"chunkIds\" as loaded and fire callback\n \t\tvar moduleId, chunkId, i = 0, resolves = [];\n \t\tfor(;i < chunkIds.length; i++) {\n \t\t\tchunkId = chunkIds[i];\n \t\t\tif(Object.prototype.hasOwnProperty.call(installedChunks, chunkId) && installedChunks[chunkId]) {\n \t\t\t\tresolves.push(installedChunks[chunkId][0]);\n \t\t\t}\n \t\t\tinstalledChunks[chunkId] = 0;\n \t\t}\n \t\tfor(moduleId in moreModules) {\n \t\t\tif(Object.prototype.hasOwnProperty.call(moreModules, moduleId)) {\n \t\t\t\tmodules[moduleId] = moreModules[moduleId];\n \t\t\t}\n \t\t}\n \t\tif(parentJsonpFunction) parentJsonpFunction(data);\n\n \t\twhile(resolves.length) {\n \t\t\tresolves.shift()();\n \t\t}\n\n \t\t// add entry modules from loaded chunk to deferred list\n \t\tdeferredModules.push.apply(deferredModules, executeModules || []);\n\n \t\t// run deferred modules when all chunks ready\n \t\treturn checkDeferredModules();\n \t};\n \tfunction checkDeferredModules() {\n \t\tvar result;\n \t\tfor(var i = 0; i < deferredModules.length; i++) {\n \t\t\tvar deferredModule = deferredModules[i];\n \t\t\tvar fulfilled = true;\n \t\t\tfor(var j = 1; j < deferredModule.length; j++) {\n \t\t\t\tvar depId = deferredModule[j];\n \t\t\t\tif(installedChunks[depId] !== 0) fulfilled = false;\n \t\t\t}\n \t\t\tif(fulfilled) {\n \t\t\t\tdeferredModules.splice(i--, 1);\n \t\t\t\tresult = __webpack_require__(__webpack_require__.s = deferredModule[0]);\n \t\t\t}\n \t\t}\n\n \t\treturn result;\n \t}\n\n \t// The module cache\n \tvar installedModules = {};\n\n \t// object to store loaded and loading chunks\n \t// undefined = chunk not loaded, null = chunk preloaded/prefetched\n \t// Promise = chunk loading, 0 = chunk loaded\n \tvar installedChunks = {\n \t\t1: 0\n \t};\n\n \tvar deferredModules = [];\n\n \t// script path function\n \tfunction jsonpScriptSrc(chunkId) {\n \t\treturn __webpack_require__.p + \"static/js/\" + ({}[chunkId]||chunkId) + \".\" + {\"3\":\"9013e23f\"}[chunkId] + \".chunk.js\"\n \t}\n\n \t// The require function\n \tfunction __webpack_require__(moduleId) {\n\n \t\t// Check if module is in cache\n \t\tif(installedModules[moduleId]) {\n \t\t\treturn installedModules[moduleId].exports;\n \t\t}\n \t\t// Create a new module (and put it into the cache)\n \t\tvar module = installedModules[moduleId] = {\n \t\t\ti: moduleId,\n \t\t\tl: false,\n \t\t\texports: {}\n \t\t};\n\n \t\t// 
Execute the module function\n \t\tmodules[moduleId].call(module.exports, module, module.exports, __webpack_require__);\n\n \t\t// Flag the module as loaded\n \t\tmodule.l = true;\n\n \t\t// Return the exports of the module\n \t\treturn module.exports;\n \t}\n\n \t// This file contains only the entry chunk.\n \t// The chunk loading function for additional chunks\n \t__webpack_require__.e = function requireEnsure(chunkId) {\n \t\tvar promises = [];\n\n\n \t\t// JSONP chunk loading for javascript\n\n \t\tvar installedChunkData = installedChunks[chunkId];\n \t\tif(installedChunkData !== 0) { // 0 means \"already installed\".\n\n \t\t\t// a Promise means \"currently loading\".\n \t\t\tif(installedChunkData) {\n \t\t\t\tpromises.push(installedChunkData[2]);\n \t\t\t} else {\n \t\t\t\t// setup Promise in chunk cache\n \t\t\t\tvar promise = new Promise(function(resolve, reject) {\n \t\t\t\t\tinstalledChunkData = installedChunks[chunkId] = [resolve, reject];\n \t\t\t\t});\n \t\t\t\tpromises.push(installedChunkData[2] = promise);\n\n \t\t\t\t// start chunk loading\n \t\t\t\tvar script = document.createElement('script');\n \t\t\t\tvar onScriptComplete;\n\n \t\t\t\tscript.charset = 'utf-8';\n \t\t\t\tscript.timeout = 120;\n \t\t\t\tif (__webpack_require__.nc) {\n \t\t\t\t\tscript.setAttribute(\"nonce\", __webpack_require__.nc);\n \t\t\t\t}\n \t\t\t\tscript.src = jsonpScriptSrc(chunkId);\n\n \t\t\t\t// create error before stack unwound to get useful stacktrace later\n \t\t\t\tvar error = new Error();\n \t\t\t\tonScriptComplete = function (event) {\n \t\t\t\t\t// avoid mem leaks in IE.\n \t\t\t\t\tscript.onerror = script.onload = null;\n \t\t\t\t\tclearTimeout(timeout);\n \t\t\t\t\tvar chunk = installedChunks[chunkId];\n \t\t\t\t\tif(chunk !== 0) {\n \t\t\t\t\t\tif(chunk) {\n \t\t\t\t\t\t\tvar errorType = event && (event.type === 'load' ? 
'missing' : event.type);\n \t\t\t\t\t\t\tvar realSrc = event && event.target && event.target.src;\n \t\t\t\t\t\t\terror.message = 'Loading chunk ' + chunkId + ' failed.\\n(' + errorType + ': ' + realSrc + ')';\n \t\t\t\t\t\t\terror.name = 'ChunkLoadError';\n \t\t\t\t\t\t\terror.type = errorType;\n \t\t\t\t\t\t\terror.request = realSrc;\n \t\t\t\t\t\t\tchunk[1](error);\n \t\t\t\t\t\t}\n \t\t\t\t\t\tinstalledChunks[chunkId] = undefined;\n \t\t\t\t\t}\n \t\t\t\t};\n \t\t\t\tvar timeout = setTimeout(function(){\n \t\t\t\t\tonScriptComplete({ type: 'timeout', target: script });\n \t\t\t\t}, 120000);\n \t\t\t\tscript.onerror = script.onload = onScriptComplete;\n \t\t\t\tdocument.head.appendChild(script);\n \t\t\t}\n \t\t}\n \t\treturn Promise.all(promises);\n \t};\n\n \t// expose the modules object (__webpack_modules__)\n \t__webpack_require__.m = modules;\n\n \t// expose the module cache\n \t__webpack_require__.c = installedModules;\n\n \t// define getter function for harmony exports\n \t__webpack_require__.d = function(exports, name, getter) {\n \t\tif(!__webpack_require__.o(exports, name)) {\n \t\t\tObject.defineProperty(exports, name, { enumerable: true, get: getter });\n \t\t}\n \t};\n\n \t// define __esModule on exports\n \t__webpack_require__.r = function(exports) {\n \t\tif(typeof Symbol !== 'undefined' && Symbol.toStringTag) {\n \t\t\tObject.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });\n \t\t}\n \t\tObject.defineProperty(exports, '__esModule', { value: true });\n \t};\n\n \t// create a fake namespace object\n \t// mode & 1: value is a module id, require it\n \t// mode & 2: merge all properties of value into the ns\n \t// mode & 4: return value when already ns object\n \t// mode & 8|1: behave like require\n \t__webpack_require__.t = function(value, mode) {\n \t\tif(mode & 1) value = __webpack_require__(value);\n \t\tif(mode & 8) return value;\n \t\tif((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;\n \t\tvar ns = Object.create(null);\n \t\t__webpack_require__.r(ns);\n \t\tObject.defineProperty(ns, 'default', { enumerable: true, value: value });\n \t\tif(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key));\n \t\treturn ns;\n \t};\n\n \t// getDefaultExport function for compatibility with non-harmony modules\n \t__webpack_require__.n = function(module) {\n \t\tvar getter = module && module.__esModule ?\n \t\t\tfunction getDefault() { return module['default']; } :\n \t\t\tfunction getModuleExports() { return module; };\n \t\t__webpack_require__.d(getter, 'a', getter);\n \t\treturn getter;\n \t};\n\n \t// Object.prototype.hasOwnProperty.call\n \t__webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };\n\n \t// __webpack_public_path__\n \t__webpack_require__.p = \"/\";\n\n \t// on error function for async loading\n \t__webpack_require__.oe = function(err) { console.error(err); throw err; };\n\n \tvar jsonpArray = this[\"webpackJsonpvision-web-app\"] = this[\"webpackJsonpvision-web-app\"] || [];\n \tvar oldJsonpFunction = jsonpArray.push.bind(jsonpArray);\n \tjsonpArray.push = webpackJsonpCallback;\n \tjsonpArray = jsonpArray.slice();\n \tfor(var i = 0; i < jsonpArray.length; i++) webpackJsonpCallback(jsonpArray[i]);\n \tvar parentJsonpFunction = oldJsonpFunction;\n\n\n \t// run deferred modules from other chunks\n \tcheckDeferredModules();\n"],"sourceRoot":""}
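The file above is webpack's JSONP runtime: __webpack_require__.e(chunkId) injects a script tag for an async chunk, and the chunk-to-hash map {3:"9013e23f"} indicates that chunk 3 (3.9013e23f.chunk.js) is the only lazily loaded chunk in this build. The dynamic import('web-vitals') in the bundled reportWebVitals source is exactly the kind of call webpack compiles into this loader. A rough sketch of the correspondence (the module id 42 is hypothetical):

// What the application source writes (this call appears in the bundled reportWebVitals):
import('web-vitals').then(({ getCLS }) => getCLS(console.log));

// Roughly what webpack 4 emits for it:
__webpack_require__.e(/* chunk id */ 3)
  .then(__webpack_require__.bind(null, /* hypothetical module id */ 42))
  .then(({ getCLS }) => getCLS(console.log));
// __webpack_require__.e(3) appends <script src="/static/js/3.9013e23f.chunk.js">
// to document.head and resolves once the chunk's webpackJsonp callback marks
// chunk 3 as installed (or rejects with a ChunkLoadError after the 120s timeout).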
static/manifest.json
ADDED
@@ -0,0 +1,15 @@
{
  "short_name": "Vision Web App",
  "name": "Multi-Model Object Detection Demo",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}
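Together with the service worker below, this manifest (short_name, icons, start_url, display: standalone) is what makes the app installable as a PWA. Nothing in this commit wires up an install prompt, so purely as an illustrative sketch, deferring Chrome's beforeinstallprompt event would look like this (installButton is a hypothetical DOM element):

// Illustrative only, not part of this commit: defer the browser's PWA
// install prompt until the user clicks a hypothetical installButton.
let deferredPrompt = null;

window.addEventListener('beforeinstallprompt', (event) => {
  event.preventDefault();   // suppress the automatic mini-infobar
  deferredPrompt = event;   // stash the event to trigger later
});

installButton.addEventListener('click', async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();  // show the install dialog
  const { outcome } = await deferredPrompt.userChoice;
  console.log('Install prompt outcome:', outcome);
  deferredPrompt = null;
});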
static/precache-manifest.053b14ee2ebd7996a78e6e055f2144fe.js
ADDED
@@ -0,0 +1,30 @@
self.__precacheManifest = (self.__precacheManifest || []).concat([
  {
    "revision": "55f7de6a753a31dd65b7dc1b57b2f81a",
    "url": "/index.html"
  },
  {
    "revision": "289219e1c2ed943c38e7",
    "url": "/static/css/main.59c2a54e.chunk.css"
  },
  {
    "revision": "836368d73d2b52d0618a",
    "url": "/static/js/2.252de3c4.chunk.js"
  },
  {
    "revision": "89a1b2dcd30c03705b2bceeb141b76b6",
    "url": "/static/js/2.252de3c4.chunk.js.LICENSE.txt"
  },
  {
    "revision": "70d68b45d511e1e11f23",
    "url": "/static/js/3.9013e23f.chunk.js"
  },
  {
    "revision": "289219e1c2ed943c38e7",
    "url": "/static/js/main.ad7f086c.chunk.js"
  },
  {
    "revision": "d8c310b0ac7ffa6d8151",
    "url": "/static/js/runtime-main.25710301.js"
  }
]);
static/service-worker.js
ADDED
@@ -0,0 +1,39 @@
/**
 * Welcome to your Workbox-powered service worker!
 *
 * You'll need to register this file in your web app and you should
 * disable HTTP caching for this file too.
 * See https://goo.gl/nhQhGp
 *
 * The rest of the code is auto-generated. Please don't update this file
 * directly; instead, make changes to your Workbox build configuration
 * and re-run your build process.
 * See https://goo.gl/2aRDsh
 */

importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");

importScripts(
  "/precache-manifest.053b14ee2ebd7996a78e6e055f2144fe.js"
);

self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'SKIP_WAITING') {
    self.skipWaiting();
  }
});

workbox.core.clientsClaim();

/**
 * The workboxSW.precacheAndRoute() method efficiently caches and responds to
 * requests for URLs in the manifest.
 * See https://goo.gl/S9QRab
 */
self.__precacheManifest = [].concat(self.__precacheManifest || []);
workbox.precaching.precacheAndRoute(self.__precacheManifest, {});

workbox.routing.registerNavigationRoute(workbox.precaching.getCacheKeyForURL("/index.html"), {

  blacklist: [/^\/_/,/\/[^\/?]+\.[^\/]+$/],
});
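As the header comment notes, this file only takes effect once the page registers it; no registration call is visible in the bundled index.js in this commit, so the snippet below is a minimal sketch rather than code from the repo:

// Minimal sketch (not in this commit): register the Workbox service worker
// after the page has loaded, and log the scope or any failure.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/service-worker.js')
      .then((registration) => {
        console.log('Service worker registered with scope:', registration.scope);
      })
      .catch((error) => {
        console.error('Service worker registration failed:', error);
      });
  });
}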