banao-tech committed · verified
Commit 7c2d0c5 · Parent: 55244d1

Update README.md

Files changed (1): README.md (+61 -53)

---
license: mit
title: OmniParse
sdk: docker
emoji: 🐢
colorFrom: yellow
colorTo: green
---
# OmniParser API

Self-hosted version of Microsoft's [OmniParser](https://huggingface.co/microsoft/OmniParser) image-to-text model.

> OmniParser is a general screen parsing tool, which interprets/converts UI screenshot to structured format, to improve existing LLM based UI agent. Training Datasets include: 1) an interactable icon detection dataset, which was curated from popular web pages and automatically annotated to highlight clickable and actionable regions, and 2) an icon description dataset, designed to associate each UI element with its corresponding function.

## Why?

There's already a great Hugging Face Gradio [app](https://huggingface.co/spaces/microsoft/OmniParser) for this model, and it even offers an API. But:

- Gradio is much slower than serving the model directly (as we do here)
- The Hugging Face Space is rate-limited

## How it works

If you look at the Dockerfile, we start from the HF demo image to retrieve all the weights and utility functions, then add a simple FastAPI server (in main.py) to serve the model.
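
The server itself is small. Here is a rough sketch of the shape main.py takes; the `parse_screenshot` helper is a hypothetical stand-in for the detection-and-captioning utilities pulled from the demo image, and the response field names are illustrative, not the repo's actual schema:

```python
# Minimal sketch of a main.py-style server (an assumed shape, not the repo's code).
import base64
import io

from fastapi import FastAPI, UploadFile
from PIL import Image

app = FastAPI()


def parse_screenshot(image: Image.Image):
    """Placeholder for the OmniParser pipeline pulled in from the demo image:
    detect interactable regions, caption them, and draw the boxes. The real
    main.py wires this step to the downloaded weights."""
    raise NotImplementedError


@app.post("/process_image")
async def process_image(image_file: UploadFile):
    image = Image.open(io.BytesIO(await image_file.read())).convert("RGB")
    annotated, descriptions, coordinates = parse_screenshot(image)

    # Encode the annotated image as base64 so it can travel in a JSON body.
    buf = io.BytesIO()
    annotated.save(buf, format="PNG")
    return {
        # Field names are illustrative; check /docs for the actual schema.
        "image": base64.b64encode(buf.getvalue()).decode("utf-8"),
        "parsed_elements": descriptions,
        "coordinates": coordinates,
    }
```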

## Getting Started

### Requirements

- GPU
- 16 GB RAM (swap recommended)

### Locally

1. Clone the repository
2. Build the Docker image: `docker build -t omni-parser-app .`
3. Run the Docker container: `docker run -p 7860:7860 omni-parser-app` (a quick smoke test is sketched below)
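
Once the container is up, a quick way to check that the server is responding is to fetch the OpenAPI schema that FastAPI publishes by default:

```python
# Smoke test against a locally running container.
# FastAPI serves its OpenAPI schema at /openapi.json by default.
import requests

resp = requests.get("http://localhost:7860/openapi.json", timeout=10)
resp.raise_for_status()
print(sorted(resp.json()["paths"]))  # should list "/process_image"
```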

### Self-hosted API

I suggest hosting on [fly.io](https://fly.io) because it's quick and simple to deploy with the CLI.

This repo is ready-made for deployment on fly.io (see fly.toml for the configuration). Just run `fly launch` and follow the prompts.

## Docs

Visit `http://localhost:7860/docs` for the API documentation. There's only one route, `/process_image`, which returns the following (a sketch of calling it from Python appears after this list):

- The image with bounding boxes drawn on it, in base64 format
- The parsed elements in a list, with text descriptions
- The bounding box coordinates of the parsed elements
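
As a sketch of how you might call the endpoint from Python: the form-field and response-key names below are assumptions, so confirm them against the schema at `/docs` before relying on them.

```python
# Sketch of a client call; the form-field and response-key names below are
# assumptions. Confirm the real schema at http://localhost:7860/docs.
import base64

import requests

with open("examples/screenshot.png", "rb") as f:
    resp = requests.post(
        "http://localhost:7860/process_image",
        files={"image_file": f},  # assumed form-field name
        timeout=120,
    )
resp.raise_for_status()
data = resp.json()

# Save the annotated image (returned as base64) for inspection.
with open("annotated.png", "wb") as out:
    out.write(base64.b64decode(data["image"]))  # assumed response key

for element in data.get("parsed_elements", []):  # assumed response key
    print(element)
```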

## Examples

| Before Image | After Image |
| ---------------------------------- | ----------------------------- |
| ![Before](examples/screenshot.png) | ![After](examples/after.webp) |

## Related Projects

Check out [OneQuery](https://query-rho.vercel.app), an agent that browses the web and returns structured responses for any query, simple or complex. OneQuery is built using OmniParser to enhance its capabilities.