license: mit
short_description: Interactive tool to visualize patch-wise similarity in image
---

# DINOv3 Patch Similarity Viewer [Github Repo](https://github.com/devMuniz02/DINOv3-Interactive-Patch-Cosine-Similarity)

> **Note:** This README and repository are for educational purposes. The creation of this repo was inspired by the DINOv3 paper to help visualize and understand the output of the model.

## Purpose

This repository provides interactive tools to visualize and explore patch-wise similarity in images using the DINOv3 vision transformer model. It is designed for researchers, students, and practitioners interested in understanding how self-supervised vision transformers perceive and relate different regions of an image.

## About DINOv3

- **Paper:** [DINOv3: Self-supervised Vision Transformers with Enormous Teacher Models](https://arxiv.org/abs/2508.10104)
- **Meta Research Page:** [Meta DINOv3 Publication](https://ai.meta.com/dinov3/)
- **Official GitHub:** [facebookresearch/dinov3](https://github.com/facebookresearch/dinov3)

**Note:**
The DINOv3 model weights require access approval.
You can request access via the [Meta Research page](https://ai.meta.com/resources/models-and-libraries/dinov3-downloads/) or by selecting the desired model in the [Hugging Face model collection](https://huggingface.co/collections/facebook/dinov3-68924841bd6b561778e31009).

## Features

- **Interactive Visualization:** Click on image patches or use arrow keys to explore patch similarity heatmaps.
- **Single or Two-Image Mode:** With one image, the tool shows self-similarity; with two images, it interactively shows both self-similarity and cross-image similarity overlays.
- **Image Preprocessing:** Loads and pads images without resizing, preserving the original aspect ratio.
- **Cosine Similarity Calculation:** Computes and visualizes cosine similarity between image patches.
- **Robust Fallback:** If an image URL fails to load, a default image is used.

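The cosine-similarity step above can be sketched in a few lines. This is an illustrative sketch, not the repository's code: it assumes patch features have already been extracted as a `(num_patches, feat_dim)` array (e.g. the patch tokens from a DINOv3 forward pass), and the `cosine_similarity_map` name is hypothetical.

```python
import numpy as np

def cosine_similarity_map(patch_feats: np.ndarray, query_idx: int) -> np.ndarray:
    """Cosine similarity between one query patch and every patch.

    patch_feats: (num_patches, feat_dim) patch embeddings, e.g. the patch
    tokens returned by a DINOv3 forward pass.
    """
    normed = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    return normed @ normed[query_idx]  # (num_patches,) values in [-1, 1]

# Toy stand-in for real DINOv3 features: a 14x14 patch grid at ViT-S width.
feats = np.random.default_rng(0).normal(size=(196, 384))
sim = cosine_similarity_map(feats, query_idx=0)
heatmap = sim.reshape(14, 14)  # reshape to the patch grid for overlay display
```

The query patch's similarity with itself is exactly 1, which is why the selected cell always appears brightest in the heatmap.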
## Installation

Install dependencies with:

```bash
pip install -r requirements.txt
```

## Model Selection

You can choose from several DINOv3 models available on Hugging Face (click to view each model card):

LVD-1689M Dataset (Web data)
- ViT
  - [facebook/dinov3-vit7b16-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vit7b16-pretrain-lvd1689m)
  - [facebook/dinov3-vits16-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vits16-pretrain-lvd1689m)
  - [facebook/dinov3-vits16plus-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vits16plus-pretrain-lvd1689m)
  - [facebook/dinov3-vitb16-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vitb16-pretrain-lvd1689m)
  - [facebook/dinov3-vitl16-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vitl16-pretrain-lvd1689m)
  - [facebook/dinov3-vith16plus-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vith16plus-pretrain-lvd1689m)
- ConvNeXt
  - [facebook/dinov3-convnext-tiny-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-convnext-tiny-pretrain-lvd1689m)
  - [facebook/dinov3-convnext-small-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-convnext-small-pretrain-lvd1689m)
  - [facebook/dinov3-convnext-base-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-convnext-base-pretrain-lvd1689m)
  - [facebook/dinov3-convnext-large-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-convnext-large-pretrain-lvd1689m)

SAT-493M Dataset (Satellite data)
- ViT
  - [facebook/dinov3-vitl16-pretrain-sat493m](https://huggingface.co/facebook/dinov3-vitl16-pretrain-sat493m)
  - [facebook/dinov3-vit7b16-pretrain-sat493m](https://huggingface.co/facebook/dinov3-vit7b16-pretrain-sat493m)

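All of the ids above follow one naming pattern, so a small helper can assemble them. This helper is an illustrative convenience, not part of the repository:

```python
def dinov3_model_id(backbone: str, dataset: str = "lvd1689m") -> str:
    """Assemble a Hugging Face model id from the backbone/dataset names above."""
    return f"facebook/dinov3-{backbone}-pretrain-{dataset}"

# Once access is approved, the id can be passed to transformers, e.g.:
#   from transformers import AutoModel
#   model = AutoModel.from_pretrained(dinov3_model_id("vits16"))
print(dinov3_model_id("vitl16", "sat493m"))  # facebook/dinov3-vitl16-pretrain-sat493m
```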
## Usage

### Gradio app

Run the Gradio app:

```bash
python app.py
```

After running the app, go to [http://localhost:7860/](http://localhost:7860/) to see it running.

Then:
- Choose the dataset and model name
- For single-image similarity:
  - Choose only one file or URL
- For two-image similarity:
  - Choose images from file and/or URL
- Click the "Initialize / Update" button
- Select the desired patch in the image
- View the results

**Note:**
*Overlay alpha* controls the intensity of the patch overlay on top of the image.

### Python Script

Run the interactive viewer with the default COCO image:

```bash
python DINOv3CosSimilarity.py
```

#### Single Image Mode

Specify your own image (local path or URL):

```bash
python DINOv3CosSimilarity.py --image path/to/your/image.jpg
python DINOv3CosSimilarity.py --image https://yourdomain.com/image.png
```

#### Two Image Mode

Specify two images (local paths or URLs):

```bash
python DINOv3CosSimilarity.py --image1 path/to/image1.jpg --image2 path/to/image2.jpg
python DINOv3CosSimilarity.py --image1 https://yourdomain.com/image1.png --image2 https://yourdomain.com/image2.png
```

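In two-image mode the same cosine measure is applied across images: the query patch from one image is scored against every patch of the other. A minimal sketch, with hypothetical names and assuming features have already been extracted for both images:

```python
import numpy as np

def cross_image_similarity(feats_a: np.ndarray, feats_b: np.ndarray,
                           query_idx: int) -> np.ndarray:
    """Similarity of one patch of image A against every patch of image B."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return b @ a[query_idx]  # (num_patches_b,)

# Toy features; the two images need not have the same number of patches.
rng = np.random.default_rng(1)
feats1, feats2 = rng.normal(size=(196, 384)), rng.normal(size=(210, 384))
sim = cross_image_similarity(feats1, feats2, query_idx=42)
best_match = int(sim.argmax())  # image B's most similar patch
```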
#### Model Selection

Specify the model with `--model` (the default is `vits16`):

```bash
python DINOv3CosSimilarity.py --model facebook/dinov3-vitb16-pretrain-lvd1689m
```

#### Other Options

- `--show_grid`: Draw the patch grid
- `--annotate_indices`: Write patch indices on cells
- `--overlay_alpha <float>`: Set the heatmap alpha (default 0.55)
- `--patch_size <int>`: Override the patch size (default: the model's patch size)

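Since `--patch_size` controls how the image is carved into cells, the grid geometry is easy to sketch. The helpers below are hypothetical (not the script's actual functions); they assume the image is padded up to a multiple of the patch size, as described under Features.

```python
import math

def grid_shape(width: int, height: int, patch_size: int = 16) -> tuple[int, int]:
    """(rows, cols) of the patch grid after padding up to a patch multiple."""
    return math.ceil(height / patch_size), math.ceil(width / patch_size)

def patch_index(x: int, y: int, width: int, patch_size: int = 16) -> int:
    """Row-major patch index for a pixel click at (x, y)."""
    cols = math.ceil(width / patch_size)
    return (y // patch_size) * cols + (x // patch_size)

print(grid_shape(640, 480))       # (30, 40): 16-pixel patches on a 640x480 image
print(patch_index(100, 50, 640))  # row 3, col 6 -> 3 * 40 + 6 = 126
```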
#### Controls

- Mouse click to select a patch
- Arrow keys to move selection
- '1', '2', or 't' to switch active image (in two-image mode)
- 'q' to quit

## Demo Single Image



## Demo Two Images



### Jupyter Notebook

1. Open `PatchCosSimilarity.ipynb` in Jupyter Notebook.
2. Run the cells to load an image and visualize patch similarities.
3. Set `url1` for single-image mode, or both `url1` and `url2` for two-image mode.
4. If an image fails to load, a default image will be used automatically.
5. Set the `model_id` variable to any of the models listed above (see the commented lines at the top of the notebook).

**Notebook Controls:**
- Mouse click to select a patch
- Arrow keys to move selection
- '1', '2', or 't' to switch active image (in two-image mode)

## License

This project is licensed under the MIT License. See the `LICENSE` file for details.

## Acknowledgments

This project utilizes the DINOv3 model from Hugging Face's Transformers library, along with PyTorch, Matplotlib, and Pillow.