I love Depth Anything V2 😍 It’s Depth Anything, but scaled up with both a larger teacher model and a gigantic dataset! Let’s unpack 🤓🧶!

![image_1](image_1.jpg)

The authors analyzed Marigold, a diffusion-based model, against Depth Anything and looked into what happens when you train on synthetic images vs. real images for MDE: 🔖  
Real data has a lot of label noise and inaccurate depth maps (caused by depth sensors missing transparent objects, etc.).

![image_2](image_2.jpg)

The authors train different image encoders only on synthetic images and find that unless the encoder is very large, the model can’t generalize well (but large models generalize inherently anyway) 🧐 Even then, these models still struggle on real images, which have a much wider label distribution.  

![image_3](image_3.jpg)

The Depth Anything V2 framework is to...  
🦖 Train a teacher model based on DINOv2-G on 595K synthetic images  
🏷️ Pseudo-label 62M real images using the teacher model (a rough sketch of this step follows the figure below)  
🦕 Train a student model on the real images labelled by the teacher  
Result: 10x faster and more accurate than Marigold!  

![image_4](image_4.jpg)  
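
Here is a minimal sketch of the pseudo-labelling step, assuming the 🤗 transformers depth-estimation pipeline; the checkpoint id and folder names are illustrative placeholders (the actual teacher is the DINOv2-G model, while the released Hugging Face checkpoints are the distilled students):

```python
# Hedged sketch: pseudo-label unlabelled real images with a depth model.
# The model id and paths below are placeholders, not the paper's exact setup.
from pathlib import Path

import numpy as np
from PIL import Image
from transformers import pipeline

teacher = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Large-hf")

unlabeled_dir = Path("real_images")   # placeholder: folder of unlabelled real photos
out_dir = Path("pseudo_labels")
out_dir.mkdir(exist_ok=True)

for path in sorted(unlabeled_dir.glob("*.jpg")):
    prediction = teacher(Image.open(path))  # dict with "depth" (PIL image) and "predicted_depth" (tensor)
    np.save(out_dir / f"{path.stem}.npy", np.array(prediction["depth"]))
```

The saved depth maps then serve as the training targets for the smaller student models.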


The authors also construct a new benchmark called DA-2K that is less noisy, highly detailed and more diverse!  
I have created a [collection](https://t.co/3fAB9b2sxi) that has the models, the dataset, the demo and a CoreML-converted model 😚
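
If you want to try one of the released checkpoints quickly, here is a minimal inference sketch using the 🤗 transformers depth-estimation pipeline (the model id is an assumption; check the collection for the exact checkpoint names):

```python
# Minimal inference sketch with a Depth Anything V2 checkpoint.
import requests
from PIL import Image
from transformers import pipeline

pipe = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any test image works
image = Image.open(requests.get(url, stream=True).raw)

result = pipe(image)
result["depth"].save("depth.png")  # relative depth map as a grayscale PIL image
```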

> [!TIP]
> Resources:  
> [Depth Anything V2](https://arxiv.org/abs/2406.09414)  
> by Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao (2024)  
> [GitHub](https://github.com/DepthAnything/Depth-Anything-V2)  
> [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/depth_anything_v2)

> [!NOTE]
> [Original tweet](https://twitter.com/mervenoyann/status/1803063120354492658) (June 18, 2024)