Update README.md
README.md CHANGED
@@ -15,7 +15,9 @@ Disclaimer: The team releasing Depth Anything did not write a model card for thi
 
 ## Model description
 
-Depth Anything leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2] backbone.
+Depth Anything leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.
+
+The model is trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.
 
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
 alt="drawing" width="600"/>
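For readers of this change: the paragraph being edited describes a depth-estimation model, so a short usage sketch may make the description concrete. This is a minimal example using the `transformers` depth-estimation pipeline; the checkpoint name is an assumption for illustration and is not stated anywhere in this diff.

```python
from transformers import pipeline
from PIL import Image
import requests

# The checkpoint name below is an assumption for illustration only;
# substitute the checkpoint that this model card actually describes.
pipe = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

# Any RGB image works; this COCO sample is just a convenient test input.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = pipe(image)
predicted_depth = result["predicted_depth"]  # raw depth tensor
depth_map = result["depth"]                  # depth map rendered as a PIL image
```

The pipeline loads the image processor associated with the checkpoint, so resizing and normalization of the input image are handled internally before the DPT-style decoder produces the depth map.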