Update README.md
README.md CHANGED
@@ -19,8 +19,7 @@ Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on Vi
 - [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation models with deformable convolutions.
 - [InternVideo](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodal understanding.
 - [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant for video comprehension.
-- [All-Seeing-
-- [All Seeing V2](https://github.com/OpenGVLab/all-seeing): towards general relation comprehension of the open world.
+- [All-Seeing-Project](https://github.com/OpenGVLab/all-seeing): towards panoptic visual recognition and understanding of the open world.

 # Datasets
