Update README.md
README.md CHANGED
@@ -17,8 +17,7 @@ Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on Vi
 
 - [InternVL](https://github.com/OpenGVLab/InternVL): a pioneering open-source alternative to GPT-4V.
 - [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation models with deformable convolutions.
-- [InternVideo](https://github.com/OpenGVLab/InternVideo): video foundation models
-- [InternVideo2](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodality understanding.
+- [InternVideo](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodal understanding.
 - [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant for video comprehension.
 - [All Seeing]():
 - [All Seeing V2]():
@@ -28,7 +27,9 @@ Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on Vi
 # Datasets
 
 - [ShareGPT4o]():
-- [InternVid]():
-
+- [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid): a large-scale video-text dataset for multimodal understanding and generation.
+
+# Benchmarks
+- [MVBench](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2): a comprehensive benchmark for multimodal video understanding.
 
 