---
title: README
emoji: 
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---

<div align="center">
<b><font size="6">OpenGVLab</font></b>
</div>

Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in OpenGVLab stands for general vision: models that acquire a general understanding of visual data, so that little effort is needed to adapt them to new vision-based tasks.

# Models

- [InternVL](https://github.com/OpenGVLab/InternVL): a pioneering open-source alternative to GPT-4V.
- [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation model built on deformable convolutions.
- [InternVideo](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodal understanding.
- [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant for video comprehension. 
- [All-Seeing V1](https://github.com/OpenGVLab/all-seeing): towards panoptic visual recognition and understanding of the open world.
- [All-Seeing V2](https://github.com/OpenGVLab/all-seeing): towards general relation comprehension of the open world.

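Checkpoints for these models are also published on the Hugging Face Hub under the OpenGVLab organization. The sketch below shows one way to fetch a model with 🤗 Transformers; the checkpoint name `OpenGVLab/InternVL2-8B` and the `trust_remote_code` requirement are assumptions here, so check the individual model card for the exact loading recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint name; substitute any model from the OpenGVLab org on the Hub.
path = "OpenGVLab/InternVL2-8B"

# Many OpenGVLab checkpoints ship custom modeling code, hence trust_remote_code=True.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
```

Each repository linked above documents its own inference and fine-tuning interface; the snippet only covers downloading the weights.
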
# Datasets

- [ShareGPT4o](https://sharegpt4o.github.io/): a large-scale resource that we plan to open-source, comprising 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
- [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid): a large-scale video-text dataset for multimodal understanding and generation.

# Benchmarks

- [MVBench](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2): a comprehensive benchmark for multimodal video understanding.