---
title: README
emoji: 
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
OpenGVLab

Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in OpenGVLab stands for general vision: a general understanding of vision, so that little effort is needed to adapt to new vision-based tasks.

Models

  • InternVL: a pioneering open-source alternative to GPT-4V.
  • InternImage: a large-scale vision foundation model built on deformable convolutions.
  • InternVideo: large-scale video foundation models for multimodal understanding.
  • VideoChat: an end-to-end chat assistant for video comprehension.
  • All-Seeing-V1: towards panoptic visual recognition and understanding of the open world.
  • All-Seeing-V2: towards general relation comprehension of the open world.

Datasets

  • ShareGPT4o: a large-scale resource that we plan to open-source, comprising 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
  • InternVid: a large-scale video-text dataset for multimodal understanding and generation.

Benchmarks

  • MVBench: a comprehensive benchmark for multimodal video understanding.