---
title: README
emoji: ⚡
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
<div align="center">
  <b><font size="6">OpenGVLab</font></b>
</div>
Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on vision-centric AI research. The "GV" in OpenGVLab stands for general vision: a general understanding of vision, so that little effort is needed to adapt to new vision-based tasks.
# Models | |
- [InternVL](https://github.com/OpenGVLab/InternVL): a pioneering open-source alternative to GPT-4V (see the loading sketch after this list).
- [InternImage](https://github.com/OpenGVLab/InternImage): a large-scale vision foundation model built on deformable convolutions.
- [InternVideo](https://github.com/OpenGVLab/InternVideo): video foundation models combining generative and discriminative learning.
- [InternVideo2](https://github.com/OpenGVLab/InternVideo): large-scale video foundation models for multimodal understanding.
- [VideoChat](https://github.com/OpenGVLab/Ask-Anything): an end-to-end chat assistant with robust performance.
- [All Seeing](): a project towards panoptic visual recognition and understanding of the open world.
- [All Seeing V2](): towards general relation comprehension of the open world.
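Most of these models also have checkpoints on the Hugging Face Hub under the OpenGVLab organization. The snippet below is a minimal, illustrative sketch of loading one of them with `transformers`; the checkpoint name and dtype are example choices, so check the individual model cards for the exact usage, preprocessing, and chat template each model expects.

```python
# Minimal sketch: loading an OpenGVLab checkpoint from the Hugging Face Hub.
# The model ID and dtype are illustrative; see the model card for exact usage.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/InternVL2-8B"  # example checkpoint; other sizes exist

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to reduce memory use
    trust_remote_code=True,       # the repo ships custom modeling code
).eval()
```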
# Datasets | |
- [ShareGPT4o](): a multimodal dataset of detailed image, video, and audio descriptions generated by GPT-4o.
- [InternVid](): a large-scale video-text dataset for multimodal understanding and generation.