Update README.md
README.md CHANGED
@@ -18,7 +18,10 @@ pipeline_tag: visual-question-answering
> _Two interns holding hands, symbolizing the integration of InternViT and InternLM._

-[\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HuggingFace Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL)
+
+[\[🚀 Quick Start\]](#model-usage) [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376)
+

You can run multimodal large models using a 1080Ti now.
@@ -55,7 +58,7 @@ As shown in the figure below, we adopted the same model architecture as InternVL
## Performance

+

## Model Usage
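For context on the "run it on a 1080Ti" claim and the `Model Usage` section that the new Quick Start link points to, below is a minimal, hedged sketch of loading a Mini-InternVL-style checkpoint in half precision on a single ~11 GB consumer GPU. The repository id, dtype, and loading flags are illustrative assumptions and are not part of this commit.

```python
# Hedged sketch (not from this commit): load an InternVL-family checkpoint in
# fp16 so a ~2B-parameter chat model fits on a single 11 GB GPU such as a 1080Ti.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/Mini-InternVL-Chat-2B-V1-5"  # assumed repo id for illustration

model = (
    AutoModel.from_pretrained(
        path,
        torch_dtype=torch.float16,   # half precision keeps memory use well under 11 GB
        low_cpu_mem_usage=True,
        trust_remote_code=True,      # InternVL checkpoints ship custom modeling code
    )
    .eval()
    .cuda()
)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
# The checkpoint's own README documents the image-question chat call to use from here.
```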