Weiyun1025 committed
Commit 0ad4b9f · verified · 1 Parent(s): 95fd2fa

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +6 -3
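For context, a commit message like this one is typically produced with the `huggingface_hub` upload API. A minimal sketch of how such a commit could be created (the repo id below is assumed from this model card; the token is read from the local login):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`

# Upload the local README.md to the repo root; each call creates one commit
# with the given message, like the commit shown on this page.
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="OpenGVLab/InternVL3-2B-Pretrained",  # assumed from context
    commit_message="Upload README.md with huggingface_hub",
)
```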
README.md CHANGED
@@ -5,8 +5,9 @@ license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
  pipeline_tag: image-text-to-text
  library_name: transformers
  base_model:
- - OpenGVLab/InternVL3-2B-Instruct
- base_model_relation: finetune
+ - OpenGVLab/InternViT-300M-448px-V2_5
+ - Qwen/Qwen2.5-1.5B
+ base_model_relation: merge
  datasets:
  - OpenGVLab/MMPR-v1.2
  language:
@@ -16,7 +17,7 @@ tags:
  - custom_code
  ---

- # InternVL3-2B
+ # InternVL3-2B-Pretrained

  [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)

@@ -28,6 +29,8 @@ tags:

  ## Introduction

+ ***This is the pretrained version of InternVL3-2B, which has undergone native multimodal pre-training but has not undergone post-training (i.e., SFT and MPO). If you're unsure which version to use, please use the [InternVL3-2B](https://huggingface.co/OpenGVLab/InternVL3-2B) version.***
+
  We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
  Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
  Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefiting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
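The substantive change in the first hunk is card metadata: the card no longer declares `OpenGVLab/InternVL3-2B-Instruct` as a finetune parent, and instead lists the two components the model was assembled from, with `base_model_relation: merge`. A sketch of applying the same front-matter change programmatically with `huggingface_hub` (repo id assumed, as above):

```python
from huggingface_hub import metadata_update

# Rewrite the YAML front matter of README.md in place, mirroring this commit:
# two base models (vision encoder + language model) and a "merge" relation
# instead of the earlier "finetune" relation to the Instruct checkpoint.
metadata_update(
    repo_id="OpenGVLab/InternVL3-2B-Pretrained",  # assumed from context
    metadata={
        "base_model": [
            "OpenGVLab/InternViT-300M-448px-V2_5",
            "Qwen/Qwen2.5-1.5B",
        ],
        "base_model_relation": "merge",
    },
    overwrite=True,  # required to replace existing base_model values
)
```

Uploading the full README, as this commit does, reaches the same end state; `metadata_update` only touches the front matter and leaves the card body alone.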