Add link to code repository
#2 · opened by nielsr (HF staff)

README.md CHANGED
````diff
@@ -1,20 +1,23 @@
 ---
-license: mit
-pipeline_tag: image-text-to-text
-library_name: transformers
 base_model:
 - internlm/internlm2-chat-1_8b
-base_model_relation: merge
 language:
 - multilingual
+library_name: transformers
+license: mit
+pipeline_tag: image-text-to-text
 tags:
 - internvl
 - vision-language model
 - monolithic
+base_model_relation: merge
 ---
+
 # HoVLE
 
-[…]
+[\[📜 HoVLE Paper\]](https://arxiv.org/pdf/2412.16158) [\[🚀 Quick Start\]](#quick-start)
+
+Code: https://github.com/OpenGVLab/HoVLE
 
 <a id="radar"></a>
 
````
````diff
@@ -40,12 +43,11 @@ This repository releases the HoVLE model with 2.6B parameters. It is built upon
 | | Details |
 | :---------------------------: | :---------- |
 | Architecture | The whole model consists of a holistic embedding module and an LLM. The holistic embedding module consists of the same causal Transformer layers as the LLM. It accepts both images and texts as input, and projects them into a unified embedding space. These embeddings are then forwarded into the LLM, constituting a monolithic VLM. |
-| Stage I (Distillation) | The first stage trains the holistic embedding module to distill the image feature from a pre-trained visual encoder and the text embeddings from …
+| Stage I (Distillation) | The first stage trains the holistic embedding module to distill the image feature from a pre-trained visual encoder and the text embeddings from the LLM, providing general encoding abilities. Only the holistic embedding module is trainable. |
 | Stage II (Alignment) | The second stage combines the holistic embedding module with the LLM to perform auto-regressive training, aligning different modalities to a shared embedding space. Only the holistic embedding module is trainable. |
 | Stage III (Instruction Tuning) | A visual instruction tuning stage is incorporated to further strengthen the whole VLM to follow instructions. The whole model is trainable. |
 
 
-
 ## Performance
 <p align="middle">
 <img src="assets/performance1.png" width="90%" />
````
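The three training stages in the table above differ mainly in which components are trainable. As a rough illustration of that freezing schedule (a plain-Python sketch with illustrative module names, not the released HoVLE training code, which would toggle `requires_grad` on the corresponding parameter groups):

```python
# Per-stage trainability, as described in the table: the holistic embedding
# module is trained in all three stages, while the LLM unfreezes only in
# Stage III. Module names here are illustrative, not the real code.
STAGES = {
    "I (Distillation)":         {"holistic_embedding": True, "llm": False},
    "II (Alignment)":           {"holistic_embedding": True, "llm": False},
    "III (Instruction Tuning)": {"holistic_embedding": True, "llm": True},
}

def trainable_modules(stage):
    """Return the modules whose parameters are updated in a given stage."""
    return sorted(name for name, on in STAGES[stage].items() if on)

for stage in STAGES:
    print(stage, "->", trainable_modules(stage))
```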
````diff
@@ -58,11 +60,9 @@ This repository releases the HoVLE model with 2.6B parameters. It is built upon
 - Please note that evaluating the same model using different testing toolkits can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
 
 
-
 Limitations: Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
 
 
-
 ## Quick Start
 
 We provide an example code to run HoVLE inference using `transformers`.
````
````diff
@@ -105,7 +105,7 @@ def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_
             best_ratio_diff = ratio_diff
             best_ratio = ratio
         elif ratio_diff == best_ratio_diff:
-            if area > 0.5 * image_size * …
+            if area > 0.5 * image_size * ratio[0] * ratio[1]:
                 best_ratio = ratio
     return best_ratio
 
````
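For context, the hunk above sits inside the README's `find_closest_aspect_ratio` helper, which picks the tiling grid whose aspect ratio best matches the input image. A self-contained reconstruction follows; only the tie-breaking branch is confirmed by the diff, while the surrounding scaffolding follows the common InternVL-style preprocessing and should be treated as an assumption:

```python
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    """Pick the (cols, rows) tiling whose aspect ratio is closest to the image's.

    Reconstruction sketch: only the tie-breaking `if` below appears in the
    diff; the rest mirrors the usual InternVL-style preprocessing.
    """
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            # Tie-break: prefer the larger grid when the image has enough area.
            if area > 0.5 * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

# e.g. a square 800x800 image with 448-px tiles
print(find_closest_aspect_ratio(1.0, [(1, 1), (1, 2), (2, 1), (2, 2)], 800, 800, 448))
```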
````diff
@@ -190,10 +190,8 @@ print(f'User: {question}\nAssistant: {response}')
 question = 'Please write a poem according to the image.'
 response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
 print(f'User: {question}\nAssistant: {response}')
-
 ```
 
-
 ## License
 
 This project is released under the MIT license, while InternLM2 is licensed under the Apache-2.0 license.
````
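The multi-turn snippet in the last hunk threads conversation state through `history` and `return_history=True`. That calling pattern can be mimicked with a minimal stand-in (a hypothetical `EchoModel`, not the real HoVLE `model.chat` implementation) to show how each turn's question/response pair accumulates:

```python
class EchoModel:
    """Toy stand-in for HoVLE's chat API; it just echoes the question.

    Mirrors the call shape `response, history = model.chat(..., history=...,
    return_history=True)` from the README; NOT the real implementation.
    """

    def chat(self, question, history=None, return_history=False):
        history = list(history or [])          # start or copy the conversation
        response = f"You asked: {question}"    # placeholder for model output
        history.append((question, response))   # each turn appends a (q, a) pair
        return (response, history) if return_history else response

model = EchoModel()
response, history = model.chat('Describe the image.',
                               history=None, return_history=True)
response, history = model.chat('Please write a poem according to the image.',
                               history=history, return_history=True)
print(len(history))  # two turns accumulated
```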