Update README.md
base_model: Salesforce/blip2-opt-2.7b
---

# VLRM

This repository contains the weights of the BLIP-2 OPT-2.7B model fine-tuned with the reinforcement learning method introduced in the paper [VLRM: Vision-Language Models act as Reward Models for Image Captioning](https://arxiv.com).

The RL-tuned model generates longer and more comprehensive descriptions with zero computational overhead compared to the original model.

You can find further details in the [GitHub repository](https://github.com/papermsucode).
# Running the model
## Option 1
<details>
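A minimal sketch of using the fine-tuned checkpoint directly through the standard `transformers` BLIP-2 classes; the demo image URL, the float16/CUDA placement, and the `max_new_tokens` value are illustrative assumptions, not values taken from this repository:

```python
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load the processor and the RL-tuned BLIP-2 checkpoint from the Hub.
processor = Blip2Processor.from_pretrained("sashakunitsyn/vlrm-blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "sashakunitsyn/vlrm-blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

# Placeholder demo image; any RGB image works.
url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```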
</details>

## Option 2
Since the fine-tuned layers make up only a small part of the whole model, you can first load the original model and then load the RL-tuned weights on top.
<details>
<summary> Step 1. Load the original model </summary>
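A minimal sketch of loading the base checkpoint, assuming the standard `transformers` BLIP-2 classes (the float16/CUDA placement is an assumption, not a requirement):

```python
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load the original BLIP-2 OPT-2.7B weights; the RL-tuned layers are applied on top in Step 2.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")
```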
</details>

<details>
<summary> Step 2. Load the RL-tuned weights </summary>
Available checkpoints:

- `vlrm-blip2-opt-2.7b.pt` (VLRM in the paper)
- `vlrm-rs-blip2-opt-2.7b.pt` (VLRM-RS in the paper)

```python
import torch
from huggingface_hub import hf_hub_download

# Download the RL-tuned checkpoint from the Hub and read its state dict.
# Swap in "vlrm-rs-blip2-opt-2.7b.pt" for the VLRM-RS variant.
finetuned_weights_state_dict = torch.load(
    hf_hub_download(repo_id="sashakunitsyn/vlrm-blip2-opt-2.7b", filename="vlrm-blip2-opt-2.7b.pt")
)
```
</details>
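
The downloaded state dict covers only the RL-tuned layers, so it still has to be applied on top of the base model from Step 1. A minimal sketch; `strict=False` is needed because the checkpoint does not contain the full set of model weights:

```python
# Overlay the RL-tuned layers onto the base model from Step 1.
# strict=False: the checkpoint holds only the fine-tuned subset of weights.
model.load_state_dict(finetuned_weights_state_dict, strict=False)
```

After this call, caption generation works exactly as in Option 1.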