drgary committed · verified
Commit 5153ef3 · 1 Parent(s): 441388a

Upload 3 files

Files changed (3)
  1. README.md +151 -0
  2. pytorch_model.bin +3 -0
  3. special_tokens_map.json +7 -0
README.md ADDED
---
pipeline_tag: image-to-text
tags:
- image-captioning
languages:
- en
license: bsd-3-clause
---

# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for image captioning pretrained on the COCO dataset (base architecture with a ViT large backbone).

| ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) |
|:--:|
| <b>Figure pulled from the official BLIP repo | Image source: https://github.com/salesforce/BLIP</b> |

## TL;DR

The authors of the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*

## Usage

You can use this model for conditional and unconditional image captioning.

### Using the PyTorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
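
The snippets above rely on the default generation settings. As a minimal sketch that is not part of the original card, the standard `transformers` generation arguments (for example `max_new_tokens` and `num_beams`) can be passed through `model.generate` to control caption length and decoding strategy, reusing `model`, `processor`, and `inputs` from any of the snippets above:

```python
# Illustrative only: generic transformers generation options, not settings
# recommended by the BLIP authors or this model card.
out = model.generate(
    **inputs,
    max_new_tokens=30,  # cap the length of the generated caption
    num_beams=5,        # beam search instead of greedy decoding
)
print(processor.decode(out[0], skip_special_tokens=True))
```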

## Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

## BibTeX and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:66c8aec8d91b5e74b86bde343ed95fb53e9ccf9ffc77c5093890df662d234e04
size 1879143921
special_tokens_map.json ADDED
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
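
These are the BERT-style special tokens used by the model's tokenizer. As a small illustrative sketch, not part of the uploaded files, the same mapping can be inspected through the tokenizer wrapped by `BlipProcessor`:

```python
from transformers import BlipProcessor

# Load the processor; its tokenizer carries the special tokens listed above.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")

# Expected to mirror special_tokens_map.json,
# e.g. {'cls_token': '[CLS]', 'sep_token': '[SEP]', ...}
print(processor.tokenizer.special_tokens_map)
print(processor.tokenizer.pad_token_id)  # integer id of the [PAD] token
```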