OpenCLIP
Yanqing0327 committed
Commit e8ed2fc · verified · 1 Parent(s): 1cc7535

Update README.md

Files changed (1)
  1. README.md +30 -1
README.md CHANGED
@@ -13,4 +13,33 @@ datasets:
  - **Paper:** [More Information Needed]
  - **Project Page:** https://ucsc-vlaa.github.io/CLIPS/

- More details will be updated soon.
+ ## Model Usage
+ ### With OpenCLIP
+ **Note:** Due to differences in the default LayerNorm epsilon values between JAX and PyTorch, we made some modifications in `open_clip/transformer.py` to align the model's behavior. Refer to https://github.com/UCSC-VLAA/CLIPS for more details (a short sketch of the epsilon difference follows the example below).
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from urllib.request import urlopen
+ from PIL import Image
+ from open_clip import create_model_from_pretrained, get_tokenizer
+
+ # Load the CLIPS checkpoint and its matching tokenizer from the Hugging Face Hub
+ model, preprocess = create_model_from_pretrained('hf-hub:UCSC-VLAA/ViT-L-14-CLIPS-Recap-DataComp-1B')
+ tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-L-14-CLIPS-Recap-DataComp-1B')
+
+ # Download and preprocess an example image
+ image = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+ image = preprocess(image).unsqueeze(0)
+
+ # Tokenize the candidate captions
+ text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)
+
+ # Encode, normalize, and score image-text similarity
+ with torch.no_grad(), torch.cuda.amp.autocast():
+     image_features = model.encode_image(image)
+     text_features = model.encode_text(text)
+     image_features = F.normalize(image_features, dim=-1)
+     text_features = F.normalize(text_features, dim=-1)
+
+     text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
+
+ print("Label probs:", text_probs)  # prints: [[0., 0., 0., 1.0]]
+ ```
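
For context on the epsilon note above: PyTorch's `torch.nn.LayerNorm` defaults to `eps=1e-5`, while JAX/Flax's `linen.LayerNorm` defaults to `1e-6`. The sketch below only illustrates that gap; the hidden size and variable names are made up for the example, and the actual change this model relies on lives in `open_clip/transformer.py` in the CLIPS repository linked above.

```python
import torch
import torch.nn as nn

# Illustrative only: PyTorch's LayerNorm default eps is 1e-5, whereas Flax's
# linen.LayerNorm defaults to 1e-6. The width 1024 is just an example value.
ln_pt_default = nn.LayerNorm(1024)             # eps=1e-5 (PyTorch default)
ln_jax_aligned = nn.LayerNorm(1024, eps=1e-6)  # matches the JAX-side default

x = torch.randn(2, 1024)
print(ln_pt_default.eps, ln_jax_aligned.eps)               # 1e-05 1e-06
print((ln_pt_default(x) - ln_jax_aligned(x)).abs().max())  # small but nonzero drift
```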