Delta_CLIP
DeltaCLIP-H/14-336 is adversarially pre-trained on web-scale image-text data to match the clean-data helpfulness of non-robust VLMs while remaining robust under adversarial attack. Example zero-shot classification usage:
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

model, preprocess = create_model_from_pretrained('hf-hub:zw123/delta_clip_l14_224')
tokenizer = get_tokenizer('hf-hub:zw123/delta_clip_l14_224')

# Load a test image and apply the model's preprocessing transform.
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Tokenize the candidate labels for zero-shot classification.
text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Cosine similarities scaled by 100, softmaxed into label probabilities.
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[0., 0., 0., 1.0]]
These models are released under the Creative Commons Attribution 4.0 license.
LLNL-DATA-2003001
If you find this model useful, please consider citing our paper:
@article{wang2025double,
  title={Double Visual Defense: Adversarial Pre-training and Instruction Tuning for Improving Vision-Language Model Robustness},
  author={Wang, Zeyu and Xie, Cihang and Bartoldson, Brian and Kailkhura, Bhavya},
  journal={arXiv preprint arXiv:2501.09446},
  year={2025}
}