hanxunh nielsr HF Staff committed on
Commit edfbd47 · verified · 1 Parent(s): 769791b

Update pipeline tag and add usage instructions (#1)


- Update pipeline tag and add usage instructions (f4176bcffb7a4d69dacdcd20c990f02109001936)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
README.md +18 -8
README.md CHANGED
@@ -1,17 +1,18 @@
---
library_name: XTransferBench
license: mit
- pipeline_tag: zero-shot-classification
tags:
- not-for-all-audiences
- pytorch_model_hub_mixin
- model_hub_mixin
---

-
# X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
<div align="center">
<a href="https://arxiv.org/abs/2505.05528" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Baseline attacker [GD-UAP](https://arxiv.org/abs/1801.08092) used in ICML2025 paper ["X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP"](https://arxiv.org/abs/2505.05528)
@@ -19,7 +20,7 @@ Baseline attacker [GD-UAP](https://arxiv.org/abs/1801.08092) used in ICML2025 pa
---

## X-TransferBench
- X-TransferBench is an open-source benchmark that provides a comprehensive collection of UAPs/TUAPs capable of achieving universal adversarial transferability. These UAPs can simultaneously **transfer across data, domains, models**, and **tasks**. Essentially, they represent perturbations that can transform any sample into an adversarial example, effective against any model and for any task.

## Model Details

@@ -33,10 +34,20 @@ X-TransferBench is an open-source benchmark that provides a comprehensive collec
## Model Usage

```python
- from XTransferBench import attacker

- attacker = XTransferBench.zoo.load_attacker("linf_non_targeted", "gd_uap_dl_resnet_msc_with_all_data")
- images = # torch.Tensor [b, 3, h, w], values should be between 0 and 1
adv_images = attacker(images) # adversarial examples
```

@@ -65,5 +76,4 @@ booktitle={ICML},
year={2025},
}

- ```
-
 
---
library_name: XTransferBench
license: mit
+ pipeline_tag: image-to-image
tags:
- not-for-all-audiences
- pytorch_model_hub_mixin
- model_hub_mixin
---

# X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP
<div align="center">
<a href="https://arxiv.org/abs/2505.05528" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
+ <a href="https://github.com/HanxunH/XTransferBench" target="_blank"><img src="https://img.shields.io/badge/GitHub-code-blue" alt="GitHub"></a>
+ <a href="https://huggingface.co/spaces/hanxunh/XTransferBench-UAP-Linf" target="_blank"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Spaces-blue" alt="HuggingFace Spaces"></a>
</div>

Baseline attacker [GD-UAP](https://arxiv.org/abs/1801.08092) used in ICML2025 paper ["X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP"](https://arxiv.org/abs/2505.05528)
 
---

## X-TransferBench
+ X-TransferBench is an open-source benchmark that provides a comprehensive collection of UAPs/TUAPs capable of achieving universal adversarial transferability. These UAPs can simultaneously **transfer across data, domains, models**, and **tasks**. Essentially, they represent perturbations that can transform any sample into an adversarial example, effective against any model and for any task.

## Model Details

 
## Model Usage

```python
+ import XTransferBench
+ import XTransferBench.zoo
+
+ # List threat models
+ print(XTransferBench.zoo.list_threat_model())
+
+ # List UAPs under the L_inf threat model
+ print(XTransferBench.zoo.list_attacker('linf_non_targeted'))
+
+ # Load X-Transfer with the large search space (N=64), non-targeted
+ attacker = XTransferBench.zoo.load_attacker('linf_non_targeted', 'xtransfer_large_linf_eps12_non_targeted')
+
+ # Perturb images into adversarial examples
+ images = ...  # torch.Tensor [b, 3, h, w], values should be between 0 and 1
adv_images = attacker(images) # adversarial examples
```

 
76
  year={2025},
77
  }
78
 
79
+ ```
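
For readers skimming the diff, the effect of applying one of these L_inf-bounded UAPs can be sketched without the library. Below is a minimal NumPy illustration, not the XTransferBench implementation: the budget eps = 12/255 is an assumption read off the `linf_eps12` checkpoint name, and the random `delta` is a stand-in for a trained perturbation.

```python
import numpy as np

# Assumed L_inf budget, inferred from the 'linf_eps12' checkpoint name.
eps = 12 / 255

rng = np.random.default_rng(0)
images = rng.random((2, 3, 224, 224))          # batch of inputs in [0, 1]
delta = rng.uniform(-eps, eps, (3, 224, 224))  # one perturbation for any input

# A UAP is image-agnostic: the same delta is broadcast over the whole batch,
# then pixels are clipped back to the valid [0, 1] range. Clipping can only
# shrink the per-pixel change, so the L_inf budget is preserved.
adv_images = np.clip(images + delta, 0.0, 1.0)
print(adv_images.shape)  # (2, 3, 224, 224)
```

This is why the README asks for tensors with values between 0 and 1: the attacker adds its stored perturbation and clips, so inputs outside that range would break the budget guarantee.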