Update README.md
README.md CHANGED
@@ -813,4 +813,182 @@ configs:
  data_files:
  - split: train
    path: vit/train-*
pretty_name: LoWRA-Bench
---

# Dataset Card for the LoWRA Bench Dataset

The ***Lo***RA ***W***eight ***R***ecovery ***A***ttack (LoWRA) Bench is a comprehensive benchmark designed to evaluate Pre-Fine-Tuning (Pre-FT) weight recovery methods, as presented in the "Recovering the Pre-Fine-Tuning Weights of Generative Models" paper.

- [Task Details](#task-details)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
  - [Data Subsets](#data-subsets)
  - [Data Fields](#data-fields)
  - [Layer Merging Example](#layer-merging-example)
- [Dataset Creation](#dataset-creation)
- [Risks and Out-of-Scope Use](#risks-and-out-of-scope-use)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

- **🌐 Homepage:** https://vision.huji.ac.il/spectral_detuning/
- **🧑‍💻 Repository:** https://github.com/eliahuhorwitz/spectral_detuning
- **📃 Paper:** http://arxiv.org/abs/
- **✉️ Point of Contact:**

## Task Details
**Pre-Fine-Tuning Weight Recovery Attack Setting:** We uncover a vulnerability in LoRA fine-tuned models wherein an attacker is able to undo the fine-tuning process and recover the weights of the original pre-trained model. The setting for the vulnerability is as follows:

(a) The attacker only has access to n different LoRA fine-tuned models.

(b) The attacker assumes that all n models originated from the same source model.

(c) Using only the n visible models, the attacker attempts to recover the original source model.

**Note: The attacker has no access to the low-rank decomposition of the fine-tuned models.**

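For intuition, the short sketch below illustrates this setting with synthetic matrices (the dimensions, rank, and number of models are illustrative only; this is not the recovery method itself): each of the n visible models exposes only a merged weight, i.e., the hidden Pre-FT weight plus a scaled low-rank update.

```python
import torch

# Illustrative sketch of the threat model (synthetic data, assumed sizes).
d_out, d_in, rank, n = 64, 64, 4, 5
W_pre_ft = torch.randn(d_out, d_in)  # hidden Pre-FT weight the attacker wants to recover

observed = []
for _ in range(n):
    B = torch.randn(d_out, rank)     # per-model LoRA factors, never exposed to the attacker
    A = torch.randn(rank, d_in)
    alpha = 2.0 * rank
    observed.append(W_pre_ft + (alpha / rank) * (B @ A))  # the merged weight that is published

# The attacker sees only `observed` (no A, B, and no W_pre_ft) and must estimate
# W_pre_ft from these n matrices.
```
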
## Dataset Description

The LoWRA Bench dataset is designed to evaluate the performance of Pre-FT weight recovery methods. The dataset encompasses three pre-trained representative source models:
1. A Vision Transformer (ViT) pre-trained on ImageNet-1K.
2. Mistral-7B-v0.1.
3. Stable Diffusion 1.5.

These models collectively cover supervised and self-supervised objectives, spanning both vision and natural language processing (NLP) domains, as well as generative and discriminative tasks. Notably, these models are widely used and deployed in numerous production systems.

For each source model, we curate 15 LoRA models fine-tuned on diverse datasets, tasks, and objectives. The dataset comprises a diverse array of layer types, including self-attention, cross-attention, and MLPs. This diversity enables us to assess the generalization capabilities of Pre-FT methods. The evaluation can be conducted on a per-model basis, per layer type, or per layer depth, allowing for a comprehensive analysis of Pre-FT methods. Overall, our dataset includes 544 source model layers; when the fine-tuned LoRA layers are taken into account, the dataset includes over 8,000 layers.

## Dataset Structure
The dataset contains 4 subsets; for each subset, we curate 15 LoRA fine-tuned models. Each row of the dataset represents a single layer that should be recovered and contains all the information needed for recovery and numerical evaluation. In particular, for each layer, the dataset includes the original Pre-FT weights and the *unmerged* fine-tuned LoRA weight matrices. We provide the unmerged weights instead of the merged ones for two reasons:
1. Providing the unmerged weights significantly reduces the storage size of the dataset (e.g., for a single Mistral subset this reduces the size from ~100GB to ~8GB).
2. Providing the unmerged weights allows the dataset user to study the properties of the fine-tuned LoRA layers and may help when developing new methods.

We leave the merging of the layers to the user; keep in mind that this should be done carefully and tested to ensure the original Pre-FT weights are not simply provided to the method verbatim. See [Layer Merging Example](#layer-merging-example) for an example taken from our GitHub repository.

### Data Subsets
The table below describes the dataset subsets in detail:

| Subset Name          | Pre-FT Model         | Task                          | Fine-tuning Task  | # Pre-FT Layers | # Fine-tuned Layers |
|----------------------|----------------------|-------------------------------|-------------------|-----------------|---------------------|
| vit                  | ViT                  | Image Classification          | VTAB-1K           | 24              | 360                 |
| stable-diffusion-1.5 | Stable Diffusion 1.5 | Text-to-Image <br/>Generation | Personalization   | 264             | 3960                |
| mistral-7b-v0.1-sft  | Mistral-7B-v0.1      | Text Generation               | UltraChat SFT     | 128             | 1920                |
| mistral-7b-v0.1-dpo  | Mistral-7B-v0.1      | Text Generation               | UltraFeedback DPO | 128             | 1920                |

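For example, a single subset can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch; the dataset repository ID is a placeholder that should be replaced with the actual Hub ID of this dataset:

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual LoWRA Bench dataset ID on the Hugging Face Hub.
DATASET_ID = "<lowra-bench-dataset-id>"

# Load the ViT subset (one of the four subsets listed above).
vit_subset = load_dataset(DATASET_ID, name="vit", split="train")
print(vit_subset.num_rows)      # expected to match the number of Pre-FT layers in the subset
print(vit_subset.column_names)  # task_name, layer_model, layer_name, pre_ft_*, lora_{idx}_* fields
```
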
### Data Fields
As described above, each row of the dataset represents a single layer that should be recovered and contains the following fields:

- `task_name` - The name of the task the model was fine-tuned on (the subset).
- `layer_model` - In some cases a Pre-FT model consists of more than one sub-model (e.g., Stable Diffusion fine-tunes both the UNet and the Text Encoder). This field specifies which model the layer belongs to.
- `layer_name` - The name of the layer in the Pre-FT model as it appears in the model state_dict.
- `pre_ft_name` - The name of the Pre-FT model (e.g., `runwayml/stable-diffusion-v1-5`).
- `pre_ft_weight` - The weight matrix of the Pre-FT model's layer.
- `lora_{lora_idx}_name` - The name of the LoRA fine-tuned model.
- `lora_{lora_idx}_A_weight` - The LoRA A weight matrix of the LoRA fine-tuned model's layer.
- `lora_{lora_idx}_B_weight` - The LoRA B weight matrix of the LoRA fine-tuned model's layer.
- `lora_{lora_idx}_rank` - The LoRA rank of the LoRA fine-tuned model's layer.
- `lora_{lora_idx}_alpha` - The LoRA alpha of the LoRA fine-tuned model's layer.

where `{lora_idx}` is the index of the LoRA fine-tuned model in the subset (there are 15 LoRA models per subset).

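The short sketch below (again using a placeholder dataset ID, and assuming the LoRA indices start at 0) shows how the per-LoRA fields of a single row can be inspected:

```python
from datasets import load_dataset

DATASET_ID = "<lowra-bench-dataset-id>"  # placeholder; use the actual Hub repository ID

# Return the weight columns as torch tensors and look at the first layer of the ViT subset.
layers = load_dataset(DATASET_ID, name="vit", split="train").with_format("torch")
row, lora_idx = layers[0], 0  # lora_idx: one of the 15 LoRA models (assumed 0-indexed)

print(row["layer_name"], row["pre_ft_weight"].shape)
print(row[f"lora_{lora_idx}_name"])
print(row[f"lora_{lora_idx}_A_weight"].shape, row[f"lora_{lora_idx}_B_weight"].shape)
print(row[f"lora_{lora_idx}_rank"], row[f"lora_{lora_idx}_alpha"])
```
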
### Layer Merging Example
The following code snippet demonstrates merging the LoRA fine-tuned weights with the Pre-FT weights.
```python
from copy import deepcopy

import torch
from datasets import load_dataset


def merge_lora_weights(args, layer_idx, device):
    dataset = load_dataset(args.dataset, name=args.subset, cache_dir=args.cache_dir)
    layer = deepcopy(dataset.with_format("torch")["train"][layer_idx])

    merged_layer = {}

    # Note: load the ground truth Pre-FT weights
    merged_layer['layer_model'] = layer['layer_model']
    merged_layer['layer_name'] = layer['layer_name']
    merged_layer['pre_ft_name'] = layer['pre_ft_name']
    W_pre_ft = deepcopy(layer['pre_ft_weight']).to(device).float()
    merged_layer['pre_ft_weight'] = deepcopy(W_pre_ft)

    # Note: merge the LoRA weights for all existing LoRA models
    for lora_idx in args.lora_ids:
        alpha = layer[f'lora_{lora_idx}_alpha']
        rank = layer[f'lora_{lora_idx}_rank']
        B = deepcopy(layer[f'lora_{lora_idx}_B_weight']).to(device).float()
        A = deepcopy(layer[f'lora_{lora_idx}_A_weight']).to(device).float()

        merged_layer[f'lora_{lora_idx}_name'] = layer[f'lora_{lora_idx}_name']
        merged_layer[f'lora_{lora_idx}_rank'] = rank
        merged_layer[f'lora_{lora_idx}_alpha'] = alpha
        merged_layer[f'lora_{lora_idx}_merged_weights'] = W_pre_ft + ((alpha / rank * B) @ A)

        assert torch.allclose(merged_layer['pre_ft_weight'], layer['pre_ft_weight'])
        assert not torch.allclose(merged_layer[f'lora_{lora_idx}_merged_weights'], layer['pre_ft_weight'])
        assert not torch.allclose(merged_layer[f'lora_{lora_idx}_merged_weights'], merged_layer['pre_ft_weight'])
    return merged_layer
```

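A possible way to call the function above is sketched below; the argument container and dataset ID are hypothetical stand-ins for the script's actual command-line arguments, and the LoRA indices are assumed to run from 0 to 14:

```python
from types import SimpleNamespace

# Hypothetical argument container mirroring the fields merge_lora_weights expects.
args = SimpleNamespace(
    dataset="<lowra-bench-dataset-id>",  # placeholder; use the actual Hub repository ID
    subset="vit",
    cache_dir=None,
    lora_ids=range(15),                  # assumes the 15 LoRA models per subset are indexed 0..14
)

merged = merge_lora_weights(args, layer_idx=0, device="cpu")
print(merged["layer_name"], merged["lora_0_merged_weights"].shape)
```
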
## Dataset Creation

### Source Data
- The fine-tuning of the ViT models was performed using the [PEFT](https://huggingface.co/docs/peft/en/index) library on various datasets from the [VTAB-1K](https://arxiv.org/abs/1910.04867) benchmark (a minimal configuration sketch is shown below).
- The fine-tuned LoRA models for Stable Diffusion are taken from civitai and were fine-tuned by [RalFinger](https://civitai.com/user/RalFinger).
- The fine-tuning of Mistral was performed based on the Zephyr recipe, as seen [here](https://github.com/huggingface/alignment-handbook/tree/main).

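As a rough illustration of this kind of setup (not the exact recipe used for the benchmark), a LoRA adapter can be attached to a ViT classifier with PEFT roughly as follows; the checkpoint, target modules, rank, and label count below are assumptions made for the sketch:

```python
from transformers import ViTForImageClassification
from peft import LoraConfig, get_peft_model

# Illustrative settings only; not the checkpoint or hyper-parameters used for LoWRA Bench.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",   # assumed ImageNet-1K ViT checkpoint
    num_labels=10,                   # hypothetical downstream label count (e.g., a VTAB-1K task)
    ignore_mismatched_sizes=True,    # re-initialize the classification head for the new task
)
lora_config = LoraConfig(
    r=16,                               # LoRA rank (assumed)
    lora_alpha=16,                      # LoRA alpha (assumed)
    target_modules=["query", "value"],  # attention projections adapted by LoRA (assumed)
    lora_dropout=0.1,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA A/B matrices (and new head) are trainable
```
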
For the full list of models and hyper-parameters see the appendix of the [paper](http://arxiv.org/abs/).

## Risks and Out-of-Scope Use
Our work uncovers a significant vulnerability in fine-tuned models, allowing attackers to access pre-fine-tuning weights. While this discovery reveals potential security risks, our primary objective is to advance the field of Machine Learning and raise awareness within the research community about the existing vulnerabilities in current models.

Instead of using the findings of this study to execute attacks, we advocate for their use by model creators to enhance the safety and security of their models. By acknowledging and addressing vulnerabilities, creators can proactively safeguard against potential threats.

Following established practices in the cyber-security community, we emphasize the importance of open discussion and encourage the reporting of vulnerabilities. By fostering transparency and collaboration, we can collectively create a safer environment for deploying machine learning models.

## Considerations for Using the Data

### Licensing Information
[More Information Needed]

### Citation Information
If you use this dataset in your work, please cite the following paper:

**BibTeX:**

[More Information Needed]