# SPG: Sequential Policy Gradient for Adaptive Hyperparameter Optimization

## Model zoo
We provide baseline models and SPG-trained models, all available for download at the following links:

`Table 1: Model comparison on the ImageNet-1K dataset.`
| Model | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights |
|-------|------|----------|-----------|-----------|---------|
| MobileNet-V2 | ❌ | 3.5 M | 71.878 | 90.286 | <a href='https://download.pytorch.org/models/mobilenet_v2-b0353104.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| MobileNet-V2 | ✅ | 3.5 M | 72.104 | 90.316 | <a href='https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/mobilenet_v2-yellow'></a> |
| ResNet-50 | ❌ | 25.6 M | 76.130 | 92.862 | <a href='https://download.pytorch.org/models/resnet50-0676ba61.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| ResNet-50 | ✅ | 25.6 M | 77.234 | 93.322 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/resnet50/model_35.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet50-yellow'></a> |
| EfficientNet-V2-M | ❌ | 54.1 M | 85.112 | 97.156 | <a href='https://download.pytorch.org/models/efficientnet_v2_m-dc08266a.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| EfficientNet-V2-M | ✅ | 54.1 M | 85.218 | 97.208 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/efficientnet_v2_m/model_7.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/efficientnet_v2_m-yellow'></a> |
| ViT-B16 | ❌ | 86.6 M | 81.072 | 95.318 | <a href='https://download.pytorch.org/models/vit_b_16-c867db91.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| ViT-B16 | ✅ | 86.6 M | 81.092 | 95.304 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/vit_b_16/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/vit_b_16-yellow'></a> |
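
As an example, the SPG-trained ResNet-50 checkpoint from Table 1 can be fetched with `wget` into the directory layout expected by the evaluation commands further below; this is only a minimal download sketch, and the same pattern applies to the other checkpoints listed above:

```bash
# Sketch: download the SPG ResNet-50 checkpoint from Table 1 into the folder
# expected by the evaluation commands (image-classification/resnet50/).
mkdir -p image-classification/resnet50
wget -P image-classification/resnet50 \
  https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/resnet50/model_35.pth
```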

`Table 2: All models are evaluated on a subset of COCO val2017, restricted to the 21 categories (including "background") that are present in the Pascal VOC dataset.`

`All models reported by TorchVision (with the COCO_WITH_VOC_LABELS_V1 weights) were benchmarked using only 20 categories. Researchers should first download the pre-trained models from TorchVision and re-evaluate them under the 21-category setting.`

| Model | SPG | # Params | mIoU (%) | Pixelwise Acc (%) | Weights |
|---------------------|-----|----------|------------|---------------------|---------|
| FCN-ResNet50 | ❌ | 35.3 M | 58.9 | 90.9 | <a href='https://download.pytorch.org/models/fcn_resnet50_coco-1167a1af.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| FCN-ResNet50 | ✅ | 35.3 M | 59.4 | 90.9 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/fcn_resnet50/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/fcn_resnet50-yellow'></a> |
| FCN-ResNet101 | ❌ | 54.3 M | 62.2 | 91.1 | <a href='https://download.pytorch.org/models/fcn_resnet101_coco-7ecb50ca.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| FCN-ResNet101 | ✅ | 54.3 M | 62.4 | 91.1 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/fcn_resnet101/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/fcn_resnet101-yellow'></a> |
| DeepLabV3-ResNet50 | ❌ | 42.0 M | 63.8 | 91.5 | <a href='https://download.pytorch.org/models/deeplabv3_resnet50_coco-cd0a2569.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| DeepLabV3-ResNet50 | ✅ | 42.0 M | 64.2 | 91.6 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/deeplabv3_resnet50/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/deeplabv3_resnet50-yellow'></a> |
| DeepLabV3-ResNet101 | ❌ | 61.0 M | 65.3 | 91.7 | <a href='https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> |
| DeepLabV3-ResNet101 | ✅ | 61.0 M | 65.7 | 91.8 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/deeplabv3_resnet101/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/deeplabv3_resnet101-yellow'></a> |
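
The 21-category re-evaluation of a TorchVision baseline can be reproduced with the segmentation evaluation command shown later in this README; for instance, for FCN-ResNet50:

```bash
# Re-evaluate the TorchVision FCN-ResNet50 baseline (COCO_WITH_VOC_LABELS_V1)
# under the 21-category protocol; this mirrors the Evaluation section below.
cd semantic-segmentation
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model fcn_resnet50 --aux-loss --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
--test-only
```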

`Table 3: Performance of models for transfer learning trained with fine-tuning (FT) vs. SPG.`
| Task | SPG | Metric Type | Performance (%) | Weights |
|-------|------|------------------|-----------------|---------|
| CoLA | ❌ | Matthews corr. | 56.53 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> |
| CoLA | ✅ | Matthews corr. | 62.13 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/cola'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/CoLA-yellow'></a> |
| SST-2 | ❌ | Accuracy | 92.32 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> |
| SST-2 | ✅ | Accuracy | 92.54 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/sst2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/SST2-yellow'></a> |
| MRPC | ❌ | F1/Accuracy | 88.85/84.09 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> |
| MRPC | ✅ | F1/Accuracy | 91.10/87.25 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/mrpc'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/MRPC-yellow'></a> |
| QQP | ❌ | F1/Accuracy | 87.49/90.71 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> |
| QQP | ✅ | F1/Accuracy | 89.72/90.88 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/qqp'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QQP-yellow'></a> |
| QNLI | ❌ | Accuracy | 90.66 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> |
| QNLI | ✅ | Accuracy | 91.10 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/qnli'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QNLI-yellow'></a> |
| RTE | ❌ | Accuracy | 65.70 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> |
| RTE | ✅ | Accuracy | 72.56 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/rte'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/RTE-yellow'></a> |
| Q/A* | ❌ | F1/Exact match | 88.52/81.22 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-question_answering-yellow'></a> |
| Q/A* | ✅ | F1/Exact match | 88.67/81.51 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/qa'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QA-yellow'></a> |
| AC† | ❌ | Accuracy | 98.26 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-audio_classification-yellow'></a> |
| AC† | ✅ | Accuracy | 98.31 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/ac'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/AC-yellow'></a> |

`*Q/A: question answering on SQuAD. †AC: audio classification on the SUPERB Keyword Spotting subset.`
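
All SPG checkpoints above live in the `UniversalAlgorithmic/SPG` repository on the Hugging Face Hub; one way to grab them in bulk (a sketch, assuming `git-lfs` is installed) is to clone the repository:

```bash
# Sketch: clone the full SPG model repository; git-lfs is needed for the .pth files.
git lfs install
git clone https://huggingface.co/UniversalAlgorithmic/SPG
ls SPG/cola SPG/resnet50   # per-task / per-model checkpoint folders from the tables above
```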

## Requirements

1. Install `torch>=2.0.0+cu118` (a sample install command is sketched after this list).
2. Install the remaining pip packages:
```setup
pip install -r requirements.txt
```
3. Prepare the [ImageNet](http://image-net.org/) dataset manually and place it in `/path/to/imagenet`. For the image classification examples, pass the argument `--data-path=/path/to/imagenet` to the training script. The extracted dataset directory should follow this structure:
```setup
/path/to/imagenet/:
    train/:
        n01440764:
            n01440764_18.JPEG ...
        n01443537:
            n01443537_2.JPEG ...
    val/:
        n01440764:
            ILSVRC2012_val_00000293.JPEG ...
        n01443537:
            ILSVRC2012_val_00000236.JPEG ...
```
4. Prepare the [MS-COCO 2017](https://cocodataset.org/#home) dataset manually and place it in `/path/to/coco`. For the semantic segmentation examples, pass the argument `--data-path=/path/to/coco` to the training script. The extracted dataset directory should follow this structure:
```setup
/path/to/coco/:
    annotations:
        many_json_files.json ...
    train2017:
        000000000009.jpg ...
    val2017:
        000000000139.jpg ...
```
5. For the [🗣️ Keyword Spotting subset](https://huggingface.co/datasets/s3prl/superb#ks), [Common Language](https://huggingface.co/datasets/speechbrain/common_language), [SQuAD](https://huggingface.co/datasets/rajpurkar/squad), [Common Voice](https://huggingface.co/datasets/legacy-datasets/common_voice), [GLUE](https://gluebenchmark.com/) and [WMT](https://huggingface.co/datasets/wmt/wmt17) datasets, manual downloading is not required; they are loaded automatically via the Hugging Face Datasets library when running our `audio-classification`, `question-answering`, `speech-recognition`, `text-classification`, or `translation` examples.
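
A possible environment setup is sketched below. The CUDA 11.8 wheel index is the standard PyTorch one, and the final line is only an illustrative check that the auto-downloaded datasets resolve; neither is part of the official recipe.

```bash
# Sketch: install a CUDA 11.8 build of torch from the official PyTorch wheel index,
# install the project requirements, and verify that Hugging Face Datasets can fetch GLUE.
pip install "torch>=2.0.0" --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
python -c "from datasets import load_dataset; print(load_dataset('glue', 'cola'))"
```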

## Training

### Model retraining
We use recipes similar to those in [PyTorch Vision's classification reference](https://github.com/pytorch/vision/blob/main/references/classification/README.md) to retrain MobileNet-V2, ResNet, EfficientNet-V2, and ViT with SPG on ImageNet. You can run the following commands:

```train
cd image-classification

# MobileNet-V2
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model mobilenet_v2 --output-dir mobilenet_v2 --weights MobileNet_V2_Weights.IMAGENET1K_V1\
--batch-size 192 --epochs 40 --lr 0.0004 --lr-step-size 10 --lr-gamma 0.5 --wd 0.00004 --apply-trp --trp-depths 1 --trp-p 0.15 --trp-lambdas 0.4 0.2 0.1

# ResNet-50
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model resnet50 --output-dir resnet50 --weights ResNet50_Weights.IMAGENET1K_V1\
--batch-size 64 --epochs 40 --lr 0.0004 --lr-step-size 10 --lr-gamma 0.5 --print-freq 100\
--apply-trp --trp-depths 1 --trp-p 0.2 --trp-lambdas 0.4 0.2 0.1

# EfficientNet-V2-M
torchrun --nproc_per_node=4 train.py \
--data-path /path/to/imagenet/\
--model efficientnet_v2_m --output-dir efficientnet_v2_m --weights EfficientNet_V2_M_Weights.IMAGENET1K_V1\
--epochs 10 --batch-size 64 --lr 5e-9 --lr-scheduler cosineannealinglr --weight-decay 0.00002 \
--lr-warmup-method constant --lr-warmup-epochs 8 --lr-warmup-decay 0. \
--auto-augment ta_wide --random-erase 0.1 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 --norm-weight-decay 0.0 \
--train-crop-size 384 --val-crop-size 480 --val-resize-size 480 --ra-sampler --ra-reps 4 --print-freq 100\
--apply-trp --trp-depths 1 --trp-p 0.2 --trp-lambdas 0.4 0.2 0.1

# ViT-B-16
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model vit_b_16 --output-dir vit_b_16 --weights ViT_B_16_Weights.IMAGENET1K_V1\
--epochs 5 --batch-size 196 --opt adamw --lr 5e-9 --lr-scheduler cosineannealinglr --wd 0.3\
--lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
--amp --label-smoothing 0.11 --mixup-alpha 0.2 --auto-augment ra --clip-grad-norm 1 --cutmix-alpha 1.0\
--apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1 --print-freq 100
```
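
The recipes above assume 4 GPUs (`--nproc_per_node=4`). If you run on a different number of GPUs, one reasonable adjustment (our suggestion, not part of the original recipe) is to keep the effective batch size, i.e. per-GPU batch size × number of processes, unchanged. For example, MobileNet-V2 on 8 GPUs:

```bash
# Sketch: same MobileNet-V2 recipe on 8 GPUs, halving the per-GPU batch so the
# effective batch size (8 x 96 = 768) matches the 4-GPU setting (4 x 192).
torchrun --nproc_per_node=8 train.py\
--data-path /path/to/imagenet/\
--model mobilenet_v2 --output-dir mobilenet_v2 --weights MobileNet_V2_Weights.IMAGENET1K_V1\
--batch-size 96 --epochs 40 --lr 0.0004 --lr-step-size 10 --lr-gamma 0.5 --wd 0.00004 --apply-trp --trp-depths 1 --trp-p 0.15 --trp-lambdas 0.4 0.2 0.1
```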

We use recipes similar to those in [PyTorch Vision's segmentation reference](https://github.com/pytorch/vision/blob/main/references/segmentation/README.md) to retrain FCN and DeepLabV3 with SPG on the COCO dataset. You can run the following commands:

```train
cd semantic-segmentation

# FCN-ResNet50
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model fcn_resnet50 --aux-loss --output-dir fcn_resnet50 --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
--epochs 5 --batch-size 16 --lr 0.0002 --print-freq 100\
--lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
--apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# FCN-ResNet101
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model fcn_resnet101 --aux-loss --output-dir fcn_resnet101 --weights FCN_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
--epochs 5 --batch-size 12 --lr 0.0002 --print-freq 100\
--lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
--apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# DeepLabV3-ResNet50
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model deeplabv3_resnet50 --aux-loss --output-dir deeplabv3_resnet50 --weights DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
--epochs 5 --batch-size 16 --lr 0.0002 --print-freq 100\
--lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
--apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# DeepLabV3-ResNet101
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model deeplabv3_resnet101 --aux-loss --output-dir deeplabv3_resnet101 --weights DeepLabV3_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
--epochs 5 --batch-size 12 --lr 0.0002 --print-freq 100\
--lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
--apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
```

### Transfer learning
We use recipes similar to those in [HuggingFace Transformers' examples](https://github.com/huggingface/transformers/blob/main/examples/pytorch/README.md) to retrain BERT and Wav2Vec2 with SPG on the GLUE benchmark, the SQuAD dataset, and the SUPERB benchmark. You can run the following commands:

```train
cd text-classification

# Task: CoLA
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "cola" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2.5e-5 \
  --num_train_epochs 6 \
  --output_dir "cola" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: SST-2
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "sst2" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 64 \
  --learning_rate 3e-5 \
  --num_train_epochs 5 \
  --output_dir "sst2" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: MRPC
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "mrpc" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 4 \
  --output_dir "mrpc" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: QQP
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "qqp" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 1e-5 \
  --num_train_epochs 10 \
  --output_dir "qqp" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: QNLI
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "qnli" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --output_dir "qnli" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: RTE
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "rte" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 5e-5 \
  --num_train_epochs 5 \
  --output_dir "rte" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1


# Task: audio classification
cd ../audio-classification
CUDA_VISIBLE_DEVICES=0 python run_audio_classification.py \
  --model_name_or_path facebook/wav2vec2-base \
  --dataset_name superb \
  --dataset_config_name ks \
  --trust_remote_code \
  --output_dir wav2vec2-base-ft-keyword-spotting \
  --overwrite_output_dir \
  --remove_unused_columns False \
  --do_train \
  --do_eval \
  --fp16 \
  --learning_rate 3e-5 \
  --max_length_seconds 1 \
  --attention_mask False \
  --warmup_ratio 0.1 \
  --num_train_epochs 8 \
  --per_device_train_batch_size 64 \
  --gradient_accumulation_steps 4 \
  --per_device_eval_batch_size 32 \
  --dataloader_num_workers 4 \
  --logging_strategy steps \
  --logging_steps 10 \
  --eval_strategy epoch \
  --save_strategy epoch \
  --load_best_model_at_end True \
  --metric_for_best_model accuracy \
  --save_total_limit 3 \
  --seed 0 \
  --push_to_hub \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1


# Task: question answering
cd ../question-answering
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./baseline \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
```
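
Note that the audio-classification recipe above passes `--push_to_hub`, which uploads checkpoints to the Hugging Face Hub and therefore needs an authenticated session; either log in once before launching, or simply drop the flag to keep results local:

```bash
# Authenticate with the Hub so --push_to_hub can upload checkpoints.
huggingface-cli login
```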

### Neural Architecture Search
We conduct Neural Architecture Search (NAS) on the ResNet architecture using the ImageNet dataset. You can run the following command:

```train
cd neural-architecture-search

torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model resnet18 --output-dir resnet18 --weights ResNet18_Weights.IMAGENET1K_V1\
--batch-size 64 --epochs 10 --lr 0.0004 --lr-step-size 2 --lr-gamma 0.5\
--lr-warmup-method constant --lr-warmup-epochs 1 --lr-warmup-decay 0. \
--apply-trp --trp-lambdas 0.1 0.01 --print-freq 100
```

## Evaluation

To evaluate our models on ImageNet, run:

```eval
cd image-classification

# Required: Download our MobileNet-V2 weights to /path/to/image-classification/mobilenet_v2
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model mobilenet_v2 --resume mobilenet_v2/model_32.pth --test-only

# Required: Download our ResNet-50 weights to /path/to/image-classification/resnet50
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model resnet50 --resume resnet50/model_35.pth --test-only

# Required: Download our EfficientNet-V2-M weights to /path/to/image-classification/efficientnet_v2_m
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model efficientnet_v2_m --resume efficientnet_v2_m/model_7.pth --test-only\
--val-crop-size 480 --val-resize-size 480

# Required: Download our ViT-B-16 weights to /path/to/image-classification/vit_b_16
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model vit_b_16 --resume vit_b_16/model_4.pth --test-only
```

To evaluate our models on COCO, run:

```eval
cd semantic-segmentation

# eval baselines
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model fcn_resnet50 --aux-loss --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
--test-only
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model fcn_resnet101 --aux-loss --weights FCN_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
--test-only
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model deeplabv3_resnet50 --aux-loss --weights DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
--test-only
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model deeplabv3_resnet101 --aux-loss --weights DeepLabV3_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
--test-only


# eval our models
# Required: Download our FCN-ResNet50 weights to /path/to/semantic-segmentation/fcn_resnet50
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model fcn_resnet50 --aux-loss --resume fcn_resnet50/model_4.pth\
--test-only

# Required: Download our FCN-ResNet101 weights to /path/to/semantic-segmentation/fcn_resnet101
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model fcn_resnet101 --aux-loss --resume fcn_resnet101/model_4.pth\
--test-only

# Required: Download our DeepLabV3-ResNet50 weights to /path/to/semantic-segmentation/deeplabv3_resnet50
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model deeplabv3_resnet50 --aux-loss --resume deeplabv3_resnet50/model_4.pth\
--test-only

# Required: Download our DeepLabV3-ResNet101 weights to /path/to/semantic-segmentation/deeplabv3_resnet101
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
--model deeplabv3_resnet101 --aux-loss --resume deeplabv3_resnet101/model_4.pth\
--test-only
```

To evaluate our models on GLUE, SQuAD, and SUPERB, re-run the transfer-learning commands above: they perform both training and evaluation (`--do_train --do_eval`). An evaluation-only variant is sketched below.
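
For instance, a minimal evaluation-only sketch for CoLA, assuming the SPG CoLA checkpoint directory from Table 3 has been downloaded locally (for example via the `git clone` sketch in the Model zoo section) and is reachable at `../SPG/cola`:

```bash
# Sketch: evaluation-only run of run_glue.py against a locally downloaded SPG
# CoLA checkpoint directory (the path is an assumption; adjust to your layout).
cd text-classification
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path ../SPG/cola \
  --task_name "cola" \
  --do_eval \
  --max_seq_length 128 \
  --output_dir "cola-eval"
```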

For Neural Architecture Search, run the following command to evaluate our SPG-trained ResNet-18 model:

```eval
cd neural-architecture-search

# Required: Download our ResNet-18 weights to /path/to/neural-architecture-search/resnet18
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model resnet18 --resume resnet18/model_8.pth --test-only
```

## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.