nielsr (HF Staff) committed · verified
Commit 906b10a · Parent(s): d8e11b0

Add metadata: license, pipeline tag, library name, and link to paper and GitHub repo


This PR adds metadata to the model card, including the license, pipeline tag, and library name. It also links to the original paper and the official GitHub repository.

Files changed (1): README.md (+13, -244)

README.md CHANGED
@@ -1,11 +1,21 @@
+---
+license: mit
+library_name: transformers
+pipeline_tag: image-classification
+---
+
 # SPG: Sequential Policy Gradient for Adaptive Hyperparameter Optimization
 
+This repository contains the models described in the paper [Sequential Policy Gradient for Adaptive Hyperparameter Optimization](https://huggingface.co/papers/2506.15051).
+
+[Project page](https://huggingface.co/UniversalAlgorithmic/SPG)
+[GitHub repository](https://github.com/SafeAILab/EAGLE)
+
 > 🚀 If you're using Jupyter or Colab, you can follow the demo and run it on a single GPU:
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/UniversalAlgorithmic/SPG/blob/main/demo_nas.ipynb)
 
 ## Model Zoo: Adaptive Hyperparameter Optimization (HPO) via SPG Algorithm
 
-
 `Table 1: Performance of pre-trained vs. SPG-retrained models on ImageNet-1K`
 | Model | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights | Command to reproduce |
 |-------|------|----------|-----------|-----------|---------|----------------------|
@@ -18,8 +28,6 @@
 | ViT-B16 | ❌ | 86.6 M | 81.072 | 95.318 | <a href='https://download.pytorch.org/models/vit_b_16-c867db91.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#vit_b_16'>Recipe</a> |
 | ViT-B16 | ✅ | 86.6 M | 81.092 | 95.304 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/vit_b_16/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/vit_b_16-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
 
-
-
 `Table 2: Performance of pre-trained vs. SPG-retrained models. All models are evaluated on a subset of COCO val2017, covering the 21/20 categories present in the Pascal VOC dataset.`
 
 > ⚠️ `All models reported on TorchVision (with weights COCO_WITH_VOC_LABELS_V1) were benchmarked using only 20 categories. Researchers should first download the pre-trained model from TorchVision and re-evaluate it under the 21-category (including "background") framework.`
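Concretely, that re-evaluation uses the segmentation evaluation command given later in this README; for example, for FCN-ResNet50 (a sketch; adjust `--data-path` for your setup):

```bash
cd examples/semantic-segmentation
torchrun --nproc_per_node=1 train.py \
  --workers 4 --dataset coco --data-path /path/to/coco/ \
  --model fcn_resnet50 --aux-loss --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1 \
  --test-only
```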
@@ -35,7 +43,6 @@
 | DeepLabV3-ResNet101 | ❌ | 61.0 M | 65.3/67.4 | 91.7/92.4 | <a href='https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#deeplabv3_resnet101'>Recipe</a> |
 | DeepLabV3-ResNet101 | ✅ | 61.0 M | 65.7/67.8 | 91.8/92.5 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/deeplabv3_resnet101/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/deeplabv3_resnet101-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
 
-
 `Table 3: Performance comparison of fine-tuned vs. SPG-retrained models across NLP and speech benchmarks.`
 - GLUE (Text classification: BERT on CoLA, SST-2, MRPC, QQP, QNLI, and RTE tasks)
 - SQuAD (Question answering: BERT)
@@ -60,7 +67,6 @@
 | AC† | ❌ | Accuracy | 98.26 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-audio_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu'>Recipe</a> |
 | AC† | ✅ | Accuracy | 98.31 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/audio-classification/ac'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/AC-yellow'></a> | [examples/audio-classification/run.sh](#transfer-learning-on-superb) |
 
-
 ## Model Zoo: Neural Architecture Search (NAS) via SPG Algorithm
 
 `Table 4: Performance of pre-trained vs. SPG-retrained models on ImageNet-1K`
@@ -71,7 +77,6 @@ Depending on the base model, we explore the following architectures:
 
 > ⚠️ `Our SPG differs from most NAS algorithms, which typically use a gating network for architecture selection. In contrast, we employ neither a gating network nor a proxy network. Instead, after policy optimization, we keep only the base architecture (ResNet-18, ResNet-34, and ResNet-50) and remove all others (ResNet-27/36/45, ResNet-40/46/52, and ResNet-53/56/59).`
 
-
 | Model | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights | Command to reproduce |
 |-------|------|----------|-----------|-----------|---------|----------------------|
 | ResNet-18 | ❌ | 11.7 M | 69.758 | 89.078 | <a href='https://download.pytorch.org/models/resnet18-f37072fd.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
@@ -81,7 +86,6 @@
 | ResNet-50 | ❌ | 25.6 M | 76.130 | 92.862 | <a href='https://download.pytorch.org/models/resnet50-0676ba61.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
 | ResNet-50 | ✅ | 25.6 M | 77.234 | 93.322 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/neural-archicture-search/resnet50/model_9.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet50-yellow'></a> | [examples/neural-architecture-search/run.sh](#neural-architecture-search-for-resnet-on-imagenet-1k) |
 
-
 ## Requirements
 
 1. Install `torch>=2.0.0+cu118`.
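For reference, a matching build can typically be pulled from the PyTorch CUDA 11.8 wheel index; a minimal install sketch, assuming pip and a CUDA 11.8-compatible driver (torchvision is included because the example scripts rely on it):

```bash
# Install torch and torchvision from the cu118 wheel index
pip install "torch>=2.0.0" torchvision --index-url https://download.pytorch.org/whl/cu118
```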
@@ -167,239 +171,4 @@ cd ./examples/semantic-segmentation
 # FCN-ResNet50
 torchrun --nproc_per_node=4 train.py\
 --workers 4 --dataset coco --data-path /path/to/coco/\
- --model fcn_resnet50 --aux-loss --output-dir fcn_resnet50 --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
- --epochs 5 --batch-size 16 --lr 0.0002 --print-freq 100\
- --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
- --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
-
- # FCN-ResNet101
- torchrun --nproc_per_node=4 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model fcn_resnet101 --aux-loss --output-dir fcn_resnet101 --weights FCN_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
- --epochs 5 --batch-size 12 --lr 0.0002 --print-freq 100\
- --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
- --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
-
- # DeepLabV3-ResNet50
- torchrun --nproc_per_node=4 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model deeplabv3_resnet50 --aux-loss --output-dir deeplabv3_resnet50 --weights DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
- --epochs 5 --batch-size 16 --lr 0.0002 --print-freq 100\
- --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
- --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
-
- # DeepLabV3-ResNet101
- torchrun --nproc_per_node=4 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model deeplabv3_resnet101 --aux-loss --output-dir deeplabv3_resnet101 --weights DeepLabV3_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
- --epochs 5 --batch-size 12 --lr 0.0002 --print-freq 100\
- --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
- --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
- ```
- </details>
-
- ### Transfer learning on GLUE
- We use recipes similar to those in [HuggingFace Transformers' Examples](https://github.com/huggingface/transformers/blob/main/examples/pytorch/README.md) to retrain BERT with our SPG on the GLUE benchmark. The following command can be used:
-
- ```bash
- cd ./examples/text-classification && bash run.sh
- ```
-
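The contents of `run.sh` are not shown here; purely as an illustration, a single-task invocation would presumably wrap Transformers' `run_glue.py` and append the repo's TRP flags used elsewhere in this README (the task name and hyperparameters below are hypothetical):

```bash
cd ./examples/text-classification
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir ./mrpc \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
```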
- ### Transfer learning on SQuAD
- We use recipes similar to those in [HuggingFace Transformers' Examples](https://github.com/huggingface/transformers/blob/main/examples/pytorch/README.md) to retrain BERT with our SPG on the SQuAD dataset. The following command can be used:
-
- ```bash
- cd ./examples/question-answering
- CUDA_VISIBLE_DEVICES=0 python run_qa.py \
- --model_name_or_path google-bert/bert-base-uncased \
- --dataset_name squad \
- --do_train \
- --do_eval \
- --per_device_train_batch_size 12 \
- --learning_rate 3e-5 \
- --num_train_epochs 2 \
- --max_seq_length 384 \
- --doc_stride 128 \
- --output_dir ./baseline \
- --overwrite_output_dir \
- --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
- ```
-
- ### Transfer learning on SUPERB
- We use recipes similar to those in [HuggingFace Transformers' Examples](https://github.com/huggingface/transformers/blob/main/examples/pytorch/README.md) to retrain Wav2Vec2 with our SPG on the SUPERB benchmark. The following command can be used:
-
- ```bash
- cd ./examples/audio-classification
- CUDA_VISIBLE_DEVICES=0 python run_audio_classification.py \
- --model_name_or_path facebook/wav2vec2-base \
- --dataset_name superb \
- --dataset_config_name ks \
- --trust_remote_code \
- --output_dir wav2vec2-base-ft-keyword-spotting \
- --overwrite_output_dir \
- --remove_unused_columns False \
- --do_train \
- --do_eval \
- --fp16 \
- --learning_rate 3e-5 \
- --max_length_seconds 1 \
- --attention_mask False \
- --warmup_ratio 0.1 \
- --num_train_epochs 8 \
- --per_device_train_batch_size 64 \
- --gradient_accumulation_steps 4 \
- --per_device_eval_batch_size 32 \
- --dataloader_num_workers 4 \
- --logging_strategy steps \
- --logging_steps 10 \
- --eval_strategy epoch \
- --save_strategy epoch \
- --load_best_model_at_end True \
- --metric_for_best_model accuracy \
- --save_total_limit 3 \
- --seed 0 \
- --push_to_hub \
- --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
- ```
-
- ### Neural Architecture Search for ResNet on ImageNet-1K
- We conduct Neural Architecture Search (NAS) for the ResNet architecture on the ImageNet dataset. The following commands can be used:
-
- ```bash
- cd ./examples/neural-architecture-search
-
- # During Neural Architecture Search (NAS), we explore ResNet-18, ResNet-27, ResNet-36, and ResNet-45. After retraining with the SPG algorithm, we retain only ResNet-18 and discard the others.
- torchrun --nproc_per_node=4 train.py\
- --data-path /home/cs/Documents/datasets/imagenet\
- --model resnet18 --output-dir resnet18 --weights ResNet18_Weights.IMAGENET1K_V1\
- --batch-size 128 --epochs 10 --lr 0.0004 --lr-step-size 2 --lr-gamma 0.5\
- --lr-warmup-method constant --lr-warmup-epochs 1 --lr-warmup-decay 0.\
- --apply-trp --trp-depths 3 3 3 --trp-planes 256 --trp-lambdas 0.4 0.2 0.1 --print-freq 100
-
- # During Neural Architecture Search (NAS), we explore ResNet-34, ResNet-40, ResNet-46, and ResNet-52. After retraining with the SPG algorithm, we retain only ResNet-34 and discard the others.
- torchrun --nproc_per_node=4 train.py\
- --data-path /home/cs/Documents/datasets/imagenet\
- --model resnet34 --output-dir resnet34 --weights ResNet34_Weights.IMAGENET1K_V1\
- --batch-size 96 --epochs 10 --lr 0.0004 --lr-step-size 2 --lr-gamma 0.5\
- --lr-warmup-method constant --lr-warmup-epochs 1 --lr-warmup-decay 0.\
- --apply-trp --trp-depths 2 2 2 --trp-planes 256 --trp-lambdas 0.4 0.2 0.1 --print-freq 100
-
- # During Neural Architecture Search (NAS), we explore ResNet-50, ResNet-53, ResNet-56, and ResNet-59. After retraining with the SPG algorithm, we retain only ResNet-50 and discard the others.
- torchrun --nproc_per_node=4 train.py\
- --data-path /home/cs/Documents/datasets/imagenet\
- --model resnet50 --output-dir resnet50 --weights ResNet50_Weights.IMAGENET1K_V1\
- --batch-size 64 --epochs 10 --lr 0.0004 --lr-step-size 2 --lr-gamma 0.5\
- --lr-warmup-method constant --lr-warmup-epochs 1 --lr-warmup-decay 0.\
- --apply-trp --trp-depths 1 1 1 --trp-planes 1024 --trp-lambdas 0.4 0.2 0.1 --print-freq 100
- ```
-
- ## Evaluation
-
- To evaluate our models on ImageNet, run:
-
- ```bash
- cd examples/image-classification
-
- # Required: Download our MobileNet-V2 weights to examples/image-classification/mobilenet_v2
- torchrun --nproc_per_node=4 train.py\
- --data-path /path/to/imagenet/\
- --model mobilenet_v2 --resume mobilenet_v2/model_32.pth --test-only
-
- # Required: Download our ResNet-50 weights to examples/image-classification/resnet50
- torchrun --nproc_per_node=4 train.py\
- --data-path /path/to/imagenet/\
- --model resnet50 --resume resnet50/model_35.pth --test-only
-
- # Required: Download our EfficientNet-V2 M weights to examples/image-classification/efficientnet_v2_m
- torchrun --nproc_per_node=4 train.py\
- --data-path /path/to/imagenet/\
- --model efficientnet_v2_m --resume efficientnet_v2_m/model_7.pth --test-only\
- --val-crop-size 480 --val-resize-size 480
-
- # Required: Download our ViT-B-16 weights to examples/image-classification/vit_b_16
- torchrun --nproc_per_node=4 train.py\
- --data-path /path/to/imagenet/\
- --model vit_b_16 --resume vit_b_16/model_4.pth --test-only
- ```
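The `# Required` download steps can be scripted; as a sketch, fetching the SPG ViT-B16 checkpoint from Table 1 into the folder the command above expects (wget assumed available; the other checkpoints follow the same URL pattern):

```bash
cd examples/image-classification
mkdir -p vit_b_16
wget -O vit_b_16/model_4.pth \
  https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/vit_b_16/model_4.pth
```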
-
- To evaluate our models on COCO, run:
-
- ```bash
- cd examples/semantic-segmentation
-
- # eval baselines
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model fcn_resnet50 --aux-loss --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
- --test-only
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model fcn_resnet101 --aux-loss --weights FCN_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
- --test-only
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model deeplabv3_resnet50 --aux-loss --weights DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
- --test-only
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model deeplabv3_resnet101 --aux-loss --weights DeepLabV3_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
- --test-only
-
- # eval our models
- # Required: Download our FCN-ResNet50 weights to examples/semantic-segmentation/fcn_resnet50
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model fcn_resnet50 --aux-loss --resume fcn_resnet50/model_4.pth\
- --test-only
-
- # Required: Download our FCN-ResNet101 weights to examples/semantic-segmentation/fcn_resnet101
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model fcn_resnet101 --aux-loss --resume fcn_resnet101/model_4.pth\
- --test-only
-
- # Required: Download our DeepLabV3-ResNet50 weights to examples/semantic-segmentation/deeplabv3_resnet50
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model deeplabv3_resnet50 --aux-loss --resume deeplabv3_resnet50/model_4.pth\
- --test-only
-
- # Required: Download our DeepLabV3-ResNet101 weights to examples/semantic-segmentation/deeplabv3_resnet101
- torchrun --nproc_per_node=1 train.py\
- --workers 4 --dataset coco --data-path /path/to/coco/\
- --model deeplabv3_resnet101 --aux-loss --resume deeplabv3_resnet101/model_4.pth\
- --test-only
- ```
-
- To evaluate our models on GLUE, SQuAD, and SUPERB, re-run the transfer-learning commands listed above; those commands perform both training and evaluation.
-
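For example, an evaluation-only pass on SQuAD can reuse `run_qa.py` with `--do_train` dropped; a sketch, assuming the fine-tuned checkpoint was saved to `./baseline` by the training command above:

```bash
cd ./examples/question-answering
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
  --model_name_or_path ./baseline \
  --dataset_name squad \
  --do_eval \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./baseline
```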
-
- For Neural Architecture Search, run the following commands to evaluate our SPG-trained ResNet models:
- ```bash
- cd ./examples/neural-architecture-search
-
- # Required: Download our ResNet-18 weights to examples/neural-architecture-search/resnet18
- torchrun --nproc_per_node=4 train.py\
- --data-path /home/cs/Documents/datasets/imagenet\
- --model resnet18 --resume resnet18/model_3.pth --test-only
-
- # Required: Download our ResNet-34 weights to examples/neural-architecture-search/resnet34
- torchrun --nproc_per_node=4 train.py\
- --data-path /home/cs/Documents/datasets/imagenet\
- --model resnet34 --resume resnet34/model_8.pth --test-only
-
- # Required: Download our ResNet-50 weights to examples/neural-architecture-search/resnet50
- torchrun --nproc_per_node=4 train.py\
- --data-path /home/cs/Documents/datasets/imagenet\
- --model resnet50 --resume resnet50/model_9.pth --test-only
- ```
-
- ## License
- This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
+ --model fcn_resnet50 --aux-loss --output-dir fcn_resnet50 --weights FCN_ResNet50