
SPG: Sequential Policy Gradient for Adaptive Hyperparameter Optimization

Model zoo

We provide baseline models and SPG-trained models, all available for download at the following links:

Table 1: Model comparison on the ImageNet-1K dataset.

| Model             | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights |
|-------------------|-----|----------|-----------|-----------|---------|
| MobileNet-V2      | ❌  | 3.5 M    | 71.878    | 90.286    |         |
| MobileNet-V2      | ✅  | 3.5 M    | 72.104    | 90.316    |         |
| ResNet-50         | ❌  | 25.6 M   | 76.130    | 92.862    |         |
| ResNet-50         | ✅  | 25.6 M   | 77.234    | 93.322    |         |
| EfficientNet-V2-M | ❌  | 54.1 M   | 85.112    | 97.156    |         |
| EfficientNet-V2-M | ✅  | 54.1 M   | 85.218    | 97.208    |         |
| ViT-B16           | ❌  | 86.6 M   | 81.072    | 95.318    |         |
| ViT-B16           | ✅  | 86.6 M   | 81.092    | 95.304    |         |
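
The downloaded checkpoints can be loaded directly into the corresponding TorchVision models. Below is a minimal sketch, assuming the checkpoint follows the TorchVision reference-training format (a dict whose "model" key holds the state dict, as implied by the --resume mobilenet_v2/model_32.pth evaluation commands further down); the file path is illustrative.

import torch
import torchvision

# Load an SPG-trained MobileNet-V2 checkpoint (path and format assumed, see lead-in).
model = torchvision.models.mobilenet_v2()
checkpoint = torch.load("mobilenet_v2/model_32.pth", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)  # fall back to a raw state dict
model.load_state_dict(state_dict)
model.eval()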

Table 2: All models are evaluated on a subset of COCO val2017, using the 21 categories (including "background") that are present in the Pascal VOC dataset.

The models reported by TorchVision (with the COCO_WITH_VOC_LABELS_V1 weights) were benchmarked using only 20 categories. Researchers should therefore first download the pre-trained models from TorchVision and re-evaluate them under the 21-category protocol; a sketch of the 21-category label set follows the table below.

| Model               | SPG | # Params | mIoU (%) | Pixelwise Acc (%) | Weights |
|---------------------|-----|----------|----------|-------------------|---------|
| FCN-ResNet50        | ❌  | 35.3 M   | 58.9     | 90.9              |         |
| FCN-ResNet50        | ✅  | 35.3 M   | 59.4     | 90.9              |         |
| FCN-ResNet101       | ❌  | 54.3 M   | 62.2     | 91.1              |         |
| FCN-ResNet101       | ✅  | 54.3 M   | 62.4     | 91.1              |         |
| DeepLabV3-ResNet50  | ❌  | 42.0 M   | 63.8     | 91.5              |         |
| DeepLabV3-ResNet50  | ✅  | 42.0 M   | 64.2     | 91.6              |         |
| DeepLabV3-ResNet101 | ❌  | 61.0 M   | 65.3     | 91.7              |         |
| DeepLabV3-ResNet101 | ✅  | 61.0 M   | 65.7     | 91.8              |         |
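
For the 21-category re-evaluation mentioned above, the label set is "background" plus the 20 Pascal VOC classes. The sketch below lists them; the ordering follows the usual Pascal VOC convention and is an assumption that should be checked against the evaluation code before comparing numbers.

# 21 evaluation categories: "background" plus the 20 Pascal VOC classes.
# The ordering follows the common Pascal VOC convention (an assumption; verify
# against the evaluation script).
VOC_CATEGORIES = [
    "background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
    "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
    "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]
NUM_CLASSES = len(VOC_CATEGORIES)  # 21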

Table 3: Performance of models for transfer learning, trained with standard fine-tuning (❌) vs. SPG (✅).

| Task  | SPG | Metric Type    | Performance (%) | Weights |
|-------|-----|----------------|-----------------|---------|
| CoLA  | ❌  | Matthews corr. | 56.53           |         |
| CoLA  | ✅  | Matthews corr. | 62.13           |         |
| SST-2 | ❌  | Accuracy       | 92.32           |         |
| SST-2 | ✅  | Accuracy       | 92.54           |         |
| MRPC  | ❌  | F1/Accuracy    | 88.85/84.09     |         |
| MRPC  | ✅  | F1/Accuracy    | 91.10/87.25     |         |
| QQP   | ❌  | F1/Accuracy    | 87.49/90.71     |         |
| QQP   | ✅  | F1/Accuracy    | 89.72/90.88     |         |
| QNLI  | ❌  | Accuracy       | 90.66           |         |
| QNLI  | ✅  | Accuracy       | 91.10           |         |
| RTE   | ❌  | Accuracy       | 65.70           |         |
| RTE   | ✅  | Accuracy       | 72.56           |         |
| Q/A*  | ❌  | F1/Exact match | 88.52/81.22     |         |
| Q/A*  | ✅  | F1/Exact match | 88.67/81.51     |         |
| AC†   | ❌  | Accuracy       | 98.26           |         |
| AC†   | ✅  | Accuracy       | 98.31           |         |

*Q/A: question answering (BERT fine-tuned on SQuAD; see the question-answering command below).
†AC: audio classification (Wav2Vec2 keyword spotting on SUPERB; see the audio-classification command below).
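
After fine-tuning, the resulting checkpoints can be used directly through the Transformers API. Below is a minimal sketch, assuming the run_glue.py command further down has produced a Trainer-style output directory named "cola" containing the saved model and tokenizer.

# Minimal inference sketch for an SPG fine-tuned GLUE (CoLA) checkpoint.
# Assumes the output directory "cola" (see the run_glue.py command below)
# holds the model and tokenizer saved by the Trainer.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cola")
model = AutoModelForSequenceClassification.from_pretrained("cola")

inputs = tokenizer("The book was read by the whole class.", return_tensors="pt")
predicted_label = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_label)  # CoLA labels: 1 = acceptable, 0 = unacceptable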

Requirements

  1. Install torch>=2.0.0+cu118.
  2. Install the remaining pip packages:
        pip install -r requirements.txt
    
  3. Prepare the ImageNet dataset manually and place it in /path/to/imagenet. For image classification examples, pass the argument --data-path=/path/to/imagenet to the training script. The extracted dataset directory should follow this structure:
    /path/to/imagenet/:
        train/:
            n01440764: 
                n01440764_18.JPEG ...
            n01443537:
                n01443537_2.JPEG ...
        val/:
            n01440764:
                ILSVRC2012_val_00000293.JPEG ...
            n01443537:
                ILSVRC2012_val_00000236.JPEG ...
    
  4. Prepare the MS-COCO 2017 dataset manually and place it in /path/to/coco. For semantic segmentation examples, pass the argument --data-path=/path/to/coco to the training script. The extracted dataset directory should follow this structure:
    /path/to/coco/:
        annotations:
            many_json_files.json ...
        train2017:
            000000000009.jpg ...
        val2017:
            000000000139.jpg ...
    
  5. For the 🗣️ Keyword Spotting subset of SUPERB, as well as the Common Language, SQuAD, Common Voice, GLUE, and WMT datasets, manual downloading is not required: they are loaded automatically via the Hugging Face Datasets library when running our audio-classification, question-answering, speech-recognition, text-classification, or translation examples (a minimal loading sketch follows this list).
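
As an illustration of the automatic download, here is a minimal sketch using the Hugging Face Datasets library. Dataset names mirror the training commands below; the trust_remote_code flag matches the --trust_remote_code argument used in the audio-classification command and may not be needed for every datasets version.

# Minimal sketch: these datasets are downloaded automatically on first use.
from datasets import load_dataset

glue_cola = load_dataset("glue", "cola")   # used by run_glue.py --task_name cola
squad = load_dataset("squad")              # used by run_qa.py --dataset_name squad
# SUPERB keyword spotting ships with a loading script, hence trust_remote_code.
superb_ks = load_dataset("superb", "ks", trust_remote_code=True)

print(glue_cola)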

Training

Model retraining

We utilize recipes similar to those in PyTorch Vision's classification reference to retrain MobileNet-V2, ResNet, EfficientNet-V2, and ViT using our SPG on ImageNet. You can run the following commands:

cd image-classification

# MobileNet-V2
torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model mobilenet_v2  --output-dir mobilenet_v2 --weights MobileNet_V2_Weights.IMAGENET1K_V1\
  --batch-size 192 --epochs 40 --lr 0.0004 --lr-step-size 10 --lr-gamma 0.5 --wd 0.00004 --apply-trp --trp-depths 1 --trp-p 0.15 --trp-lambdas 0.4 0.2 0.1

# ResNet-50
torchrun --nproc_per_node=4 train.py\
    --data-path /path/to/imagenet/\
    --model resnet50 --output-dir resnet50 --weights ResNet50_Weights.IMAGENET1K_V1\
    --batch-size 64 --epochs 40 --lr 0.0004 --lr-step-size 10 --lr-gamma 0.5 --print-freq 100\
    --apply-trp --trp-depths 1 --trp-p 0.2 --trp-lambdas 0.4 0.2 0.1

# EfficientNet-V2 M
torchrun --nproc_per_node=4 train.py \
  --data-path /path/to/imagenet/\
  --model efficientnet_v2_m --output-dir efficientnet_v2_m --weights EfficientNet_V2_M_Weights.IMAGENET1K_V1\
  --epochs 10 --batch-size 64 --lr 5e-9 --lr-scheduler cosineannealinglr --weight-decay 0.00002 \
  --lr-warmup-method constant --lr-warmup-epochs 8 --lr-warmup-decay 0. \
  --auto-augment ta_wide --random-erase 0.1 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 --norm-weight-decay 0.0 \
  --train-crop-size 384 --val-crop-size 480 --val-resize-size 480 --ra-sampler --ra-reps 4 --print-freq 100\
  --apply-trp --trp-depths 1 --trp-p 0.2 --trp-lambdas 0.4 0.2 0.1

# ViT-B-16
torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model vit_b_16 --output-dir vit_b_16 --weights ViT_B_16_Weights.IMAGENET1K_V1\
  --epochs 5 --batch-size 196 --opt adamw --lr 5e-9 --lr-scheduler cosineannealinglr --wd 0.3\
  --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
  --amp --label-smoothing 0.11 --mixup-alpha 0.2 --auto-augment ra --clip-grad-norm 1 --cutmix-alpha 1.0\
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1 --print-freq 100

We utilize recipes similar to those in PyTorch Vision's segmentation reference to retrain FCN and DeepLab-V3 using our SPG on the COCO dataset. You can run the following commands:

cd semantic-segmentation

# FCN-ResNet50
torchrun --nproc_per_node=4 train.py\
  --workers 4 --dataset coco --data-path /path/to/coco/\
  --model fcn_resnet50 --aux-loss --output-dir fcn_resnet50 --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
  --epochs 5 --batch-size 16 --lr 0.0002 --aux-loss --print-freq 100\
  --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# FCN-ResNet101
torchrun --nproc_per_node=4 train.py\
  --workers 4 --dataset coco --data-path /path/to/coco/\
  --model fcn_resnet101 --aux-loss --output-dir fcn_resnet101 --weights FCN_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
  --epochs 5 --batch-size 12 --lr 0.0002 --aux-loss --print-freq 100\
  --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# DeepLabV3-ResNet50
torchrun --nproc_per_node=4 train.py\
  --workers 4 --dataset coco --data-path /path/to/coco/\
  --model deeplabv3_resnet50 --aux-loss --output-dir deeplabv3_resnet50 --weights DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
  --epochs 5 --batch-size 16 --lr 0.0002 --aux-loss --print-freq 100\
  --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# DeepLabV3-ResNet101
torchrun --nproc_per_node=4 train.py\
  --workers 4 --dataset coco --data-path /path/to/coco/\
  --model deeplabv3_resnet101 --aux-loss --output-dir deeplabv3_resnet101 --weights DeepLabV3_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
  --epochs 5 --batch-size 12 --lr 0.0002 --aux-loss --print-freq 100\
  --lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

Transfer learning

We utilize recipes similar to those in Hugging Face Transformers' examples to retrain BERT and Wav2Vec2 using our SPG on the GLUE benchmark, the SQuAD dataset, and the SUPERB benchmark. You can run the following commands:

cd text-classification

# Task: CoLA 
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "cola" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2.5e-5 \
  --num_train_epochs 6 \
  --output_dir "cola" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: SST-2
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "sst2" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 64 \
  --learning_rate 3e-5 \
  --num_train_epochs 5 \
  --output_dir "sst2" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: MRPC
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "mrpc" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 4 \
  --output_dir "mrpc" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: QQP
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "qqp" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 1e-5 \
  --num_train_epochs 10 \
  --output_dir "qqp" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: QNLI
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "qnli" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 10 \
  --output_dir "qnli" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

# Task: RTE
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name "rte" \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 5e-5 \
  --num_train_epochs 5 \
  --output_dir "rte" \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1
    

# Task: audio classification
cd ../audio-classification
CUDA_VISIBLE_DEVICES=0 python run_audio_classification.py \
  --model_name_or_path facebook/wav2vec2-base \
  --dataset_name superb \
  --dataset_config_name ks \
  --trust_remote_code \
  --output_dir wav2vec2-base-ft-keyword-spotting \
  --overwrite_output_dir \
  --remove_unused_columns False \
  --do_train \
  --do_eval \
  --fp16 \
  --learning_rate 3e-5 \
  --max_length_seconds 1 \
  --attention_mask False \
  --warmup_ratio 0.1 \
  --num_train_epochs 8 \
  --per_device_train_batch_size 64 \
  --gradient_accumulation_steps 4 \
  --per_device_eval_batch_size 32 \
  --dataloader_num_workers 4 \
  --logging_strategy steps \
  --logging_steps 10 \
  --eval_strategy epoch \
  --save_strategy epoch \
  --load_best_model_at_end True \
  --metric_for_best_model accuracy \
  --save_total_limit 3 \
  --seed 0 \
  --push_to_hub \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1


# Task: question answering
cd ../question-answering
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
  --model_name_or_path google-bert/bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./baseline \
  --overwrite_output_dir \
  --apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1

Neural Architecture Search

We conduct Neural Architecture Search (NAS) on the ResNet architecture using the ImageNet dataset. You can run the following command:

cd neural-architecture-search

torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model resnet18 --output-dir resnet18 --weights ResNet18_Weights.IMAGENET1K_V1\
  --batch-size 64 --epochs 10 --lr 0.0004 --lr-step-size 2 --lr-gamma 0.5\
  --lr-warmup-method constant --lr-warmup-epochs 1 --lr-warmup-decay 0. \
  --apply-trp --trp-lambdas 0.1 0.01 --print-freq 100

Evaluation

To evaluate our models on ImageNet, run:


cd image-classification

# Required: Download our MobileNet-V2 weights to /path/to/image-classification/mobilenet_v2
torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model mobilenet_v2  --resume mobilenet_v2/model_32.pth --test-only
  
# Required: Download our ResNet-50 weights to /path/to/image-classification/resnet50
torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model resnet50  --resume resnet50/model_35.pth --test-only
  
# Required: Download our EfficientNet-V2 M weights to /path/to/image-classification/efficientnet_v2_m
torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model efficientnet_v2_m  --resume efficientnet_v2_m/model_7.pth --test-only\
  --val-crop-size 480 --val-resize-size 480

# Required: Download our ViT-B-16 weights to /path/to/image-classification/vit_b_16
torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model vit_b_16  --resume vit_b_16/model_4.pth --test-only

To evaluate our models on COCO, run:


cd semantic-segmentation

# eval baselines
torchrun --nproc_per_node=4 train.py\
  --workers 4 --dataset coco --data-path /path/to/coco/\
  --model fcn_resnet50 --aux-loss --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
  --test-only
torchrun --nproc_per_node=4 train.py\
    --workers 4 --dataset coco --data-path /path/to/coco/\
    --model fcn_resnet101 --aux-loss --weights FCN_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
    --test-only
torchrun --nproc_per_node=4 train.py\
    --workers 4 --dataset coco --data-path /path/to/coco/\
    --model deeplabv3_resnet50 --aux-loss --weights DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1\
    --test-only
torchrun --nproc_per_node=4 train.py\
    --workers 4 --dataset coco --data-path /path/to/coco/\
    --model deeplabv3_resnet101 --aux-loss --weights DeepLabV3_ResNet101_Weights.COCO_WITH_VOC_LABELS_V1\
    --test-only


# eval our models
# Required: Download our FCN-ResNet50 weights to /path/to/semantic-segmentation/fcn_resnet50
torchrun --nproc_per_node=4 train.py\
    --workers 4 --dataset coco --data-path /path/to/coco/\
    --model fcn_resnet50 --aux-loss --resume fcn_resnet50/model_4.pth\
    --test-only

# Required: Download our FCN-ResNet101 weights to /path/to/semantic-segmentation/fcn_resnet101
torchrun --nproc_per_node=4 train.py\
    --workers 4 --dataset coco --data-path /path/to/coco/\
    --model fcn_resnet101 --aux-loss --resume fcn_resnet101/model_4.pth\
    --test-only

# Required: Download our DeepLabV3-ResNet50 weights to /path/to/semantic-segmentation/deeplabv3_resnet50
torchrun --nproc_per_node=4 train.py\
    --workers 4 --dataset coco --data-path /path/to/coco/\
    --model deeplabv3_resnet50 --aux-loss --resume deeplabv3_resnet50/model_4.pth\
    --test-only

# Required: Download our DeepLabV3-ResNet101 weights to /path/to/semantic-segmentation/deeplabv3_resnet101
torchrun --nproc_per_node=4 train.py\
    --workers 4 --dataset coco --data-path /path/to/coco/\
    --model deeplabv3_resnet101 --aux-loss --resume deeplabv3_resnet101/model_4.pth\
    --test-only

To evaluate our models on GLUE, SQuAD, and SUPERB, re-run the transfer learning commands above: because they include the evaluation flags (e.g. --do_eval), they report evaluation metrics in addition to performing training.

For Neural Architecture Search, run the following command to evaluate our SPG-trained ResNet-18 model:


cd neural-architecture-search

# Required: Download our ResNet-18 weights to /path/to/neural-architecture-search/resnet18
torchrun --nproc_per_node=4 train.py\
  --data-path /path/to/imagenet/\
  --model resnet18  --resume resnet18/model_8.pth --test-only

License

This project is licensed under the MIT License - see the LICENSE file for details.