# CLIP Sparse Autoencoder Checkpoint

This model is a sparse autoencoder (SAE) trained on CLIP's internal representations.
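For context, the sketch below shows one way to capture such activations with open_clip and a PyTorch forward hook. The hook location and tensor layout are assumptions based on the architecture details listed below, not the authors' actual pipeline, and `example.jpg` is a placeholder input.

```python
import torch
import open_clip
from PIL import Image

# Load the CLIP model named on this card via open_clip's hf-hub loader.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"
)
model.eval()

activations = {}

def capture(module, inputs, output):
    # Residual stream after block 0 ("hook_resid_post" at layer 0).
    # Depending on the open_clip version, the shape is [seq, batch, 768]
    # or [batch, seq, 768].
    activations["resid_post_0"] = output.detach()

handle = model.visual.transformer.resblocks[0].register_forward_hook(capture)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
with torch.no_grad():
    model.encode_image(image)
handle.remove()
```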
## Model Details
### Architecture
- Layer: 0
- Layer Type: hook_resid_post
- Model: open-clip:laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K
- Dictionary Size: 49152
- Input Dimension: 768
- Expansion Factor: 64
- CLS Token Only: False
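For orientation, here is a minimal PyTorch sketch of an SAE with the dimensions above (768-dimensional input, 64x expansion to a 49152-feature dictionary). This is an illustrative reconstruction, not the training code; the ReLU activation, pre-encoder bias, and untied decoder bias are common conventions assumed here.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Illustrative SAE matching the card's dimensions (layout assumed)."""

    def __init__(self, d_in: int = 768, expansion_factor: int = 64):
        super().__init__()
        d_sae = d_in * expansion_factor          # 49152-feature dictionary
        self.b_dec = nn.Parameter(torch.zeros(d_in))  # pre-encoder bias (common convention)
        self.encoder = nn.Linear(d_in, d_sae)
        self.decoder = nn.Linear(d_sae, d_in, bias=False)

    def forward(self, x: torch.Tensor):
        # Encode: subtract decoder bias, project up, rectify.
        feats = torch.relu(self.encoder(x - self.b_dec))
        # Decode: project back down to the residual-stream dimension.
        recon = self.decoder(feats) + self.b_dec
        return recon, feats
```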
### Training
- Training Images: 1299936
- Learning Rate: 0.0002
- L1 Coefficient: 0.0002
- Batch Size: 4096
- Context Size: 49
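Taken together, these hyperparameters correspond to a standard SAE objective: reconstruction MSE plus an L1 penalty on feature activations, weighted by the L1 coefficient. A hedged sketch of one training step follows, reusing the `SparseAutoencoder` sketch above; the Adam optimizer, the loss normalization, and the omission of the warm-up/decay schedule are assumptions.

```python
import torch

# Hypothetical setup reusing the SparseAutoencoder sketch above.
sae = SparseAutoencoder(d_in=768, expansion_factor=64)
optimizer = torch.optim.Adam(sae.parameters(), lr=2e-4)  # card's learning rate
l1_coeff = 2e-4                                          # card's L1 coefficient

def train_step(batch: torch.Tensor) -> float:
    """One step on a [4096, 768] batch of CLIP activations (shape assumed)."""
    recon, feats = sae(batch)
    mse = (recon - batch).pow(2).mean()
    l1 = feats.abs().sum(dim=-1).mean()
    loss = mse + l1_coeff * l1
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(sae.parameters(), 1.0)  # card's gradient clipping
    optimizer.step()
    return loss.item()
```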
## Performance Metrics
### Sparsity
- L0 (Active Features): 64.0000
- Dead Features: 0
- Mean Log10 Feature Sparsity: -5.7141
- Features Below 1e-5: 47932
- Features Below 1e-6: 7869
- Mean Passes Since Fired: 154.9637
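The exact L0 of 64, together with the L1 loss of 0 reported below and the "topk" tag in the W&B run name, suggests a top-k activation with k = 64, though the card does not state this explicitly. For reference, here is a sketch of how such sparsity statistics are commonly computed from a batch of SAE feature activations; the function and threshold conventions are illustrative.

```python
import torch

def sparsity_stats(feats: torch.Tensor, eps: float = 1e-10) -> dict:
    """feats: [n_tokens, d_sae] SAE activations. Returns common sparsity metrics."""
    active = feats > 0
    l0 = active.float().sum(dim=-1).mean()       # mean active features per token
    fire_freq = active.float().mean(dim=0)       # per-feature firing frequency
    log_sparsity = torch.log10(fire_freq + eps)  # "log10 feature sparsity"
    return {
        "l0": l0.item(),
        "mean_log10_sparsity": log_sparsity.mean().item(),
        "features_below_1e-5": (fire_freq < 1e-5).sum().item(),
        "features_below_1e-6": (fire_freq < 1e-6).sum().item(),
    }
```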
### Reconstruction
- Explained Variance: 0.8124
- Explained Variance Std: 0.0792
- MSE Loss: 0.0019
- L1 Loss: 0
- Overall Loss: 0.0019
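Explained variance here is presumably the standard per-token quantity 1 - Var(x - x_hat) / Var(x), with the reported mean (0.8124) and standard deviation (0.0792) taken over evaluation tokens. A minimal sketch under that assumption:

```python
import torch

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    """Per-token explained variance: 1 - Var(x - recon) / Var(x)."""
    resid_var = (x - recon).var(dim=-1)
    total_var = x.var(dim=-1)
    return 1.0 - resid_var / total_var
```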
## Training Details
- Training Duration: 4426 seconds
- Final Learning Rate: 0.0000
- Warm Up Steps: 500
- Gradient Clipping: 1
## Additional Information
- Original Checkpoint Path: /network/scratch/p/praneet.suresh/celeba_checkpoints/18f79fca-tinyclip_sae_16_hyperparam_sweep_lr/n_images_1300020.pt
- Wandb Run: https://wandb.ai/perceptual-alignment/imagenet-sweep-topk-patches_all_layers/runs/nymo7hak
- Random Seed: 42