- A Loss Curvature Perspective on Training Instability in Deep Learning
  Paper • 2110.04369 • Published
- Why Do We Need Weight Decay in Modern Deep Learning?
  Paper • 2310.04415 • Published
- Small-scale proxies for large-scale Transformer training instabilities
  Paper • 2309.14322 • Published • 20
- Transformers Can Navigate Mazes With Multi-Step Prediction
  Paper • 2412.05117 • Published • 5

Collections including paper arxiv:2309.14322

- AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models
  Paper • 2309.16414 • Published • 19
- Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient Pruning of A Multilingual ASR Model
  Paper • 2309.13018 • Published • 9
- Robust Speech Recognition via Large-Scale Weak Supervision
  Paper • 2212.04356 • Published • 27
- Language models in molecular discovery
  Paper • 2309.16235 • Published • 10

- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 83
- Small-scale proxies for large-scale Transformer training instabilities
  Paper • 2309.14322 • Published • 20
- Evaluating Cognitive Maps and Planning in Large Language Models with CogEval
  Paper • 2309.15129 • Published • 7
- Vision Transformers Need Registers
  Paper • 2309.16588 • Published • 79

- Textbooks Are All You Need II: phi-1.5 technical report
  Paper • 2309.05463 • Published • 87
- When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale
  Paper • 2309.04564 • Published • 16
- Large-Scale Automatic Audiobook Creation
  Paper • 2309.03926 • Published • 54
- The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute
  Paper • 2309.11197 • Published • 5