Collections
Collections including paper arxiv:2402.17764
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 609
- When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method
  Paper • 2402.17193 • Published • 24
- Training-Free Long-Context Scaling of Large Language Models
  Paper • 2402.17463 • Published • 21
- The Power of Scale for Parameter-Efficient Prompt Tuning
  Paper • 2104.08691 • Published • 10

- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 53
- Beyond Language Models: Byte Models are Digital World Simulators
  Paper • 2402.19155 • Published • 50
- StarCoder 2 and The Stack v2: The Next Generation
  Paper • 2402.19173 • Published • 138
- Simple linear attention language models balance the recall-throughput tradeoff
  Paper • 2402.18668 • Published • 19