- Attention Is All You Need
  Paper • 1706.03762 • Published • 55
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
Collections
Collections including paper arxiv:2402.14905
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 256
- MiniCPM-V: A GPT-4V Level MLLM on Your Phone
  Paper • 2408.01800 • Published • 82
- SlimLM: An Efficient Small Language Model for On-Device Document Assistance
  Paper • 2411.09944 • Published • 12
- MobileQuant: Mobile-friendly Quantization for On-device Language Models
  Paper • 2408.13933 • Published • 15
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
  Paper • 2402.14905 • Published • 128
- facebook/MobileLLM-125M
  Text Generation • Updated • 4.93k • 105
- facebook/MobileLLM-350M
  Text Generation • Updated • 819 • 32
- facebook/MobileLLM-600M
  Text Generation • Updated • 77.4k • 28
- A Survey of Small Language Models
  Paper • 2410.20011 • Published • 40
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases
  Paper • 2402.14905 • Published • 128
- HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF
  Text Generation • Updated • 1.64k • 38
- OpenGVLab/Mini-InternVL-Chat-2B-V1-5
  Image-Text-to-Text • Updated • 2.52k • 73