Collections including paper arxiv:1706.03762

Collection 1:
- RoFormer: Enhanced Transformer with Rotary Position Embedding
  Paper • 2104.09864 • Published • 12
- Attention Is All You Need
  Paper • 1706.03762 • Published • 55
- Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences
  Paper • 2404.03715 • Published • 61
- Zero-Shot Tokenizer Transfer
  Paper • 2405.07883 • Published • 5

Collection 2:
- Attention Is All You Need
  Paper • 1706.03762 • Published • 55
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 13

Collection 3:
- Long-form factuality in large language models
  Paper • 2403.18802 • Published • 25
- Attention Is All You Need
  Paper • 1706.03762 • Published • 55
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 13
- A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4
  Paper • 2310.12321 • Published • 1

Collection 4:
- Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
  Paper • 1701.06538 • Published • 6
- Attention Is All You Need
  Paper • 1706.03762 • Published • 55
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Paper • 2005.11401 • Published • 10
- Language Model Evaluation Beyond Perplexity
  Paper • 2106.00085 • Published

Collection 5:
- Attention Is All You Need
  Paper • 1706.03762 • Published • 55
- Self-Attention with Relative Position Representations
  Paper • 1803.02155 • Published
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding
  Paper • 2401.12954 • Published • 30

Collection 6:
- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 40
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- Attention Is All You Need
  Paper • 1706.03762 • Published • 55
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 244