- SMOTE: Synthetic Minority Over-sampling Technique
  Paper • 1106.1813 • Published • 1
- Scikit-learn: Machine Learning in Python
  Paper • 1201.0490 • Published • 1
- Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
  Paper • 1406.1078 • Published
- Distributed Representations of Sentences and Documents
  Paper • 1405.4053 • Published
Collections
Collections including paper arxiv:2302.13971
- Attention Is All You Need
  Paper • 1706.03762 • Published • 51
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14

- Attention Is All You Need
  Paper • 1706.03762 • Published • 51
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  Paper • 2201.11903 • Published • 10
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 72

- Llemma: An Open Language Model For Mathematics
  Paper • 2310.10631 • Published • 52
- Mistral 7B
  Paper • 2310.06825 • Published • 46
- Qwen Technical Report
  Paper • 2309.16609 • Published • 35
- BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model
  Paper • 2309.11568 • Published • 10

- Attention Is All You Need
  Paper • 1706.03762 • Published • 51
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Paper • 2005.11401 • Published • 10
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 32
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 13

- MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning
  Paper • 2310.09478 • Published • 20
- Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams
  Paper • 2310.08678 • Published • 13
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 244
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 14

- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 33
- FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  Paper • 2205.14135 • Published • 13
- RoFormer: Enhanced Transformer with Rotary Position Embedding
  Paper • 2104.09864 • Published • 11
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12

- DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
  Paper • 2309.03883 • Published • 35
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 32
- Agents: An Open-source Framework for Autonomous Language Agents
  Paper • 2309.07870 • Published • 42
- RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
  Paper • 2309.00267 • Published • 47