cot-finetuning
ReFT: Reasoning with Reinforced Fine-Tuning • Paper 2401.08967 • Published Jan 17, 2024 • 32
faster-decoding
Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads • Paper 2401.10774 • Published Jan 19, 2024 • 59
interesting-papers
Self-Rewarding Language Models • Paper 2401.10020 • Published Jan 18, 2024 • 152
Self-Discover: Large Language Models Self-Compose Reasoning Structures • Paper 2402.03620 • Published Feb 6, 2024 • 118
interpretability
Rethinking Interpretability in the Era of Large Language Models • Paper 2402.01761 • Published Jan 30, 2024 • 24