Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs • Paper • 2503.02846 • Published 9 days ago
Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning • Paper • 2502.06781 • Published about 1 month ago
ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models • Paper • 2407.04693 • Published Jul 5, 2024