- Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention (arXiv:2502.11089)
- MoBA: Mixture of Block Attention for Long-Context LLMs (arXiv:2502.13189)
- Portrait Video Editing Empowered by Multimodal Generative Priors (arXiv:2409.13591, published Sep 20, 2024)