arxiv:2502.11089

Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

Published on Feb 16
· Submitted by HelloJiang on Feb 18
#1 Paper of the day

Abstract

Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses a significant challenge. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trainable Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) we achieve substantial speedups through arithmetic-intensity-balanced algorithm design, with implementation optimizations for modern hardware; (2) we enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1, experiments show that models pretrained with NSA match or exceed Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle.
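
For a concrete picture of the "dynamic hierarchical sparse strategy" described above, here is a minimal, hypothetical PyTorch sketch of the branch-mixing step: per-token outputs from a coarse compressed-attention branch, a fine selected-attention branch, and a local sliding-window branch are combined with learned gates. The module name, gate parameterization, and branch count are illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch of NSA's branch mixing (assumption, not the paper's exact design).
import torch
import torch.nn as nn

class GatedBranchMix(nn.Module):
    """Combine per-branch attention outputs with query-dependent learned gates."""
    def __init__(self, dim: int, num_branches: int = 3):
        super().__init__()
        # One gate per branch, predicted from the query features (assumption).
        self.gate_proj = nn.Linear(dim, num_branches, bias=False)

    def forward(self, q: torch.Tensor, branch_outputs: list[torch.Tensor]) -> torch.Tensor:
        # q:              [batch, seqlen, dim]
        # branch_outputs: list of [batch, seqlen, dim] tensors, e.g. the outputs of
        #                 compressed attention, selected attention, and sliding-window attention.
        gates = torch.sigmoid(self.gate_proj(q))          # [batch, seqlen, num_branches]
        out = torch.zeros_like(branch_outputs[0])
        for i, branch in enumerate(branch_outputs):
            out = out + gates[..., i:i + 1] * branch       # weight each branch per token
        return out

The paper's speedups come from how each branch itself is computed with hardware-aligned kernels; this sketch only shows how their outputs are merged.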

Community


Does anyone have an idea of how φ (Section 3.3.1, Token Compression) should be implemented?

I made a simple downpooling approach, but I do not believe this will perform well in training:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def Linear(a: int, b: int) -> nn.Linear:
    return nn.Linear(a, b, bias=False)

class Phi(nn.Module):
    """Compress each length-`block_l` block of tokens into a single token."""
    def __init__(self, dim: int, block_l: int):
        super().__init__()
        downpools = int(math.log2(block_l))
        assert 1 << downpools == block_l, "block_l must be a power of two"
        # Each stage merges adjacent token pairs: 2*dim -> dim.
        self.down = nn.ModuleList([Linear(dim * 2, dim) for _ in range(downpools)])
        self.stop = Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [..., seqlen // stride_d, block_l, headdim] -> [..., seqlen // stride_d, headdim]
        # Roughly: "down-project 2 adjacent tokens -> 1 token + activation",
        # repeated log2(block_l) times, then a final nn.Linear.
        for layer in self.down:
            # Pair up adjacent tokens along the block dimension and concatenate their features.
            x = x.unflatten(-2, (x.size(-2) // 2, 2)).flatten(-2)
            x = F.silu(layer(x))
        # The block dimension is now 1; drop it so the output matches the shape comment above.
        return self.stop(x.squeeze(-2))
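
As a quick sanity check on the shapes (continuing from the snippet above; the batch, head, and block sizes are made up for illustration):

# Hypothetical shape check: dim=64, block_l=16, 8 blocks per sequence.
phi = Phi(dim=64, block_l=16)
blocks = torch.randn(2, 4, 8, 16, 64)   # [batch, heads, num_blocks, block_l, headdim]
compressed = phi(blocks)
print(compressed.shape)                  # torch.Size([2, 4, 8, 64]): one compressed token per block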

Curious to hear if anyone has thought about this.

