Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
Abstract
Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trainable Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) we achieve substantial speedups through arithmetic-intensity-balanced algorithm design, with implementation optimizations for modern hardware; (2) we enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1, experiments show that models pretrained with NSA match or exceed the performance of Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle.
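As a rough, illustrative sketch of the hierarchical design described above: the coarse (compressed), fine (selected), and local attention branches could be combined per query with learned gates. The class name, gating head, and shapes below are assumptions made for illustration, not the paper's reference implementation.

import torch
import torch.nn as nn

class GatedBranchMixer(nn.Module):
    # Illustrative only: mixes the outputs of a compressed-attention branch,
    # a selected-attention branch, and a local (sliding-window) branch
    # with per-query gate scores.
    def __init__(self, dim: int, n_branches: int = 3):
        super().__init__()
        # Assumed gating head: one score per branch from the query features.
        self.gate = nn.Linear(dim, n_branches, bias=False)

    def forward(self, q, branch_outputs):
        # q: [batch, seqlen, dim]; branch_outputs: list of [batch, seqlen, dim],
        # each produced by one of the sparse attention branches.
        g = torch.sigmoid(self.gate(q))  # [batch, seqlen, n_branches]
        return sum(g[..., i:i + 1] * o for i, o in enumerate(branch_outputs))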
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- LLM Pretraining with Continuous Concepts (2025)
- Softplus Attention with Re-weighting Boosts Length Extrapolation in Large Language Models (2025)
- Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning (2025)
- AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference (2025)
- LeMo: Enabling LEss Token Involvement for MOre Context Fine-tuning (2025)
- GQSA: Group Quantization and Sparsity for Accelerating Large Language Model Inference (2024)
- Compression with Global Guidance: Towards Training-free High-Resolution MLLMs Acceleration (2025)
Does anyone have an idea of how φ (3.3.1 Token Compression) should be implemented?
I made a simple downpooling approach, but I do not believe this will perform well in training:
import math
import torch.nn as nn
import torch.nn.functional as F

def Linear(a: int, b: int):
    return nn.Linear(a, b, bias=False)

class Phi(nn.Module):
    def __init__(self, dim: int, block_l: int):
        super().__init__()
        downpools = int(math.log2(block_l))
        assert 1 << downpools == block_l, "block_l must be a power of two"
        # Each stage merges 2 adjacent tokens (2*dim features) back down to dim.
        self.down = nn.ModuleList([Linear(dim * 2, dim) for _ in range(downpools)])
        self.stop = Linear(dim, dim)

    def forward(self, x):
        # x: [..., seqlen//stride_d, block_l, headdim] -> [..., seqlen//stride_d, headdim]
        # This is roughly "downproject 2->1 adjacent tokens + activation fn",
        # repeated log2(block_l) times, with an extra final nn.Linear.
        for l in self.down:
            # Pair up adjacent tokens: [..., n, d] -> [..., n//2, 2*d]
            x = x.unflatten(-2, (x.size(-2) // 2, 2)).flatten(-2)
            x = F.silu(l(x))
        # block_l has been pooled down to a singleton axis; drop it.
        return self.stop(x.squeeze(-2))
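For concreteness, here is how I would apply it to keys that are already grouped into non-overlapping blocks; the batch/head/block sizes below are made-up numbers for illustration only:

import torch

B, H, n_blocks, block_l, D = 2, 4, 8, 16, 64      # hypothetical shapes
phi = Phi(dim=D, block_l=block_l)
k_blocks = torch.randn(B, H, n_blocks, block_l, D)  # keys grouped into blocks
k_cmp = phi(k_blocks)                               # one compressed key per block
assert k_cmp.shape == (B, H, n_blocks, D)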
Curious to hear if anyone has thought about this.
Lucidrains has started an implementation here: https://github.com/lucidrains/native-sparse-attention-pytorch/blob/main/native_sparse_attention_pytorch/nsa.py