Abstract
This paper introduces the Large Memory Model (LM2), a decoder-only Transformer architecture enhanced with an auxiliary memory module that aims to address the limitations of standard Transformers in multi-step reasoning, relational argumentation, and synthesizing information distributed over long contexts. The proposed LM2 incorporates a memory module that acts as a contextual representation repository, interacting with input tokens via cross-attention and updating through gating mechanisms. To preserve the Transformer's general-purpose capabilities, LM2 maintains the original information flow while integrating a complementary memory pathway. Experimental results on the BABILong benchmark demonstrate that the LM2 model outperforms the memory-augmented RMT model by 37.1% and the baseline Llama-3.2 model by 86.3% on average across tasks. LM2 exhibits exceptional capabilities in multi-hop inference, numerical reasoning, and large-context question-answering. On the MMLU dataset, it achieves a 5.0% improvement over a pre-trained vanilla model, demonstrating that its memory module does not degrade performance on general tasks. Further, in our analysis, we explore memory interpretability, the effectiveness of the memory module, and test-time behavior. Our findings emphasize the importance of explicit memory in enhancing Transformer architectures.
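For a concrete picture of the architecture the abstract describes, here is a minimal PyTorch sketch of a decoder block with an auxiliary memory bank: learnable memory slots interact with token representations via cross-attention, a gate controls what flows back into the (unchanged) main pathway, and a second gate controls how the memory itself is updated. The module names, slot count, and exact gating form are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch (assumed shapes and gating form) of a memory-augmented decoder block.
import torch
import torch.nn as nn


class MemoryAugmentedDecoderBlock(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, n_mem_slots: int = 16):
        super().__init__()
        # Original decoder pathway, kept intact as in a vanilla Transformer.
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # Complementary memory pathway: learnable slots plus cross-attention.
        self.memory = nn.Parameter(torch.randn(n_mem_slots, d_model) * 0.02)
        self.read_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)   # tokens read memory
        self.write_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # memory reads tokens
        self.read_gate = nn.Linear(d_model, d_model)   # gates what enters the residual stream
        self.write_gate = nn.Linear(d_model, d_model)  # gates how much the memory is overwritten

    def forward(self, x: torch.Tensor, causal_mask=None):
        b = x.size(0)
        mem = self.memory.unsqueeze(0).expand(b, -1, -1)

        # 1) Original information flow: causal self-attention with a residual connection.
        h = x + self.self_attn(x, x, x, attn_mask=causal_mask)[0]

        # 2) Memory read: tokens query the memory slots via cross-attention;
        #    a sigmoid gate decides how much of the result joins the residual stream.
        read, _ = self.read_attn(h, mem, mem)
        h = h + torch.sigmoid(self.read_gate(h)) * read

        # 3) Memory write: slots cross-attend over the tokens; a gate blends the
        #    update with the previous memory state. Persisting new_mem across
        #    segments is left to the caller in this sketch.
        write, _ = self.write_attn(mem, h, h)
        gate = torch.sigmoid(self.write_gate(mem))
        new_mem = gate * write + (1.0 - gate) * mem

        # 4) Feed-forward sublayer on the main pathway, unchanged from a vanilla block.
        h = self.norm1(h)
        h = self.norm2(h + self.ffn(h))
        return h, new_mem
```

Calling `block(x)` on a `(batch, seq_len, d_model)` tensor returns the updated token states and the updated memory, so the memory can be carried forward across long-context segments if desired.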
Community
TLDR: The LM2 model integrates a memory module into the Transformer architecture to improve multi-step reasoning and information synthesis over long contexts. This enhancement leads to significant performance improvements in tasks requiring multi-hop inference and large-context question-answering, demonstrating the value of explicit memory in Transformer models.
Titans become Lilliputians...
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Contextual Memory Reweaving in Large Language Models Using Layered Latent State Reconstruction (2025)
- Segment-Based Attention Masking for GPTs (2024)
- M+: Extending MemoryLLM with Scalable Long-Term Memory (2025)
- On the Structural Memory of LLM Agents (2024)
- LIFT: Improving Long Context Understanding Through Long Input Fine-Tuning (2024)
- Bactrainus: Optimizing Large Language Models for Multi-hop Complex Question Answering Tasks (2025)
- LLMs are Also Effective Embedding Models: An In-depth Overview (2024)
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`
I found that the repo is not available. Will it be added later?