Papers
arxiv:2502.14258

Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information

Published on Feb 20
· Submitted by Minbyul on Feb 21
Abstract

While the ability of language models to elicit facts has been widely investigated, how they handle temporally changing facts remains underexplored. We discover Temporal Heads, specific attention heads primarily responsible for processing temporal knowledge, through circuit analysis. We confirm that these heads are present across multiple models, though their specific locations may vary, and that their responses differ depending on the type of knowledge and its corresponding years. Disabling these heads degrades the model's ability to recall time-specific knowledge while leaving its general capabilities intact, without compromising time-invariant knowledge or question-answering performance. Moreover, the heads are activated not only by numeric conditions ("In 2004") but also by textual aliases ("In the year ..."), indicating that they encode a temporal dimension beyond simple numerical representation. Furthermore, we expand the potential of our findings by demonstrating how temporal knowledge can be edited by adjusting the values of these heads.

Community

⏱︎ Large language models (LLMs) have made significant strides in factual reasoning, but how do they handle temporally evolving facts?

We uncover Temporal Heads, specialized attention heads within LLMs that are primarily responsible for processing time-sensitive knowledge. Through circuit analysis, we observe that certain attention heads consistently mediate the recall and updating of temporal knowledge across models.

Our findings show that disabling these heads selectively degrades time-specific recall while leaving general reasoning and time-invariant knowledge intact. Interestingly, these heads are activated not only by explicit numerical timestamps (e.g., "In 2004") but also by textual cues ("In the year..."), indicating that LLMs encode a richer temporal understanding than previously thought.

Beyond just identifying these mechanisms, we take a step further—demonstrating how temporal knowledge can be directly edited by manipulating the values of these heads. This opens up exciting possibilities for controlling and updating factual knowledge in LLMs, paving the way for more adaptive and reliable AI systems.
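The head-ablation experiment described above can be illustrated with a minimal NumPy sketch. This is not the paper's code: the toy multi-head self-attention, its random weights, and the `ablate_heads` parameter are all assumptions made for illustration. It shows the general mechanism of zeroing a single head's output before the output projection, which is how head knock-out is commonly implemented.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, ablate_heads=()):
    """Toy multi-head self-attention. Heads listed in `ablate_heads`
    (hypothetical parameter) have their outputs zeroed before the
    output projection, mimicking head knock-out."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ Wq).reshape(seq, n_heads, d_head)
    k = (x @ Wk).reshape(seq, n_heads, d_head)
    v = (x @ Wv).reshape(seq, n_heads, d_head)
    out = np.zeros_like(q)
    for h in range(n_heads):
        scores = q[:, h] @ k[:, h].T / np.sqrt(d_head)
        out[:, h] = softmax(scores, axis=-1) @ v[:, h]
    for h in ablate_heads:
        out[:, h] = 0.0  # knock out this head's contribution
    return out.reshape(seq, d_model) @ Wo

# Random toy weights and input (illustrative only).
rng = np.random.default_rng(0)
d_model, n_heads, seq = 8, 2, 4
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) * 0.1
                  for _ in range(4))
x = rng.standard_normal((seq, d_model))

full = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads)
ablated = multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads, ablate_heads=(1,))
print(np.abs(full - ablated).max())  # ablation changes the output
```

In a real model, the same idea is typically realized with forward hooks that zero (or rescale, for editing) a specific head's slice of the attention output, leaving all other heads untouched.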



