LONGCODEU: Benchmarking Long-Context Language Models on Long Code Understanding
Abstract
Current advanced long-context language models (LCLMs) offer great potential for real-world software engineering applications. However, progress in this critical domain remains hampered by a fundamental limitation: the absence of a rigorous evaluation framework for long code understanding. To bridge this gap, we propose LONGCODEU, a long code understanding benchmark that evaluates LCLMs' long code understanding ability required for practical applications across four aspects (8 tasks): code unit perception, intra-code unit understanding, inter-code unit relation understanding, and long code documentation understanding. We evaluate 9 popular LCLMs on LONGCODEU (6 general models and 3 code models). Our experimental results reveal key limitations in current LCLMs' capabilities for long code understanding. In particular, the performance of LCLMs drops dramatically when the long code length exceeds 32K, falling far short of their claimed 128K-1M context windows. Among the four aspects, inter-code unit relation understanding is the most challenging for LCLMs. Our study provides valuable insights for optimizing LCLMs and driving advancements in software engineering.
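To make the length-stratified analysis concrete, the sketch below shows one way results of this kind could be aggregated: scored samples are grouped by evaluation aspect and code-length bucket, so that the degradation beyond 32K becomes visible per aspect. The field names, bucket boundaries, and sample data are illustrative assumptions, not the paper's actual evaluation harness or data schema.

```python
from collections import defaultdict

# Hypothetical scored samples: each has an evaluation aspect, the length of
# the long code context (in tokens), and whether the model's answer was
# judged correct. These field names are illustrative assumptions.
SAMPLES = [
    {"aspect": "code_unit_perception", "code_length": 12_000, "correct": True},
    {"aspect": "inter_code_unit_relation", "code_length": 48_000, "correct": False},
    {"aspect": "intra_code_unit", "code_length": 96_000, "correct": False},
    # ... in practice, `correct` would come from scoring model outputs
]

# Length buckets chosen to expose degradation beyond 32K (assumed boundaries).
BUCKETS = [(0, 32_000), (32_000, 64_000), (64_000, 128_000)]

def bucket_of(length: int) -> str:
    """Map a code length to a human-readable bucket label."""
    for lo, hi in BUCKETS:
        if lo <= length < hi:
            return f"{lo // 1000}K-{hi // 1000}K"
    return f">={BUCKETS[-1][1] // 1000}K"

def accuracy_by_aspect_and_length(samples):
    """Aggregate accuracy per (aspect, length bucket) pair."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for s in samples:
        key = (s["aspect"], bucket_of(s["code_length"]))
        totals[key] += 1
        hits[key] += int(s["correct"])
    return {key: hits[key] / totals[key] for key in totals}

if __name__ == "__main__":
    results = accuracy_by_aspect_and_length(SAMPLES)
    for (aspect, bucket), acc in sorted(results.items()):
        print(f"{aspect:32s} {bucket:12s} accuracy={acc:.2f}")
```

Reporting accuracy jointly by aspect and length bucket, rather than a single aggregate score, is what lets an analysis separate "the model struggles with inter-code unit relations" from "the model struggles once the context exceeds 32K".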
Community
We present LongCodeU to benchmark long-context LLMs' long code understanding ability required for practical applications.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- SolEval: Benchmarking Large Language Models for Repository-level Solidity Code Generation (2025)
- A Survey On Large Language Models For Code Generation (2025)
- CodeIF: Benchmarking the Instruction-Following Capabilities of Large Language Models for Code Generation (2025)
- CLOVER: A Test Case Generation Benchmark with Coverage, Long-Context, and Verification (2025)
- LongReason: A Synthetic Long-Context Reasoning Benchmark via Context Expansion (2025)
- Code Summarization Beyond Function Level (2025)
- Code-Vision: Evaluating Multimodal LLMs Logic Understanding and Code Generation Capabilities (2025)