---
language:
- en
task_categories:
- question-answering
- summarization
- text-generation
pretty_name: LoopServe Multi-Turn Dialogue Benchmark
tags:
- llm
- kv_cache
- benchmark
library_name: datasets
license: cc-by-4.0
configs:
- config_name: multi-turn_FS
data_files: multi_turn/few_shot_learning/*.jsonl
- config_name: multi-turn_NIH
data_files: multi_turn/needle_in_haystack/*.jsonl
- config_name: multi-turn_QA
data_files: multi_turn/question_answering/*.jsonl
- config_name: multi-turn_SUM
data_files: multi_turn/summarization/*.jsonl
- config_name: single-turn_FS
data_files: single_turn/few_shot_learning/*.jsonl
- config_name: single-turn_NIH
data_files: single_turn/needle_in_haystack/*.jsonl
- config_name: single-turn_QA
data_files: single_turn/question_answering/*.jsonl
- config_name: single-turn_SUM
data_files: single_turn/summarization/*.jsonl
---
# LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues
arXiv: https://www.arxiv.org/abs/2507.13681
Hugging Face: https://huggingface.co/papers/2507.13681
# Introduction
**LoopServe Multi-Turn Dialogue Benchmark** is a comprehensive evaluation suite of diverse datasets designed to assess large language model performance in realistic conversational scenarios. Unlike traditional benchmarks that place queries only at the end of the input sequence, this benchmark places queries at diverse positions (beginning, middle, and end) across multi-turn conversations, spanning Question Answering, Needle-in-a-Haystack, Summarization, and Few-shot Learning tasks. By capturing the dynamic dependencies and unpredictable patterns characteristic of real-world multi-turn dialogues, it provides a more authentic evaluation of LLMs in practical conversational applications.
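To make the query-position idea concrete, a multi-turn record might be structured as in the sketch below. This is only an illustrative sketch: the field names (`turns`, `role`, `content`, `answer`) are hypothetical and may differ from the actual JSONL schema, so print one record (see Sample Usage below) to confirm the real layout.

```python
# Hypothetical record layout (illustration only; the real JSONL schema
# may use different field names).
example = {
    "turns": [
        # Query placed at the *beginning* of the context rather than the end.
        {"role": "user", "content": "Q1: ... <long supporting context follows> ..."},
        {"role": "assistant", "content": "A1: ..."},
        {"role": "user", "content": "Q2, referring back to information from turn 1"},
    ],
    "answer": "reference answer for the final query",
}
```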
# Dataset Statistics

For detailed dataset statistics, please refer to our paper.
# Sample Usage
The dataset can be easily loaded using the `load_dataset` function from the 🤗 Datasets library.
```python
from datasets import load_dataset

# Load a specific configuration, e.g. the multi-turn question-answering data
dataset = load_dataset("TreeAILab/Multi-turn_Long-context_Benchmark_for_LLMs", "multi-turn_QA")

# Access the training split
print(dataset["train"])

# Inspect a single example
for example in dataset["train"]:
    print(example)
    break
```
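All eight configurations listed in the metadata above follow the same `{mode}_{task}` naming scheme, so they can be loaded in a loop. This is a minimal sketch, assuming each configuration exposes a `train` split as in the example above:

```python
from datasets import load_dataset

# The eight config names mirror the `configs` list in this card's metadata.
configs = [
    f"{mode}_{task}"
    for mode in ("multi-turn", "single-turn")
    for task in ("FS", "NIH", "QA", "SUM")
]

for name in configs:
    ds = load_dataset("TreeAILab/Multi-turn_Long-context_Benchmark_for_LLMs", name)
    print(name, ds)
```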
# Citation
```bibtex
@misc{li2025loopserveadaptivedualphasellm,
  title={LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues},
  author={Haoyang Li and Zhanchao Xu and Yiming Li and Xuejia Chen and Darian Li and Anxin Tian and Qingfa Xiao and Cheng Deng and Jun Wang and Qing Li and Lei Chen and Mingxuan Yuan},
  year={2025},
  eprint={2507.13681},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.13681},
}
```