---
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
tags:
- llm
- kv_cache
pretty_name: LoopServe Multi-Turn Dialogue Benchmark
configs:
- config_name: multi-turn_FS
  data_files: multi_turn/few_shot_learning/*.jsonl
- config_name: multi-turn_NIH
  data_files: multi_turn/needle_in_haystack/*.jsonl
- config_name: multi-turn_QA
  data_files: multi_turn/question_answering/*.jsonl
- config_name: multi-turn_SUM
  data_files: multi_turn/summarization/*.jsonl
- config_name: single-turn_FS
  data_files: single_turn/few_shot_learning/*.jsonl
- config_name: single-turn_NIH
  data_files: single_turn/needle_in_haystack/*.jsonl
- config_name: single-turn_QA
  data_files: single_turn/question_answering/*.jsonl
- config_name: single-turn_SUM
  data_files: single_turn/summarization/*.jsonl
---
# LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues

arXiv: https://www.arxiv.org/abs/2507.13681

Hugging Face: https://huggingface.co/papers/2507.13681
```
.
├── README.md
├── conversations.jsonl
├── multi_turn
│   ├── few_shot_learning
│   ├── needle_in_haystack
│   ├── question_answering
│   └── summarization
└── single_turn
    ├── few_shot_learning
    ├── needle_in_haystack
    ├── question_answering
    └── summarization
```
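Each task directory holds its examples as JSON Lines, one record per line. Below is a minimal sketch for inspecting a raw record; the file name is a placeholder, and field names vary by task, so the keys are printed rather than assumed:

```python
import json

# Peek at the first record of one task file to discover its schema.
# "sample.jsonl" is a placeholder; substitute any file from the tree above.
with open("multi_turn/question_answering/sample.jsonl", encoding="utf-8") as f:
    first = json.loads(next(f))

# Field names differ per task, so list them instead of hard-coding any.
print(sorted(first.keys()))
```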
## Introduction
The LoopServe Multi-Turn Dialogue Benchmark is an evaluation suite for assessing large language model performance in realistic conversational scenarios. Unlike traditional benchmarks, which place the query only at the end of the input sequence, it varies the query position (beginning, middle, or end) across multi-turn conversations and spans four task types: Question Answering, Needle-in-a-Haystack, Summarization, and Few-shot Learning. By capturing the dynamic dependencies and unpredictable patterns characteristic of real-world multi-turn dialogues, the benchmark provides a more authentic evaluation of LLMs in practical conversational applications.
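The configurations declared in the YAML header can be loaded directly with the Hugging Face `datasets` library. A minimal sketch, assuming a repository id for this dataset (the placeholder below must be replaced with the actual Hub id):

```python
from datasets import load_dataset

# Placeholder Hub id; replace with this dataset's actual repository id.
REPO_ID = "your-org/loopserve-multi-turn-dialogue-benchmark"

# Config names mirror the YAML header, e.g. "multi-turn_QA" or "single-turn_SUM".
# Plain JSONL data_files load as a single default "train" split.
ds = load_dataset(REPO_ID, "multi-turn_QA", split="train")
print(ds)
```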