---
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
tags:
- llm
- kv_cache
pretty_name: LoopServe Multi-Turn Dialogue Benchmark
configs:
- config_name: multi-turn_FS
data_files: multi_turn/few_shot_learning/*.jsonl
- config_name: multi-turn_NIH
data_files: multi_turn/needle_in_haystack/*.jsonl
- config_name: multi-turn_QA
data_files: multi_turn/question_answering/*.jsonl
- config_name: multi-turn_SUM
data_files: multi_turn/summarization/*.jsonl
- config_name: single-turn_FS
data_files: single_turn/few_shot_learning/*.jsonl
- config_name: single-turn_NIH
data_files: single_turn/needle_in_haystack/*.jsonl
- config_name: single-turn_QA
data_files: single_turn/question_answering/*.jsonl
- config_name: single-turn_SUM
data_files: single_turn/summarization/*.jsonl
---
# LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues
arXiv: https://www.arxiv.org/abs/2507.13681
Hugging Face: https://huggingface.co/papers/2507.13681
Repository layout:
```shell
.
├── README.md
├── conversations.jsonl
├── multi_turn
│   ├── few_shot_learning
│   ├── needle_in_haystack
│   ├── question_answering
│   └── summarization
└── single_turn
    ├── few_shot_learning
    ├── needle_in_haystack
    ├── question_answering
    └── summarization
```
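Each configuration listed in the card header can be loaded with the Hugging Face `datasets` library. A minimal sketch; the repo ID below is a placeholder, not this dataset's actual Hugging Face ID:

```python
# Minimal loading sketch. "your-org/loopserve-benchmark" is a placeholder --
# replace it with this dataset's actual Hugging Face repo ID.
from datasets import load_dataset

# Config names match the `configs` section of the card header,
# e.g. "multi-turn_QA", "single-turn_NIH", "multi-turn_SUM", ...
ds = load_dataset("your-org/loopserve-benchmark", "multi-turn_QA", split="train")

print(ds.num_rows)      # number of examples in this config
print(ds.column_names)  # record fields for this task
print(ds[0])            # first multi-turn question-answering example
```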
# Introduction
The **LoopServe Multi-Turn Dialogue Benchmark** is an evaluation suite for assessing large language model performance in realistic conversational scenarios.
Unlike traditional benchmarks, which place the query only at the end of the input sequence,
it varies the query position (beginning, middle, or end) across multi-turn conversations,
covering four tasks: Question Answering, Needle in a Haystack, Summarization, and Few-shot Learning.
By capturing the dynamic dependencies and unpredictable patterns of real-world multi-turn dialogues, the benchmark provides a more authentic evaluation of LLMs in practical conversational applications.
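The per-task record schemas are not documented in this card; a quick way to explore them locally is to read the first record of each JSONL file and list its keys (paths follow the repository layout shown above):

```python
import glob
import json

# Print the field names of the first record in every multi-turn task file.
# No schema is assumed; this only reports what each JSONL file contains.
for path in sorted(glob.glob("multi_turn/*/*.jsonl")):
    with open(path, encoding="utf-8") as f:
        first_record = json.loads(f.readline())
    print(path, "->", sorted(first_record.keys()))
```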
# Dataset statistics