# LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues
arXiv: https://www.arxiv.org/abs/2507.13681
Hugging Face: https://huggingface.co/papers/2507.13681
# Introduction
**LoopServe Multi-Turn Dialogue Benchmark** is a comprehensive evaluation dataset comprising multiple diverse datasets designed to assess large language model performance in realistic conversational scenarios. Unlike traditional benchmarks that place queries only at the end of input sequences, this benchmark features diverse query positions (beginning, middle, end) across multi-turn conversations, spanning Question Answering, Needle in a Haystack, Summarization, and Few-shot Learning tasks. The benchmark captures the dynamic dependencies and unpredictable patterns characteristic of real-world multi-turn dialogues to provide more authentic LLM evaluation in practical conversational applications.
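Because the benchmark places queries at the beginning, middle, or end of a conversation rather than always at the end, an evaluation harness has to splice the query into the turn sequence at the indicated position. The sketch below illustrates that idea only; the `build_prompt` helper and the notion of a flat list of turn strings are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative sketch: splice a query into a multi-turn dialogue at a
# given position (beginning / middle / end). The turn format and the
# position labels here are assumptions for illustration, not the
# benchmark's actual field names.

def build_prompt(turns, query, query_position):
    """Return a single prompt string with the query inserted at the
    requested position among the dialogue turns."""
    if query_position == "beginning":
        ordered = [query] + turns
    elif query_position == "middle":
        mid = len(turns) // 2
        ordered = turns[:mid] + [query] + turns[mid:]
    elif query_position == "end":
        ordered = turns + [query]
    else:
        raise ValueError(f"unknown query position: {query_position!r}")
    return "\n".join(ordered)


turns = [
    "User: Tell me about the project timeline.",
    "Assistant: The project has three phases.",
    "User: What are the risks?",
    "Assistant: Mainly scheduling and budget.",
]
prompt = build_prompt(turns, "User: Summarize the conversation so far.", "middle")
```

Under this sketch, a query marked "end" reproduces the traditional append-only setup, while "beginning" and "middle" exercise the long-range dependencies the benchmark is designed to test.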