nielsr HF Staff committed on
Commit c76f084 Β· verified Β· 1 Parent(s): a68ba10

Improve dataset card: Add library_name, license, benchmark tag, GitHub link, and sample usage


This PR improves the dataset card for the LoopServe Multi-Turn Dialogue Benchmark by:
- Adding `library_name: datasets` and `license: cc-by-4.0` to the metadata section for better discoverability and usage clarity.
- Adding the `benchmark` tag to reflect the dataset's nature as an evaluation benchmark.
- Including a direct link to the associated GitHub repository (`https://github.com/TreeAI-Lab/Awesome-KV-Cache-Management`), which serves as a central hub for KV Cache Management research and explicitly links to this dataset.
- Providing a clear `Sample Usage` section demonstrating how to load and access the dataset using the `datasets` library.
- Removing the redundant shell file-tree listing from the content section, since the Hub's file browser and the `configs` metadata already convey this information.

The dataset's primary associated paper, "LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues" (arXiv:2507.13681), remains the main reference in the dataset card.
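As a quick illustration of the `configs` point above, the Hub resolves each `configs` entry by glob-matching its `data_files` pattern against the repository contents. The sketch below mimics that matching with Python's `fnmatch` over a hypothetical file listing (the individual `.jsonl` file names are invented for illustration; only the directory layout and the `multi-turn_FS` pattern come from the card):

```python
from fnmatch import fnmatch

# Hypothetical repository listing mirroring the layout described in the card;
# the individual .jsonl file names are invented for illustration.
repo_files = [
    "conversations.jsonl",
    "multi_turn/few_shot_learning/task_a.jsonl",
    "multi_turn/question_answering/task_b.jsonl",
    "single_turn/few_shot_learning/task_c.jsonl",
]

# One `configs` entry from the card: config name -> data_files glob.
configs = {"multi-turn_FS": "multi_turn/few_shot_learning/*.jsonl"}

# Files selected for the multi-turn_FS config.
matched = [f for f in repo_files if fnmatch(f, configs["multi-turn_FS"])]
print(matched)
```

In the actual pipeline this resolution is performed by the Hub and the `datasets` loader; the snippet only demonstrates the glob semantics that make the separate file-tree listing redundant.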

Files changed (1)
  1. README.md +27 -20
README.md CHANGED
@@ -1,15 +1,17 @@
 ---
+language:
+- en
 task_categories:
 - question-answering
 - summarization
 - text-generation
-language:
-- en
+pretty_name: LoopServe Multi-Turn Dialogue Benchmark
 tags:
 - llm
 - kv_cache
-pretty_name: LoopServe Multi-Turn Dialogue Benchmark
-
+- benchmark
+library_name: datasets
+license: cc-by-4.0
 configs:
 - config_name: multi-turn_FS
   data_files: multi_turn/few_shot_learning/*.jsonl
@@ -35,21 +37,7 @@ Arxiv: https://www.arxiv.org/abs/2507.13681
 
 Huggingface: https://huggingface.co/papers/2507.13681
 
-``` shell
-.
-β”œβ”€β”€ README.md
-β”œβ”€β”€ conversations.jsonl
-β”œβ”€β”€ multi_turn
-β”‚   β”œβ”€β”€ few_shot_learning
-β”‚   β”œβ”€β”€ needle_in_haystack
-β”‚   β”œβ”€β”€ question_answering
-β”‚   └── summarization
-└── single_turn
-    β”œβ”€β”€ few_shot_learning
-    β”œβ”€β”€ needle_in_haystack
-    β”œβ”€β”€ question_answering
-    └── summarization
-```
+Code: https://github.com/TreeAI-Lab/Awesome-KV-Cache-Management
 
 # Introduction
 
@@ -65,6 +53,25 @@ The benchmark captures the dynamic dependencies and unpredictable patterns chara
 
 For more details, please refer to our paper.
 
+# Sample Usage
+
+The dataset can be easily loaded using the `load_dataset` function from the πŸ€— Datasets library.
+
+```python
+from datasets import load_dataset
+
+# Load a specific configuration, for example, the multi-turn question answering data
+dataset = load_dataset("TreeAILab/Multi-turn_Long-context_Benchmark_for_LLMs", "multi-turn_QA")
+
+# Access the training split
+print(dataset["train"])
+
+# Iterate through an example
+for example in dataset["train"]:
+    print(example)
+    break
+```
+
 # Citation
 ``` bibtex
 @misc{li2025loopserveadaptivedualphasellm,
@@ -76,4 +83,4 @@ For more details, please refer to our paper.
   primaryClass={cs.CL},
   url={https://arxiv.org/abs/2507.13681},
 }
-```
+```