---
license: mit
configs:
  - config_name: meeting-qa
    data_files:
      - split: train
        path: meeting/train.jsonl
      - split: validation
        path: meeting/dev.jsonl
      - split: test
        path: meeting/test.jsonl
  - config_name: story-qa
    data_files:
      - split: train
        path: story/train.jsonl
      - split: validation
        path: story/dev.jsonl
      - split: test
        path: story/test.jsonl
  - config_name: meeting-corpus
    data_files:
      - split: corpus
        path: meeting/corpus.jsonl
  - config_name: story-corpus
    data_files:
      - split: corpus
        path: story/corpus.jsonl
---

# MSRS: Evaluating Multi-Source Retrieval-Augmented Generation

📄 Paper | 💻 Code

This paper introduces a scalable framework for constructing evaluation benchmarks that challenge RAG systems to integrate information across distinct sources and generate long-form responses. Using this framework, we build two new benchmarks for Multi-Source Retrieval and Synthesis: MSRS-Story and MSRS-Meet.

## 🚀 Quickstart

Load the corpora for MSRS-Story and MSRS-Meet:

```python
from datasets import load_dataset

story_corpus = load_dataset("yale-nlp/MSRS", "story-corpus", split="corpus")
meeting_corpus = load_dataset("yale-nlp/MSRS", "meeting-corpus", split="corpus")
```

Corpus Dataset Example:

```
{
    "id": ...,   // Unique ID for the document
    "text": ...  // Document text
}
```
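For multi-source tasks it is convenient to index the corpus by document `id`. A minimal sketch of that lookup (the records below are hypothetical placeholders, not actual MSRS data; in practice, iterate over the loaded `story_corpus` or `meeting_corpus` split):

```python
# Hypothetical corpus records following the schema above.
corpus_records = [
    {"id": "doc-001", "text": "First document text."},
    {"id": "doc-002", "text": "Second document text."},
]

# Build an id -> text lookup for fast retrieval by document ID.
doc_lookup = {record["id"]: record["text"] for record in corpus_records}

print(doc_lookup["doc-001"])  # -> First document text.
```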

Load the query-answer pairs for MSRS-Story and MSRS-Meet (available splits: train, test, and validation):

```python
from datasets import load_dataset

story_qa = load_dataset("yale-nlp/MSRS", "story-qa")
meeting_qa = load_dataset("yale-nlp/MSRS", "meeting-qa")
```

QA Dataset Example:

```
{
    "id": ...,              // Unique ID for the query
    "query": ...,           // Query text
    "gold_documents": ...,  // List of gold document IDs
    "answer": ...           // List of answer summaries
}
```
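The `gold_documents` field lists corpus document IDs, so each query's source texts can be recovered by resolving those IDs against the corpus. A hedged sketch of that join (field names follow the schema above; the record values themselves are invented for illustration):

```python
# Hypothetical id -> text corpus mapping (see the corpus schema).
corpus = {
    "doc-001": "First document text.",
    "doc-002": "Second document text.",
}

# Hypothetical QA example following the schema above.
qa_example = {
    "id": "query-001",
    "query": "What happens across the two documents?",
    "gold_documents": ["doc-001", "doc-002"],
    "answer": ["A reference summary."],
}

# Resolve gold document IDs to their texts.
gold_texts = [corpus[doc_id] for doc_id in qa_example["gold_documents"]]
print(len(gold_texts))  # -> 2
```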