---
language:
- en
splits:
- name: test
configs:
- config_name: default
  data_files:
  - split: chunk.single
    path: data/chunk.single-*
  - split: chunk.multi
    path: data/chunk.multi-*
dataset_info:
  features:
  - name: source_file
    sequence: string
  - name: query
    dtype: string
  - name: target_pages
    sequence:
      sequence: int64
  - name: target_passages
    sequence:
      sequence: string
  - name: annotator
    dtype: int64
  splits:
  - name: chunk.single
    num_bytes: 83909
    num_examples: 300
  - name: chunk.multi
    num_bytes: 15754
    num_examples: 32
  download_size: 65191
  dataset_size: 99663
---
What is PIRE?
PIRE stands for PDF Information Retrieval Evaluation.
This dataset is composed of manually created queries and a corpus of PDF files. For each query, the relevant documents, pages, and text passages have been labeled.
What is its purpose?
This dataset can be used for evaluation of information retrieval strategies on PDF files. It was initially created in order to compare various PDF parsing and chunking tools for information retrieval in RAG applications.
How to use?
import datasets

data = datasets.load_dataset("Wikit/retrieval-pdf-acl2025")
Additionally, you may want to download the PDF files, available as a .zip archive in the files of this repo.
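For example, the two splits can then be accessed by name, and the PDF archive fetched with huggingface_hub. This is a minimal sketch: the archive filename used below is an assumption, so check the repository files for the exact name.
from huggingface_hub import hf_hub_download

print(data["chunk.single"][0])  # one single-passage query
print(data["chunk.multi"][0])   # one multi-passage query

# Download the .zip archive of PDF files from the dataset repo.
# NOTE: the filename "pdfs.zip" is hypothetical; replace it with the actual archive name.
zip_path = hf_hub_download(
    repo_id="Wikit/retrieval-pdf-acl2025",
    filename="pdfs.zip",
    repo_type="dataset",
)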
Cite
The work that led to the creation of this dataset is currently under submission to a conference.
Details about dataset content and usage
Dataset schema and splits
The dataset is composed of two splits, chunk.single and chunk.multi:
- chunk.single: each query can be answered with a single passage from the PDF corpus
- chunk.multi: each query needs multiple passages, sometimes from different pages, to be answered
Both splits have the same schema:
- query (str): the query
- source_file (list[str]): the list of files in which the relevant passages are found.
- target_pages (list[list[int]]): for each source file, the list of pages on which the relevant passages are found. NOTE: page numbers are 0-based (the first page of a PDF file is page 0)!
- target_passages (list[list[str]]): for each source file, the list of passages (copy-pasted from the PDF file) that answer the query.
Example:
Consider a sample of the dataset such as:
{
    "query": "dummy query ?",
    "source_file": ["doc_A.pdf", "doc_B.pdf"],
    "target_pages": [[1], [2, 3]],
    "target_passages": [["passage"], ["other passage", "other other passage"]]
}
That means that "dummy query ?" would be answered by "passage", which is on page 1 of doc_A.pdf, by "other passage", which is on page 2 of doc_B.pdf, and by "other other passage", which is on page 3 of doc_B.pdf.
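Continuing with the data object loaded above, a small sketch like the following shows how the parallel fields line up for one example; it assumes the inner page and passage lists are aligned one-to-one, as in the example above.
sample = data["chunk.multi"][0]

print(sample["query"])
for file, pages, passages in zip(sample["source_file"], sample["target_pages"], sample["target_passages"]):
    # pages and passages are aligned per source file; page numbers are 0-based
    for page, passage in zip(pages, passages):
        print(f"{file} (page {page}): {passage}")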
Usage of the dataset
The provided labeled queries allow you to evaluate information retrieval over the parsed and chunked PDF files:
- By checking the source document and source page of a given chunk, you can determine whether it is likely to be a relevant chunk for the query (see the sketch after this list).
- To confirm that the chunk contains the target text passage, you might want to compute a unigram ROUGE score between the normalized chunk and the target passage.
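For the first check, a hedged sketch might look like the following; chunk_file and chunk_page stand for whatever metadata your own chunking pipeline attaches to a chunk (the names are hypothetical), while sample is one row of this dataset.
def chunk_matches_target(chunk_file: str, chunk_page: int, sample: dict) -> bool:
    """
    Returns True if the chunk comes from a (file, page) pair labeled as relevant for the query.
    Page numbers are 0-based on both sides.
    """
    for file, pages in zip(sample["source_file"], sample["target_pages"]):
        if chunk_file == file and chunk_page in pages:
            return True
    return False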
Regarding the confirmation that the chunk contains the target passage, this might do the trick:
import re

from unidecode import unidecode


def chunk_contains_passage(chunk_text: str, target_passage: str, rouge_threshold: float = 0.7) -> bool:
    """
    Returns True if the text of a target passage is in the chunk.
    (rouge_threshold may be adjusted, but values between 0.7 and 0.9 give pretty reliable results.)
    """
    # Normalize both texts: lowercase and strip accents
    chunk_text = unidecode(chunk_text.lower())
    target_passage = unidecode(target_passage.lower())
    # Unigram recall of the target passage against the chunk
    target_passage_words = re.findall(r"\w+", target_passage)
    rouge_score = len([word for word in target_passage_words if word in chunk_text]) / len(target_passage_words)
    return rouge_score >= rouge_threshold
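Put together with the page-level check sketched above, a retrieved chunk can then be counted as a hit roughly like this; the retrieved_chunk dictionary and its keys are hypothetical placeholders for whatever your retriever returns.
sample = data["chunk.single"][0]
retrieved_chunk = {"text": "...", "source_file": "doc_A.pdf", "page": 1}  # hypothetical retriever output

is_hit = chunk_matches_target(retrieved_chunk["source_file"], retrieved_chunk["page"], sample) and any(
    chunk_contains_passage(retrieved_chunk["text"], passage)
    for passages in sample["target_passages"]
    for passage in passages
)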
Licence
CC BY-SA