Update README.md
README.md CHANGED
@@ -35,8 +35,75 @@ dataset_info:
dataset_size: 99663
---

## What is in this dataset?

This dataset is composed of **manually created queries** and a **corpus of PDF files**. For each query, the relevant documents, pages, and text passages have been labeled.

## What is its purpose?

This dataset can be used to evaluate information retrieval strategies on PDF files. It was initially created to compare various PDF parsing and chunking tools for information retrieval in RAG applications.

## How to use?

```py
data = datasets.load_dataset("Wikit/retrieval-pdf-acl2025")
```

Additionally, **you may want to download the PDF files**, available as a *.zip* archive in the files of this repo.
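For instance, a minimal sketch of fetching and extracting the archive with ``huggingface_hub`` (the archive filename below is a placeholder, not necessarily the actual name; check the files of this repo):

```py
import zipfile

from huggingface_hub import hf_hub_download

# NOTE: "documents.zip" is a placeholder name; use the actual archive listed in the repo files.
archive_path = hf_hub_download(
    repo_id="Wikit/retrieval-pdf-acl2025",
    filename="documents.zip",
    repo_type="dataset",
)

# Extract the PDF corpus to a local folder
with zipfile.ZipFile(archive_path) as archive:
    archive.extractall("pdf_corpus")
```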
## Cite

*The work that led to the creation of this dataset is currently under submission to a conference.*

## Details about dataset content and usage

### Dataset schema and splits

The dataset is composed of two splits: ``chunks.single`` and ``chunks.multi``:
- *chunks.single*: each query can be answered with a single passage of the PDF corpus
- *chunks.multi*: each query needs multiple passages, sometimes from different pages, to be answered

Both splits have the same schema:
- **query** (*str*): the query
- **source_file** (*list[str]*): the list of files the relevant passages come from.
- **target_pages** (*list[list[int]]*): for each source file, the list of pages the relevant passages come from. NOTE: page numbers are 0-based (the first page of a PDF file is page 0)!
- **target_passages** (*list[list[str]]*): for each source file, the list of passages (copy-pasted from the PDF file) that answer the query.

Example:

Consider a sample of the dataset such as:
```py
{
    "query": "dummy query ?",
    "source_file": ["doc_A.pdf", "doc_B.pdf"],
    "target_pages": [[1], [2, 3]],
    "target_passages": [["passage"], ["other passage", "other other passage"]]
}
```
This means that *``"dummy query ?"``* is answered by *"passage"*, which is on page ``1`` of ``doc_A.pdf``, by *"other passage"*, which is on page ``2`` of ``doc_B.pdf``, and by *"other other passage"*, which is on page ``3`` of ``doc_B.pdf``.
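As a rough illustration of how to iterate over a sample following this schema (a sketch, assuming the splits are exposed under the names given above):

```py
import datasets

data = datasets.load_dataset("Wikit/retrieval-pdf-acl2025")

sample = data["chunks.multi"][0]
print(sample["query"])

# Each source file comes with its own list of target pages and target passages.
for source_file, pages, passages in zip(
    sample["source_file"], sample["target_pages"], sample["target_passages"]
):
    print(source_file, pages, passages)
```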
### Usage of the dataset

The provided labeled queries allow you to evaluate information retrieval over the parsed and chunked PDF files:
- By checking the source document and source page of a given chunk, you can determine whether it is likely to be a relevant chunk for the query.
- To confirm that the chunk contains the target text passage, you might want to compute a unigram ROUGE score between the normalized chunk and the target passage.

Regarding the confirmation that the chunk contains the target passage, this might do the trick:

```py
import re

from unidecode import unidecode


def chunk_contains_passage(chunk_text: str, target_passage: str, rouge_threshold: float = .7) -> bool:
    """
    Returns True if the text of a target passage is in the chunk.
    (rouge_threshold value may be adjusted but values between 0.7 and 0.9 give pretty reliable results).
    """
    # Normalize: lowercase and strip accents so minor formatting differences do not matter.
    chunk_text = unidecode(chunk_text.lower())
    target_passage = unidecode(target_passage.lower())

    # Unigram recall: fraction of passage words that appear in the chunk.
    target_passage_words = re.findall(r"\w+", target_passage)
    rouge_score = len([word for word in target_passage_words if word in chunk_text]) / len(target_passage_words)

    return rouge_score >= rouge_threshold
```
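Putting it together, one possible way to score a retrieval pipeline against these labels (a sketch only: ``retrieve`` stands in for your own parsing/chunking/retrieval stack, and each returned chunk is assumed to expose its text, source file, and 0-based pages):

```py
def passage_recall_at_k(split, retrieve, k: int = 5) -> float:
    """Fraction of labeled passages found in the top-k retrieved chunks."""
    found, total = 0, 0
    for sample in split:
        chunks = retrieve(sample["query"])[:k]  # hypothetical retrieval function
        for source_file, pages, passages in zip(
            sample["source_file"], sample["target_pages"], sample["target_passages"]
        ):
            # In the labels, the pages and passages of a given source file are paired index-wise.
            for page, passage in zip(pages, passages):
                total += 1
                found += any(
                    chunk["file"] == source_file          # assumed chunk metadata
                    and page in chunk["pages"]            # assumed 0-based page list
                    and chunk_contains_passage(chunk["text"], passage)
                    for chunk in chunks
                )
    return found / total

# e.g. passage_recall_at_k(data["chunks.single"], retrieve=my_retriever, k=5)
```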