    path: data/train-*
  - split: validation
    path: data/validation-*
license: mit
task_categories:
- question-answering
- text2text-generation
language:
- en
size_categories:
- 100K<n<1M
---

# Dataset Card

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)

## Dataset Description

The dataset contains simple, long-form answers to questions together with their corresponding contexts. It is similar to ELI5, but with context.

This dataset is a filtered version of [LLukas22/lfqa_preprocessed](https://huggingface.co/datasets/LLukas22/lfqa_preprocessed),
which in turn is a processed and simplified version of [vblagoje's](https://huggingface.co/vblagoje) *[lfqa_support_docs](https://huggingface.co/datasets/vblagoje/lfqa_support_docs)* and *[lfqa](https://huggingface.co/datasets/vblagoje/lfqa)* datasets.

I filtered out overly long answers based on the number of tokens in the answer, counted with the LED tokenizer.
The filtering can be reproduced with the notebook `process-lfqa-dataset.ipynb`.

LLukas22/lfqa_preprocessed | stefanbschneider/lfqa-max-answer-length-1024
:-------------------------:|:-------------------------:
Max answer length: 5964 tokens | Max answer length: 1024 tokens (~6x shorter)
Num answers (train): 226147 | Num answers (train): 218894 (~3% less)

Details of the original LFQA dataset: [https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb)

## Dataset Structure

### Data Instances

An example of 'train' looks as follows.

```json
{
    "question": "what's the difference between a forest and a wood?",
    "answer": "They're used interchangeably a lot. You'll get different answers from different resources, but the ...",
    "context": [
        "Wood is divided, according to its botanical origin, into two kinds: softwoods, ...",
        "Processing and products differs especially with regard to the distinction between softwood and hardwood ..."
    ]
}
```

### Data Fields

The data fields are the same among all splits.

- `question`: a `string` feature.
- `answer`: a `string` feature.
- `context`: a list feature containing `string` features.

## Additional Information

### Licensing Information

This dataset is distributed under the MIT license.
|