Update README.md

README.md (CHANGED)

@@ -21,4 +21,35 @@ task_categories:
- token-classification
language:
- en
---

# sample-no-overfit

A short-story dataset where **each input is a non-overlapping context of 20 tokens** and the **output** is **that same window shifted one token to the right**. Consecutive inputs share **no tokens**, which reduces the risk of overfitting to the same text segments.

## Dataset Overview

- **Name:** `sample-no-overfit`
- **Context Size (`context_size`):** 20
- **Stride/Step:** After one window of 20 tokens, the next input starts at the **next 20 tokens** (no overlap); see the sketch after this list.
- **Example** (hypothetical; shortened here for readability):
  - **Batch 1 (input)**: `"IN the house of"`
  - **Batch 1 (output)**: `"the house of there"`
  - **Batch 2 (input)**: `"there lives a wolf,"`
  - **Batch 2 (output)**: `"lives a wolf, some"`
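
The stride logic above is easy to reproduce. The following is a minimal sketch, **not** the dataset's actual build script: `make_pairs`, the whitespace tokenization, and the default `context_size=20` are illustrative assumptions.

```python
def make_pairs(text: str, context_size: int = 20):
    """Split a story into non-overlapping input windows with shift-by-one targets."""
    tokens = text.split()  # assumption: simple whitespace tokenization
    pairs = []
    # Advance by context_size (not by 1), so consecutive inputs share no tokens.
    for start in range(0, len(tokens) - context_size, context_size):
        pairs.append({
            "input_text": " ".join(tokens[start : start + context_size]),
            "output_text": " ".join(tokens[start + 1 : start + context_size + 1]),
        })
    return pairs
```

Because the step equals the window size, each story token appears in exactly one input window.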

## Why No Overlap?

Typical language modeling pipelines overlap consecutive windows to get more training samples, but this can lead to the model seeing the same context repeatedly. Here, **each input window is distinct** and **shares no tokens** with the previous one. This helps **reduce overfitting** and ensures **more variety** in each batch.

## Data Format

Each row in the dataset contains:

- **`input_text`**: A 20-token sequence from the short story.
- **`output_text`**: The same sequence shifted **one position to the right**, i.e., the next-token targets for `input_text`.

**Example Row**:

```json
{
  "input_text": "IN the house of",
  "output_text": "the house of there"
}
```
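
Since this is a Hub dataset card, rows can be read with the `datasets` library. The repository namespace and split name below are placeholders, as they are not stated here:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual namespace hosting sample-no-overfit.
ds = load_dataset("your-username/sample-no-overfit", split="train")

example = ds[0]
print(example["input_text"])   # e.g. "IN the house of"
print(example["output_text"])  # e.g. "the house of there"
```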