Update README.md
README.md (changed)
````diff
@@ -32,12 +32,6 @@ A short-story dataset where **each input is a non-overlapping context of 20 tokens
 - **Name:** `sample-no-overfit`
 - **Context Size (`context_size`):** 20
 - **Stride/Step:** After one batch of 20 tokens, we move to the **next 20 tokens** (no overlap).
-- **Example**:
-- **Batch 1 (input)**: "IN the house of"
-- **Batch 1 (output)**: "the house of there"
-- **Batch 2 (input)**: "there lives a wolf,"
-- **Batch 2 (output)**: "lives a wolf, some"
-(Hypothetical example)
 
 ## Why No Overlap?
 Typical language modeling approaches may overlap consecutive batches to get more training samples, but this can lead to learning the same context repeatedly. Here, **each batch is distinct** and does **not share** tokens with the previous batch. This helps **reduce overfitting** and ensures **more variety** in each batch.
````
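For readers of the updated README, the chunking rule in this hunk (a fixed `context_size` of 20, a stride equal to the context size, and targets shifted right by one token) can be sketched in a few lines of Python. This is an illustrative sketch only: the whitespace tokenizer and the `make_rows` helper are assumptions, not the dataset's actual build code; only `context_size = 20` and the no-overlap stride come from the README itself.

```python
# Illustrative sketch of the non-overlapping chunking described in the README.
# Assumption: a plain whitespace tokenizer stands in for whatever tokenizer
# the real dataset uses.

def tokenize(text):
    return text.split()

def detokenize(tokens):
    return " ".join(tokens)

def make_rows(text, context_size=20, stride=None):
    """Yield {"input_text", "output_text"} rows.

    output_text is input_text shifted right by one token (next-token targets).
    stride == context_size gives the non-overlapping batches the README wants;
    a smaller stride would give the overlapping behaviour that the
    "Why No Overlap?" section argues against.
    """
    stride = stride or context_size
    tokens = tokenize(text)
    for start in range(0, len(tokens) - context_size, stride):
        input_ids = tokens[start : start + context_size]
        target_ids = tokens[start + 1 : start + context_size + 1]
        yield {
            "input_text": detokenize(input_ids),
            "output_text": detokenize(target_ids),
        }
```

With `context_size=4` and the text `"IN the house of there lives a wolf, some ..."`, the first two yielded rows reproduce the hypothetical batch example that this commit removes from the README.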
````diff
@@ -50,6 +44,6 @@ Each row in the dataset contains:
 **Example Row**:
 ```json
 {
-  "input_text": "
-  "output_text": "
+  "input_text": "t huis, waar deze eerlooze schurk, Michael Popow",
+  "output_text": "huis, waar deze eerlooze schurk, Michael Popowitch"
 }
````
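The example row added in this commit is Dutch source text (roughly: "…the house, where this dishonourable scoundrel, Michael Popowitch…"), with `output_text` again being `input_text` shifted by one token. As a hedged sketch of how a consumer might read rows with this schema, assuming they are stored one JSON object per line in a file such as `train.jsonl` (the file layout is not shown in this diff):

```python
import json

def load_rows(path="train.jsonl"):
    """Yield (input_text, output_text) pairs from a JSON Lines file.

    Assumption: one {"input_text": ..., "output_text": ...} object per line;
    only the two field names come from the README's example row.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            row = json.loads(line)
            yield row["input_text"], row["output_text"]

if __name__ == "__main__":
    for input_text, output_text in load_rows():
        print(input_text, "->", output_text)
        break
```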