---
dataset_info:
  features:
  - name: input_text
    dtype: string
  - name: output_text
    dtype: string
  splits:
  - name: train
    num_bytes: 300443
    num_examples: 2629
  download_size: 198694
  dataset_size: 300443
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- token-classification
language:
- en
---
# sample-no-overfit

A short-story dataset where each input is a non-overlapping context of 20 tokens, and the output is the same 20 tokens shifted forward by one position. Consecutive examples share no tokens, reducing the risk of overfitting to repeated text segments.
## Dataset Overview

- Name: `sample-no-overfit`
- Context Size (`context_size`): 20
- Stride/Step: after one batch of 20 tokens, we move on to the next 20 tokens (no overlap).
## Why No Overlap?

Typical language-modeling pipelines often overlap consecutive windows to obtain more training samples, but this can lead to the model seeing the same context repeatedly. Here, each batch is distinct and shares no tokens with the previous one. This helps reduce overfitting and ensures more variety in each batch.
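The windowing described above can be sketched as follows. This is not the script used to build the dataset, just a minimal illustration of non-overlapping windows with targets shifted by one position (`make_pairs` and `context_size=20` follow the card's stated parameters):

```python
def make_pairs(tokens, context_size=20):
    """Build (input, output) pairs from a token list.

    Inputs step by `context_size`, so consecutive inputs share no
    tokens; each output is the input window shifted by one position.
    """
    pairs = []
    for start in range(0, len(tokens) - context_size, context_size):
        inp = tokens[start : start + context_size]
        out = tokens[start + 1 : start + 1 + context_size]  # shift by one
        pairs.append((inp, out))
    return pairs

# Stand-in integers instead of real story tokens.
pairs = make_pairs(list(range(100)), context_size=20)
print(pairs[0][0])     # tokens 0..19
print(pairs[0][1])     # tokens 1..20 (shifted by one)
print(pairs[1][0][0])  # next input starts at token 20: no overlap
```

Note that the shift-by-one target still peeks one token past the input window, so only the *inputs* are strictly disjoint between batches.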
## Data Format

Each row in the dataset contains:

- `input_text`: a 20-token sequence from the short story.
- `output_text`: the next 20 tokens, shifted by one position.
**Example Row:**

```json
{
  "input_text": "t huis, waar deze eerlooze schurk, Michael Popow",
  "output_text": "huis, waar deze eerlooze schurk, Michael Popowitch"
}
```
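The one-position shift in the example row can be checked directly. A small sketch, splitting on whitespace purely for illustration (the real tokenizer is subword-level, which is why the boundary pieces `t`/`Popow` differ from full words):

```python
import json

# The example row from the card.
row = json.loads("""{
  "input_text": "t huis, waar deze eerlooze schurk, Michael Popow",
  "output_text": "huis, waar deze eerlooze schurk, Michael Popowitch"
}""")

inp = row["input_text"].split()
out = row["output_text"].split()

# Dropping the boundary pieces, the middle of the input equals the
# head of the output: the output is the input shifted by one position.
print(inp[1:-1])
print(out[:-1])
```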