---
license: cc-by-sa-3.0
task_categories:
- text-generation
- text-classification
language:
- 'no'
pretty_name: WIKI Paragraphs Norwegian
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: validation
    path: validation.jsonl
  - split: test
    path: test.jsonl
  - split: validation1000
    path: validation1000.jsonl
  - split: test1000
    path: test1000.jsonl
  - split: validation100
    path: validation100.jsonl
  - split: test100
    path: test100.jsonl
  - split: pretrain
    path: pretrain.jsonl
  - split: reserve
    path: reserve.jsonl
version: 1.0.0
citation: >
  This dataset contains content from Wikipedia under CC BY-SA 3.0 license.
dataset_info:
  features:
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: paragraph_number
    dtype: int64
  - name: corrupt
    dtype: string
  - name: corrupt_level
    dtype: int64
  splits:
  - name: train
    num_examples: 1000000
  - name: validation
    num_examples: 10000
  - name: test
    num_examples: 10000
  - name: validation1000
    num_examples: 1000
  - name: test1000
    num_examples: 1000
  - name: validation100
    num_examples: 100
  - name: test100
    num_examples: 100
  - name: pretrain
    num_examples: 10000
  - name: reserve
    num_examples: 100000
---
# WIKI Paragraphs Norwegian
A multi-split dataset of Norwegian Wikipedia paragraphs for machine learning research and evaluation, distributed in JSON Lines format.
## Features
- **Multiple splits** for different use cases
- **Random shuffle** of all records with the Fisher-Yates algorithm (see the sketch after this list)
- **Structured format** with text and metadata
- **Size-varied validation/test sets** (100 to 10k samples)
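The preprocessing script is not included in this card, so the following is only a minimal sketch of a Fisher-Yates shuffle as it might have been applied to the paragraph records before splitting; the function name, seed, and example data are illustrative.

```python
import random

def fisher_yates_shuffle(records, seed=42):
    """Shuffle `records` in place with the Fisher-Yates algorithm.

    Walks the list from the end and swaps each element with a uniformly
    chosen element at or before its current position.
    """
    rng = random.Random(seed)
    for i in range(len(records) - 1, 0, -1):
        j = rng.randint(0, i)  # inclusive bounds: 0 <= j <= i
        records[i], records[j] = records[j], records[i]
    return records

# Example: shuffle a list of paragraph dicts before carving out splits.
paragraphs = [{"text": f"paragraph {n}"} for n in range(10)]
fisher_yates_shuffle(paragraphs)
```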
## Splits Overview
| Split Name | Samples | Typical Usage |
|---------------------|--------:|------------------------|
| `train` | 1,000,000 | Primary training data |
| `validation` | 10,000 | Standard validation |
| `test` | 10,000 | Final evaluation |
| `validation1000` | 1,000 | Quick validation |
| `test1000` | 1,000 | Rapid testing |
| `validation100` | 100 | Debugging/development |
| `test100` | 100 | Small-scale checks |
| `pretrain` | 10,000 | Pre-training phase |
| `reserve`           | 100,000 | Reserved for special tasks |
**Total Samples:** 1,132,200
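To confirm that all of the splits listed above resolve as expected, the repository can be loaded as a single `DatasetDict`; the repo id below is the same placeholder used in the Usage section further down.

```python
from datasets import load_dataset

# Without a `split` argument, load_dataset returns a DatasetDict
# containing every split configured in the card above.
dataset = load_dataset("your-username/dataset-name")

for split_name, split in dataset.items():
    print(f"{split_name}: {len(split):>9} examples")
```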
## License
**Creative Commons Attribution-ShareAlike 3.0**
[CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/)
This dataset inherits Wikipedia's licensing terms:
- **Attribution Required**
- **ShareAlike Mandatory**
- **Commercial Use Allowed**
## Usage
```python
from datasets import load_dataset

# Load the main training split
dataset = load_dataset("your-username/dataset-name", split="train")

# Access one of the smaller validation splits
val_100 = load_dataset("your-username/dataset-name", split="validation100")
```
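With a million training rows it can be convenient to stream the data rather than download it in full; the sketch below uses the standard `streaming=True` option of `datasets`, with the repo id again a placeholder.

```python
from datasets import load_dataset

# Iterate over the training split without downloading the whole file first.
stream = load_dataset("your-username/dataset-name", split="train", streaming=True)

for i, example in enumerate(stream):
    print(example["paragraph_number"], example["text"][:80])
    if i == 2:  # peek at the first three records only
        break
```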
## Data Structure
Each line is a JSON object whose fields match the schema declared in the metadata above (the values shown here are illustrative placeholders):
```json
{
  "text": "Full text content...",
  "url": "https://...",
  "paragraph_number": 1,
  "corrupt": "Corrupted variant of the text...",
  "corrupt_level": 2
}
```
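For a quick check of this structure without the `datasets` library, a downloaded `.jsonl` file can also be read line by line with the standard library; the file name below matches the paths listed in the configs.

```python
import json

# Inspect the first record of a locally downloaded split.
with open("validation100.jsonl", encoding="utf-8") as f:
    first = json.loads(next(f))

print(sorted(first.keys()))  # expected: corrupt, corrupt_level, paragraph_number, text, url
print(first["text"][:80])
```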
## Notes
- All splits can be loaded with `load_dataset(repo_id, split="<split_name>")`.
- The non-standard splits (e.g., `reserve`) also need an explicit split argument, e.g. `split="reserve"`.
- When using this dataset, include the attribution: "Contains content from Wikipedia under CC BY-SA 3.0".