---
license: cc-by-sa-3.0
task_categories:
- text-generation
- text-classification
language:
- 'no'
pretty_name: WIKI Paragraphs Norwegian
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: validation
    path: validation.jsonl
  - split: test
    path: test.jsonl
  - split: validation1000
    path: validation1000.jsonl
  - split: test1000
    path: test1000.jsonl
  - split: validation100
    path: validation100.jsonl
  - split: test100
    path: test100.jsonl
  - split: pretrain
    path: pretrain.jsonl
  - split: reserve
    path: reserve.jsonl
version: 1.0.0
citation: >
  This dataset contains content from Wikipedia under the CC BY-SA 3.0 license.
dataset_info:
  features:
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: paragraph_number
    dtype: int64
  - name: corrupt
    dtype: string
  - name: corrupt_level
    dtype: int64
  splits:
  - name: train
    num_examples: 1000000
  - name: validation
    num_examples: 10000
  - name: test
    num_examples: 10000
  - name: validation1000
    num_examples: 1000
  - name: test1000
    num_examples: 1000
  - name: validation100
    num_examples: 100
  - name: test100
    num_examples: 100
  - name: pretrain
    num_examples: 10000
  - name: reserve
    num_examples: 100000
---
# WIKI Paragraphs Norwegian

A multi-split dataset for machine-learning research and evaluation, containing text samples in JSON Lines format.

## Features
- **Multiple splits** for different use cases
- **Random shuffle** with the Fisher-Yates algorithm
- **Structured format** with text and metadata
- **Size-varied validation/test sets** (100 to 10,000 samples)

## Splits Overview

| Split Name       |   Samples | Typical Usage         |
|------------------|----------:|-----------------------|
| `train`          | 1,000,000 | Primary training data |
| `validation`     |    10,000 | Standard validation   |
| `test`           |    10,000 | Final evaluation      |
| `validation1000` |     1,000 | Quick validation      |
| `test1000`       |     1,000 | Rapid testing         |
| `validation100`  |       100 | Debugging/development |
| `test100`        |       100 | Small-scale checks    |
| `pretrain`       |    10,000 | Pre-training phase    |
| `reserve`        |   100,000 | Special tasks         |

**Total samples:** 1,132,200

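The stated total is simply the sum of the table's rows; a quick sanity check (split names and counts taken from the table above):

```python
# Per-split sample counts as listed in the table above.
split_sizes = {
    "train": 1_000_000,
    "validation": 10_000,
    "test": 10_000,
    "validation1000": 1_000,
    "test1000": 1_000,
    "validation100": 100,
    "test100": 100,
    "pretrain": 10_000,
    "reserve": 100_000,
}

total = sum(split_sizes.values())
print(f"{total:,}")  # → 1,132,200
```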
## License
**Creative Commons Attribution-ShareAlike 3.0**
[![CC BY-SA 3.0](https://licensebuttons.net/l/by-sa/3.0/88x31.png)](https://creativecommons.org/licenses/by-sa/3.0/)

This dataset inherits Wikipedia's licensing terms:
- **Attribution required**
- **ShareAlike mandatory**
- **Commercial use allowed**

## Usage
```python
from datasets import load_dataset

# Load the main training split
dataset = load_dataset("your-username/dataset-name", split="train")

# Load the smaller validation split
val_100 = load_dataset("your-username/dataset-name", split="validation100")
```

## Data Structure

Each line is a JSON object whose fields follow the schema declared in the dataset card above (the values here are illustrative):

```json
{
  "text": "Full paragraph text...",
  "url": "https://...",
  "paragraph_number": 1,
  "corrupt": "Corrupted variant of the paragraph...",
  "corrupt_level": 2
}
```
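Since each split is a plain JSON Lines file, it can also be read without the `datasets` library. A minimal sketch, assuming the flat schema above (the sample record is invented):

```python
import json

def read_jsonl(path):
    """Yield one dict per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for raw in f:
            raw = raw.strip()
            if raw:
                yield json.loads(raw)

# Parsing a single line, as it might appear in train.jsonl:
line = ('{"text": "Oslo er hovedstaden i Norge.", "url": "https://...", '
        '"paragraph_number": 1, "corrupt": "Oslo er hovedstad i Norge.", '
        '"corrupt_level": 1}')
record = json.loads(line)
print(record["paragraph_number"])  # → 1
```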

## Notes

- All splits are accessible via `load_dataset(repo_id, split=split_name)`.
- Non-standard splits (e.g., `reserve`) require an explicit split argument: `split="reserve"`.
- When using this dataset, include the attribution: "Contains content from Wikipedia under CC BY-SA 3.0".