Every config also has a test set (for validation) of 1% of the total size of the dataset.

Wikipedia and CulturaX were shuffled before merging, and the data was shuffled again when the test sets were created. Priority is given to Wikipedia to emphasize knowledge-heavy content, so the smaller configs consist exclusively of Wikipedia while the larger configs are augmented with CulturaX. Every config builds on the previous one, so each config contains all the data of the smaller configs plus more. HOWEVER, their train/test splits are not the same, so the test set of one config may overlap with the training samples of another. This is usually not a problem, but be aware that you should not train on one config's training set and evaluate on another config's test set.
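
To keep train and test consistent, load both splits from a single config by name. A minimal sketch with the 🤗 `datasets` library; the repo id below is a placeholder, and it assumes the config names match the section headings below and that the splits are named `train` and `test`:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual id of this dataset on the Hub.
REPO_ID = "BramVanroy/<this-dataset>"

# Load one config; train and test come from the SAME config, so the test set
# cannot overlap with the training data you use.
ds = load_dataset(REPO_ID, name="10M")  # assumes config names match the headings below

train_ds = ds["train"]
test_ds = ds["test"]
print(len(train_ds), len(test_ds))  # expected for the 10M config: 139,851 and 1,412
```
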
## Configs

### 10k -- 79 samples -- 10,087 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 10,087
- train_num_tokens: 9,205
- test_num_tokens: 882
- total_num_samples: 79
- train_num_samples: 78
- test_num_samples: 1

### 100k -- 1,057 samples -- 100,075 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 100,075
- train_num_tokens: 98,044
- test_num_tokens: 2,031
- total_num_samples: 1,057
- train_num_samples: 1,047
- test_num_samples: 10

### 1M -- 10,802 samples -- 1,000,239 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 1,000,239
- train_num_tokens: 991,119
- test_num_tokens: 9,120
- total_num_samples: 10,802
- train_num_samples: 10,694
- test_num_samples: 108

### 10M -- 141,263 samples -- 10,000,022 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 10,000,022
- train_num_tokens: 9,874,772
- test_num_tokens: 125,250
- total_num_samples: 141,263
- train_num_samples: 139,851
- test_num_samples: 1,412

### 100M -- 1,028,484 samples -- 100,000,047 tokens
- ratio_wikipedia: 100.00%
- total_num_tokens: 100,000,047
- train_num_tokens: 99,013,372
- test_num_tokens: 986,675
- total_num_samples: 1,028,484
- train_num_samples: 1,018,200
- test_num_samples: 10,284

### 1B -- 5,153,898 samples -- 1,000,000,187 tokens
- ratio_wikipedia: 61.21%
- total_num_tokens: 1,000,000,187
- train_num_tokens: 989,990,190
- test_num_tokens: 10,009,997
- total_num_samples: 5,153,898
- train_num_samples: 5,102,360
- test_num_samples: 51,538

### 5B -- 20,833,009 samples -- 5,000,000,076 tokens
- ratio_wikipedia: 25.35%
- total_num_tokens: 5,000,000,076
- train_num_tokens: 4,984,493,654
- test_num_tokens: 15,506,422
- total_num_samples: 20,833,009
- train_num_samples: 20,769,009
- test_num_samples: 64,000

### 10B -- 40,240,566 samples -- 10,000,000,115 tokens
- ratio_wikipedia: 18.41%
- total_num_tokens: 10,000,000,115
- train_num_tokens: 9,984,156,828
- test_num_tokens: 15,843,287
- total_num_samples: 40,240,566
- train_num_samples: 40,176,566
- test_num_samples: 64,000

### 15B -- 59,648,123 samples -- 15,000,000,154 tokens
- ratio_wikipedia: 15.98%
- total_num_tokens: 15,000,000,154
- train_num_tokens: 14,983,970,518
- test_num_tokens: 16,029,636
- total_num_samples: 59,648,123
- train_num_samples: 59,584,123
- test_num_samples: 64,000

### 20B -- 79,055,679 samples -- 20,000,000,009 tokens
- ratio_wikipedia: 14.75%
- total_num_tokens: 20,000,000,009
- train_num_tokens: 19,983,799,357
- test_num_tokens: 16,200,652
- total_num_samples: 79,055,679
- train_num_samples: 78,991,679
- test_num_samples: 64,000

### 25B -- 98,463,236 samples -- 25,000,000,048 tokens
- ratio_wikipedia: 14.00%
- total_num_tokens: 25,000,000,048
- train_num_tokens: 24,983,765,326
- test_num_tokens: 16,234,722
- total_num_samples: 98,463,236
- train_num_samples: 98,399,236
- test_num_samples: 64,000

### 30B -- 117,870,793 samples -- 30,000,000,087 tokens
- ratio_wikipedia: 13.50%
- total_num_tokens: 30,000,000,087
- train_num_tokens: 29,983,707,932
- test_num_tokens: 16,292,155
- total_num_samples: 117,870,793
- train_num_samples: 117,806,793
- test_num_samples: 64,000
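
As the figures above show, from the 5B config upward the test split is capped at 64,000 samples rather than growing with the 1% rule. Since the larger configs hold billions of tokens, it may be more practical to stream them instead of downloading everything up front. A minimal sketch, again with a placeholder repo id and assuming config names match the headings above:

```python
from datasets import load_dataset

# Stream the training split of a large config: samples are fetched lazily,
# so the multi-billion-token data never has to fit on disk all at once.
stream = load_dataset(
    "BramVanroy/<this-dataset>",  # placeholder repo id
    name="30B",                   # assumes config names match the headings above
    split="train",
    streaming=True,
)

for i, sample in enumerate(stream):
    print(sample)
    if i == 2:  # peek at the first three samples only
        break
```
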
 
## Filtering