| depth | width | tokens | FLOPs_per_token | FLOPs | params | params_with_embeds | FLOPs_6N | params_pred_loss | wd_ratio | wd_pred_loss | bucket |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 14 | 509 | 6,480,719,826 | 635,825,239.5 | 4,120,605,235,498,848,000 | 54,421,771 | 102,046,865 | 2,116,153,501,714,391,000 | 3.913806 | 36.357143 | 3.801099 | (1e+18, 4.1246263829013647e+18] |
| 13 | 897 | 11,591,196,185 | 1,467,007,395.75 | 17,004,370,528,984,185,000 | 156,922,974 | 245,640,759 | 10,913,549,865,405,925,000 | 3.400098 | 69 | 3.353341 | (4.1246263829013647e+18, 1.7012542798525858e+19] |
| 15 | 1,367 | 20,731,621,274 | 3,383,504,418.75 | 70,145,532,188,430,510,000 | 420,497,402 | 552,676,733 | 52,305,557,309,789,580,000 | 3.053754 | 91.133333 | 3.037555 | (1.7012542798525858e+19, 7.017038286703836e+19] |
| 11 | 2,576 | 36,317,859,190 | 7,966,275,492 | 289,318,071,587,203,970,000 | 1,094,962,288 | 1,352,768,368 | 238,600,117,163,665,360,000 | 2.814876 | 234.181818 | 2.836559 | (7.017038286703836e+19, 2.89426612471674e+20] |
| 15 | 3,579 | 61,033,902,880 | 19,546,104,543.75 | 1,192,975,046,405,564,100,000 | 2,882,190,174 | 3,215,584,761 | 1,055,467,890,969,637,800,000 | 2.646707 | 238.6 | 2.658221 | (2.89426612471674e+20, 1.1937766417144357e+21] |
| 14 | 5,935 | 102,570,398,801 | 47,989,967,842.5 | 4,922,350,140,052,391,000,000 | 7,397,259,365 | 7,982,652,155 | 4,552,439,058,614,892,000,000 | 2.52674 | 423.928571 | 2.545147 | (1.1937766417144357e+21, 4.923882631706752e+21] |
| 14 | 9,430 | 172,374,470,806 | 117,777,168,465 | 20,301,777,087,183,486,000,000 | 18,674,502,470 | 19,572,276,190 | 19,314,044,884,989,540,000,000 | 2.440307 | 673.571429 | 2.457013 | (4.923882631706752e+21, 2.0309176209047305e+22] |
| 18 | 13,427 | 277,899,506,501 | 301,395,714,589.5 | 83,757,720,345,938,300,000,000 | 48,677,265,629 | 49,944,747,575 | 81,164,328,576,703,140,000,000 | 2.377552 | 745.944444 | 2.371825 | (2.0309176209047305e+22, 8.376776400682923e+22] |
| 15 | 23,661 | 448,025,367,982 | 770,686,831,406.25 | 345,287,251,239,666,740,000,000 | 125,965,390,716 | 128,229,819,399 | 338,614,143,171,193,840,000,000 | 2.331793 | 1,577.4 | 2.333569 | (8.376776400682923e+22, 3.455107294592233e+23] |
| 22 | 31,698 | 707,455,942,281 | 2,014,240,265,127 | 1,424,986,244,745,753,000,000,000 | 331,573,283,730 | 334,501,544,970 | 1,407,440,939,258,475,000,000,000 | 2.298312 | 1,440.818182 | 2.265183 | (3.455107294592233e+23, 1.4251026703029964e+24] |
| 26 | 46,246 | 1,164,480,618,372 | 5,044,778,877,687 | 5,874,547,227,038,962,000,000,000 | 834,092,532,278 | 838,454,270,014 | 5,827,707,526,599,317,000,000,000 | 2.27378 | 1,778.692308 | 2.222053 | (1.4251026703029964e+24, 5.878016072274924e+24] |
| 32 | 67,471 | 1,838,776,980,082 | 13,178,281,143,939 | 24,231,920,004,523,720,000,000,000 | 2,185,125,589,295 | 2,191,540,192,207 | 24,107,751,793,102,565,000,000,000 | 2.255774 | 2,108.46875 | 2.183951 | (5.878016072274924e+24, 2.4244620170823405e+25] |
| 47 | 88,213 | 3,026,648,059,395 | 33,033,126,983,768.25 | 99,979,649,681,170,790,000,000,000 | 5,485,989,405,380 | 5,494,072,450,783 | 99,624,955,125,929,450,000,000,000 | 2.242549 | 1,876.87234 | 2.143538 | (2.4244620170823405e+25, 1e+26] |
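As a reading aid (my interpretation, not stated in the card): the `FLOPs_6N` column appears to be the standard C ≈ 6ND compute approximation, i.e. 6 × non-embedding `params` × `tokens`. A quick sanity check against the first row of the preview:

```python
# Sanity check: does FLOPs_6N match 6 * params * tokens?
# Values taken from the first row of the preview table above.
params = 54_421_771           # non-embedding parameter count
tokens = 6_480_719_826        # training tokens
flops_6n = 2_116_153_501_714_391_000  # FLOPs_6N as stored (float64-rounded)

approx = 6 * params * tokens
# Agrees to within float64 precision of the stored value.
assert abs(approx - flops_6n) / flops_6n < 1e-9
```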
This dataset is my cache for the scaling laws related to the Gemstone models. The `data_cache` directory holds the approach 3 data cache with the mins for delta=1e-4; the mins for delta=1e-3 are in `mins_1e-3`.
This is the code I used to upload it:
```python
import gc
import os

import pandas as pd
from datasets import Dataset


def get_data_dict(path):
    contents = os.listdir(path)
    ds_store = {}
    for i, file in enumerate(contents):
        gc.collect()
        df = pd.read_parquet(f"{path}{file}")
        # Interval-typed columns cannot be serialized, so cast them to strings.
        for col in df.columns:
            if pd.api.types.is_interval_dtype(df[col]):
                df[col] = df[col].astype(str)
        hf_dataset = Dataset.from_pandas(df)
        ds_store[file.replace(".parquet", "")] = hf_dataset
        hf_dataset.push_to_hub(
            "smcleish/scaling-laws-cache",
            private=True,
            data_dir=path.split("/")[1] + "/" + file.replace(".parquet", ""),
        )
        gc.collect()
    return ds_store


ds_1 = get_data_dict("plotters/data_cache/")
ds_2 = get_data_dict("plotters/mins_1e-3/")
```
To download it, do the opposite of this. The cache is very large, so you may want to target only the specific files you need. The approach 3 code expects pandas `.parquet` files.
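Because the upload step casts interval-typed columns (such as `bucket`) to strings, you will likely want to convert them back after downloading. A minimal sketch, assuming the strings follow the default pandas form shown in the preview, e.g. `(1e+18, 4.1246263829013647e+18]` (the helper name is mine, not part of the cache):

```python
import re

import pandas as pd


def parse_interval(s: str) -> pd.Interval:
    """Parse a string like '(1e+18, 4.12e+18]' back into a pandas Interval."""
    m = re.match(r"^([\(\[])\s*([^,]+),\s*([^\)\]]+)([\)\]])$", s)
    if m is None:
        raise ValueError(f"not an interval string: {s!r}")
    lo, hi = float(m.group(2)), float(m.group(3))
    # Map the bracket pair back to pandas' `closed` argument.
    closed = {"()": "neither", "(]": "right", "[)": "left", "[]": "both"}[
        m.group(1) + m.group(4)
    ]
    return pd.Interval(lo, hi, closed=closed)


bucket = parse_interval("(1e+18, 4.1246263829013647e+18]")
```

Applied column-wise (`df["bucket"].map(parse_interval)`), this restores the interval dtype the approach 3 code was working with before upload.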
Please open a discussion with any questions, as this is all currently very experimental.