rdiehlmartinez committed
Commit 32c7549
1 Parent(s): ce5933e

pico-decoder-tiny-1 trained to 125k steps

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the complete change set.

Files changed (50)
  1. README.md +57 -0
  2. config.json +22 -0
  3. eval_results/step_0.json +1 -0
  4. eval_results/step_1000.json +1 -0
  5. eval_results/step_10000.json +1 -0
  6. eval_results/step_100000.json +1 -0
  7. eval_results/step_101000.json +1 -0
  8. eval_results/step_102000.json +1 -0
  9. eval_results/step_103000.json +1 -0
  10. eval_results/step_104000.json +1 -0
  11. eval_results/step_105000.json +1 -0
  12. eval_results/step_106000.json +1 -0
  13. eval_results/step_107000.json +1 -0
  14. eval_results/step_108000.json +1 -0
  15. eval_results/step_109000.json +1 -0
  16. eval_results/step_11000.json +1 -0
  17. eval_results/step_110000.json +1 -0
  18. eval_results/step_111000.json +1 -0
  19. eval_results/step_112000.json +1 -0
  20. eval_results/step_113000.json +1 -0
  21. eval_results/step_114000.json +1 -0
  22. eval_results/step_115000.json +1 -0
  23. eval_results/step_116000.json +1 -0
  24. eval_results/step_117000.json +1 -0
  25. eval_results/step_118000.json +1 -0
  26. eval_results/step_119000.json +1 -0
  27. eval_results/step_12000.json +1 -0
  28. eval_results/step_120000.json +1 -0
  29. eval_results/step_121000.json +1 -0
  30. eval_results/step_122000.json +1 -0
  31. eval_results/step_123000.json +1 -0
  32. eval_results/step_124000.json +1 -0
  33. eval_results/step_125000.json +1 -0
  34. eval_results/step_13000.json +1 -0
  35. eval_results/step_14000.json +1 -0
  36. eval_results/step_15000.json +1 -0
  37. eval_results/step_16000.json +1 -0
  38. eval_results/step_17000.json +1 -0
  39. eval_results/step_18000.json +1 -0
  40. eval_results/step_19000.json +1 -0
  41. eval_results/step_2000.json +1 -0
  42. eval_results/step_20000.json +1 -0
  43. eval_results/step_21000.json +1 -0
  44. eval_results/step_22000.json +1 -0
  45. eval_results/step_23000.json +1 -0
  46. eval_results/step_24000.json +1 -0
  47. eval_results/step_25000.json +1 -0
  48. eval_results/step_26000.json +1 -0
  49. eval_results/step_27000.json +1 -0
  50. eval_results/step_28000.json +1 -0
README.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ datasets:
+ - pico-lm/pretokenized-dolma
+ language:
+ - en
+ license: apache-2.0
+ metrics:
+ - pico-lm/perplexity
+ pipeline_tag: text-generation
+ ---
+
+ # Pico Decoder Tiny
+
+ **pico-decoder-tiny** is the smallest model (11M parameters) in the `pico-decoder` suite: a lightweight, LLaMA-style decoder-only transformer trained from scratch using [`pico-train`](https://github.com/pico-lm/pico-train). It is designed for transparent and reproducible research into the learning dynamics of language models, and it is fully compatible with the `pico-analyze` toolkit for detailed interpretability analysis.
+
+ > NOTE: The `pico-decoder-tiny-1` branch contains the full commit history for the training run.
+
+ ## 🔧 Model Details
+
+ | Field | Value |
+ |-----------------------|----------------------------------------|
+ | **Architecture**      | Decoder-only transformer (LLaMA-style) |
+ | **Parameters**        | 11M |
+ | **Layers**            | 12 |
+ | **Hidden Size**       | 96 |
+ | **Feed Forward Size** | 384 |
+ | **Attention Heads**   | 12 |
+ | **Key/Value Heads**   | 4 |
+
+ ## 📚 Training
+
+ - **Dataset**: [`pretokenized-dolma`](https://huggingface.co/datasets/pico-lm/pretokenized-dolma), English-only
+ - **Training steps**: 200,000
+ - **Batch size**: 1024
+ - **Sequence length**: 2048
+ - **Optimizer**: AdamW
+ - **Learning rate schedule**: Linear decay with warmup
+ - **Compute**: 16 A100-SXM4-80GB GPUs
+
+ ## 📈 Evaluation and Analysis
+
+ This model supports fine-grained analysis using [`pico-analyze`](https://github.com/pico-lm/pico-analyze). This tool enables researchers to understand how learning unfolds over training, even at very small scales.
+
+ We also evaluate the model's perplexity on the [`pico-paloma-tinsy`](https://huggingface.co/datasets/pico-lm/pretokenized-paloma-tinsy) dataset.
+
+ ## 📄 Citation
+
+ If you use `pico-decoder-tiny` or any other `pico-decoder` model in your research, please cite:
+
+ ```bibtex
+ @software{pico2025,
+     author = {Diehl Martinez, Richard},
+     title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
+     year = {2025},
+     url = {https://github.com/pico-lm}
+ }
+ ```
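The custom `auto_map` in the accompanying `config.json` means the checkpoint loads through `transformers` only with `trust_remote_code=True`. Below is a minimal loading sketch; the repo id is an assumption for illustration (it is not stated in this commit), the revision follows the branch named in the README note, and the output access assumes the standard `CausalLMOutput` interface.

```python
# Minimal loading sketch. The repo id below is assumed for illustration;
# the revision matches the branch named in the README note.
import torch
from transformers import AutoModelForCausalLM

repo_id = "pico-lm/pico-decoder-tiny"   # assumed repo id
revision = "pico-decoder-tiny-1"        # branch with the full training history

# trust_remote_code is required because config.json maps AutoModelForCausalLM
# to the custom pico_decoder.PicoDecoderHF class via auto_map.
model = AutoModelForCausalLM.from_pretrained(
    repo_id, revision=revision, trust_remote_code=True
)

# Dummy forward pass: token ids must lie in [0, vocab_size) = [0, 50304).
input_ids = torch.randint(0, model.config.vocab_size, (1, 16))
with torch.no_grad():
    logits = model(input_ids).logits
print(logits.shape)  # expected: torch.Size([1, 16, 50304])
```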
config.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "activation_hidden_dim": 384,
+   "architectures": [
+     "PicoDecoderHF"
+   ],
+   "attention_n_heads": 12,
+   "attention_n_kv_heads": 4,
+   "auto_map": {
+     "AutoConfig": "pico_decoder.PicoDecoderHFConfig",
+     "AutoModelForCausalLM": "pico_decoder.PicoDecoderHF"
+   },
+   "batch_size": 1024,
+   "d_model": 96,
+   "max_seq_len": 2048,
+   "model_type": "pico_decoder",
+   "n_layers": 12,
+   "norm_eps": 1e-06,
+   "position_emb_theta": 10000.0,
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "vocab_size": 50304
+ }
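As a sanity check, the config values above are consistent with the stated 11M parameter count. The back-of-the-envelope sketch below uses only numbers from `config.json` and assumes a standard LLaMA-style layout (untied input/output embeddings, grouped-query attention, SwiGLU feed-forward, RMSNorm); the exact pico-train implementation may differ in small details.

```python
# Rough parameter count from config.json, assuming a standard LLaMA-style layout
# (untied embeddings, grouped-query attention, SwiGLU feed-forward, RMSNorm).
d_model, n_layers, ffn_dim, vocab = 96, 12, 384, 50304
n_heads, n_kv_heads = 12, 4
head_dim = d_model // n_heads                      # 8

embed = vocab * d_model                            # input embeddings
lm_head = vocab * d_model                          # output projection (assumed untied)
attn = 2 * d_model * d_model \
     + 2 * d_model * (n_kv_heads * head_dim)       # Q/O plus grouped K/V projections
mlp = 3 * d_model * ffn_dim                        # gate, up, and down projections
norms = 2 * d_model                                # two RMSNorm weights per layer

total = embed + lm_head + n_layers * (attn + mlp + norms) + d_model  # + final norm
print(f"~{total / 1e6:.1f}M parameters")           # ~11.3M, in line with the stated 11M
```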
eval_results/step_0.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 59435.05139917247}
eval_results/step_1000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 2176.422658291594}
eval_results/step_10000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 313.8343901737226}
eval_results/step_100000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 142.78638106821307}
eval_results/step_101000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 141.94935619208042}
eval_results/step_102000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 141.70286827486152}
eval_results/step_103000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 142.04302229000717}
eval_results/step_104000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 141.28717064840868}
eval_results/step_105000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 140.51475878293505}
eval_results/step_106000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 140.16921514750356}
eval_results/step_107000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 140.26420981211115}
eval_results/step_108000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 140.04452455683452}
eval_results/step_109000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 139.86317522019044}
eval_results/step_11000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 268.332413412885}
eval_results/step_110000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 139.03501056627945}
eval_results/step_111000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 138.82995247192915}
eval_results/step_112000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 139.3511911510175}
eval_results/step_113000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 138.58265911295024}
eval_results/step_114000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 137.69777231083515}
eval_results/step_115000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 138.02265500158384}
eval_results/step_116000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 137.61472352954985}
eval_results/step_117000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 137.60625675962362}
eval_results/step_118000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 138.34565367748513}
eval_results/step_119000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 136.82007657393345}
eval_results/step_12000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 254.46162965488767}
eval_results/step_120000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 136.73713672285712}
eval_results/step_121000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 136.92190282984478}
eval_results/step_122000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 136.2517988980855}
eval_results/step_123000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 137.01620536614794}
eval_results/step_124000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 136.65228910047418}
eval_results/step_125000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 136.16973869742418}
eval_results/step_13000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 249.49274518747362}
eval_results/step_14000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 260.20426084006704}
eval_results/step_15000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 239.41960436525244}
eval_results/step_16000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 229.52608692687562}
eval_results/step_17000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 225.62338353731906}
eval_results/step_18000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 212.63083450470236}
eval_results/step_19000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 208.58393890899234}
eval_results/step_2000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 884.8345587062504}
eval_results/step_20000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 205.5417249480191}
eval_results/step_21000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 201.91932611332538}
eval_results/step_22000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 199.08593051392026}
eval_results/step_23000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 198.37421456945066}
eval_results/step_24000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 193.6218826051373}
eval_results/step_25000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 192.3463352126942}
eval_results/step_26000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 190.8057739201323}
eval_results/step_27000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 192.40458190831572}
eval_results/step_28000.json ADDED
@@ -0,0 +1 @@
+ {"paloma": 188.59788890599373}
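Each evaluation file holds a single Paloma perplexity for one checkpoint, so the learning curve can be reconstructed by globbing the step files. A small aggregation sketch, assuming the files are stored locally under `eval_results/` as laid out in this commit:

```python
# Collect the per-step Paloma perplexities from eval_results/step_<N>.json
# and print them in step order to trace the learning curve.
import json
from pathlib import Path

results = {}
for path in Path("eval_results").glob("step_*.json"):
    step = int(path.stem.split("_")[1])
    results[step] = json.loads(path.read_text())["paloma"]

for step in sorted(results):
    print(f"step {step:>6}: paloma perplexity {results[step]:,.2f}")
```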