rdiehlmartinez committed
Commit ed6050a · 1 Parent(s): 33409b8

pico-decoder-medium-1 trained to 125k steps

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
Files changed (50):
  1. README.md +54 -0
  2. config.json +22 -0
  3. eval_results/step_0.json +1 -0
  4. eval_results/step_1000.json +1 -0
  5. eval_results/step_10000.json +1 -0
  6. eval_results/step_100000.json +1 -0
  7. eval_results/step_101000.json +1 -0
  8. eval_results/step_102000.json +1 -0
  9. eval_results/step_103000.json +1 -0
  10. eval_results/step_104000.json +1 -0
  11. eval_results/step_105000.json +1 -0
  12. eval_results/step_106000.json +1 -0
  13. eval_results/step_107000.json +1 -0
  14. eval_results/step_108000.json +1 -0
  15. eval_results/step_109000.json +1 -0
  16. eval_results/step_11000.json +1 -0
  17. eval_results/step_110000.json +1 -0
  18. eval_results/step_111000.json +1 -0
  19. eval_results/step_112000.json +1 -0
  20. eval_results/step_113000.json +1 -0
  21. eval_results/step_114000.json +1 -0
  22. eval_results/step_115000.json +1 -0
  23. eval_results/step_116000.json +1 -0
  24. eval_results/step_117000.json +1 -0
  25. eval_results/step_118000.json +1 -0
  26. eval_results/step_119000.json +1 -0
  27. eval_results/step_12000.json +1 -0
  28. eval_results/step_120000.json +1 -0
  29. eval_results/step_121000.json +1 -0
  30. eval_results/step_122000.json +1 -0
  31. eval_results/step_123000.json +1 -0
  32. eval_results/step_124000.json +1 -0
  33. eval_results/step_125000.json +1 -0
  34. eval_results/step_13000.json +1 -0
  35. eval_results/step_14000.json +1 -0
  36. eval_results/step_15000.json +1 -0
  37. eval_results/step_16000.json +1 -0
  38. eval_results/step_17000.json +1 -0
  39. eval_results/step_18000.json +1 -0
  40. eval_results/step_19000.json +1 -0
  41. eval_results/step_2000.json +1 -0
  42. eval_results/step_20000.json +1 -0
  43. eval_results/step_21000.json +1 -0
  44. eval_results/step_22000.json +1 -0
  45. eval_results/step_23000.json +1 -0
  46. eval_results/step_24000.json +1 -0
  47. eval_results/step_25000.json +1 -0
  48. eval_results/step_26000.json +1 -0
  49. eval_results/step_27000.json +1 -0
  50. eval_results/step_28000.json +1 -0
README.md ADDED
---
datasets:
- pico-lm/pretokenized-dolma
language:
- en
license: apache-2.0
metrics:
- pico-lm/perplexity
pipeline_tag: text-generation
---

# Pico Decoder Medium

**pico-decoder-medium** is a 181M parameter model in the `pico-decoder` suite, balancing scale and analyzability. Built with [`pico-train`](https://github.com/pico-lm) and instrumented with [`pico-analyze`](https://github.com/pico-lm), it enables detailed studies of layer-wise learning behavior during language model pretraining.

> NOTE: The `pico-decoder-medium-1` branch contains the full commit history for the training run.

## 🔧 Model Details

| Field                 | Value                                  |
|-----------------------|----------------------------------------|
| **Architecture**      | Decoder-only transformer (LLaMA-style) |
| **Parameters**        | 181M                                   |
| **Layers**            | 12                                     |
| **Hidden Size**       | 768                                    |
| **Feed-Forward Size** | 3072                                   |
| **Attention Heads**   | 12                                     |
| **Key/Value Heads**   | 4                                      |
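Because `config.json` (shown further down) declares a custom `auto_map`, loading this model through `transformers` requires `trust_remote_code=True`. A minimal loading sketch follows; the repo id `pico-lm/pico-decoder-medium` is an assumption based on this card, so adjust it to the actual Hub path.

```python
# Minimal loading sketch. The repo id is an assumption; the revision pins the
# branch that carries the full training history (see the NOTE above).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "pico-lm/pico-decoder-medium",     # assumed repo id
    revision="pico-decoder-medium-1",  # training-run branch named above
    trust_remote_code=True,            # required by the custom auto_map
)
print(sum(p.numel() for p in model.parameters()))  # expect roughly 181M
```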

## 📚 Training

- **Dataset**: [`pretokenized-dolma`](https://github.com/pico-lm)
- **Training steps**: 200,000
- **Batch size**: 1024
- **Sequence length**: 2048
- **Optimizer**: AdamW
- **Learning rate schedule**: Linear decay with warmup (sketched below)
- **Compute**: 16 A100-SXM4-80GB GPUs
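The optimization setup above can be reproduced in outline as follows. This is a sketch under stated assumptions, not the `pico-train` loop itself: the learning rate, the warmup length, and the `model`/`dataloader` objects are placeholders the card does not specify.

```python
# Sketch of the stated setup: AdamW with linear decay after a warmup phase.
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # lr is assumed
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2_500,      # assumed; warmup length is not stated
    num_training_steps=200_000,  # matches the stated training length
)

for batch in dataloader:  # assumed to yield 1024 sequences of 2048 tokens
    loss = model(**batch).loss  # HF models return loss when labels are given
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```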

## 📈 Evaluation and Analysis

This model supports fine-grained analysis with [pico-analyze](https://github.com/pico-lm), which lets researchers trace how learning unfolds over training, even at very small scales.

We also evaluate the model's perplexity on the [`pretokenized-paloma-tinsy`](https://huggingface.co/datasets/pico-lm/pretokenized-paloma-tinsy) dataset; the per-step results are checked in under `eval_results/` (see below).
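Perplexity here is the exponentiated mean cross-entropy over the evaluation set. A minimal sketch, assuming equal-sized batches that carry `input_ids` and `labels` in the usual Hugging Face convention (the actual evaluation code lives in the pico-lm repositories):

```python
import math
import torch

@torch.no_grad()
def perplexity(model, batches):
    # Mean of per-batch cross-entropy losses, exponentiated; assumes all
    # batches contribute the same number of target tokens.
    total_loss, n_batches = 0.0, 0
    for batch in batches:
        total_loss += model(**batch).loss.item()
        n_batches += 1
    return math.exp(total_loss / n_batches)
```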

## 📄 Citation

```bibtex
@software{pico2025,
  author = {Diehl Martinez, Richard},
  title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
  year = {2025},
  url = {https://github.com/pico-lm}
}
```
config.json ADDED
{
  "activation_hidden_dim": 3072,
  "architectures": [
    "PicoDecoderHF"
  ],
  "attention_n_heads": 12,
  "attention_n_kv_heads": 4,
  "auto_map": {
    "AutoConfig": "pico_decoder.PicoDecoderHFConfig",
    "AutoModelForCausalLM": "pico_decoder.PicoDecoderHF"
  },
  "batch_size": 1024,
  "d_model": 768,
  "max_seq_len": 2048,
  "model_type": "pico_decoder",
  "n_layers": 12,
  "norm_eps": 1e-06,
  "position_emb_theta": 10000.0,
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "vocab_size": 50304
}
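The `auto_map` entries point at custom classes shipped with the repo, so even reading the config requires `trust_remote_code=True`. A small check, reusing the assumed repo id from above:

```python
from transformers import AutoConfig

# trust_remote_code is required because of the auto_map above.
config = AutoConfig.from_pretrained(
    "pico-lm/pico-decoder-medium",  # assumed repo id
    trust_remote_code=True,
)
print(config.d_model, config.n_layers, config.attention_n_kv_heads)  # 768 12 4
```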
eval_results/step_0.json ADDED
{"paloma": 59416.7212543554}
eval_results/step_1000.json ADDED
{"paloma": 499.5738274564311}
eval_results/step_10000.json ADDED
{"paloma": 51.102903796073036}
eval_results/step_100000.json ADDED
{"paloma": 30.98631727670543}
eval_results/step_101000.json ADDED
{"paloma": 30.9158632586642}
eval_results/step_102000.json ADDED
{"paloma": 30.87562179640195}
eval_results/step_103000.json ADDED
{"paloma": 30.830084008123816}
eval_results/step_104000.json ADDED
{"paloma": 30.771775771350395}
eval_results/step_105000.json ADDED
{"paloma": 30.77101141268368}
eval_results/step_106000.json ADDED
{"paloma": 30.71518267995389}
eval_results/step_107000.json ADDED
{"paloma": 30.651495790315423}
eval_results/step_108000.json ADDED
{"paloma": 30.646850140418742}
eval_results/step_109000.json ADDED
{"paloma": 30.60034494516326}
eval_results/step_11000.json ADDED
{"paloma": 48.70718716470207}
eval_results/step_110000.json ADDED
{"paloma": 30.57924894937655}
eval_results/step_111000.json ADDED
{"paloma": 30.548662317006844}
eval_results/step_112000.json ADDED
{"paloma": 30.50031009675734}
eval_results/step_113000.json ADDED
{"paloma": 30.47728565412116}
eval_results/step_114000.json ADDED
{"paloma": 30.407706258355116}
eval_results/step_115000.json ADDED
{"paloma": 30.382002816715307}
eval_results/step_116000.json ADDED
{"paloma": 30.30816983603434}
eval_results/step_117000.json ADDED
{"paloma": 30.328557960330816}
eval_results/step_118000.json ADDED
{"paloma": 30.25960363577467}
eval_results/step_119000.json ADDED
{"paloma": 30.239006059759586}
eval_results/step_12000.json ADDED
{"paloma": 48.04790251246728}
eval_results/step_120000.json ADDED
{"paloma": 30.233270551435623}
eval_results/step_121000.json ADDED
{"paloma": 30.18521380598952}
eval_results/step_122000.json ADDED
{"paloma": 30.14745272227696}
eval_results/step_123000.json ADDED
{"paloma": 30.138502722334778}
eval_results/step_124000.json ADDED
{"paloma": 30.074931180435605}
eval_results/step_125000.json ADDED
{"paloma": 30.083627031821408}
eval_results/step_13000.json ADDED
{"paloma": 45.9789450450226}
eval_results/step_14000.json ADDED
{"paloma": 45.154312149988236}
eval_results/step_15000.json ADDED
{"paloma": 44.130179383032}
eval_results/step_16000.json ADDED
{"paloma": 43.383745260105734}
eval_results/step_17000.json ADDED
{"paloma": 42.70362300017154}
eval_results/step_18000.json ADDED
{"paloma": 42.00629499373951}
eval_results/step_19000.json ADDED
{"paloma": 41.85491885225117}
eval_results/step_2000.json ADDED
{"paloma": 177.08616091166638}
eval_results/step_20000.json ADDED
{"paloma": 40.92828142551595}
eval_results/step_21000.json ADDED
{"paloma": 40.3970818064354}
eval_results/step_22000.json ADDED
{"paloma": 40.0735236918054}
eval_results/step_23000.json ADDED
{"paloma": 39.55490014910283}
eval_results/step_24000.json ADDED
{"paloma": 39.20364000381908}
eval_results/step_25000.json ADDED
{"paloma": 38.902612380283635}
eval_results/step_26000.json ADDED
{"paloma": 38.46148998878559}
eval_results/step_27000.json ADDED
{"paloma": 38.17645851178452}
eval_results/step_28000.json ADDED
{"paloma": 37.92720969861393}