Part of the **Pico Decoder Model Suite** collection: pico-decoder models spanning 10M to 500M parameters.
pico-decoder-medium is a 181M parameter model in the pico-decoder suite, balancing scale and analyzability. Built with pico-train and instrumented with pico-analyze, it enables detailed studies of layer-wise learning behavior during language model pretraining.
NOTE: The `pico-decoder-medium-1` branch contains the full commit history for the training run.
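The checkpoint can be used like any other causal language model. The snippet below is a minimal sketch, assuming the model is hosted at `pico-lm/pico-decoder-medium` and loads through the standard Hugging Face `transformers` auto classes; adjust the repo id, revision, and loading flags to match the actual release.

```python
# Minimal usage sketch. The repo id and the need for trust_remote_code are
# assumptions; adapt them to the actual Hub release.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pico-lm/pico-decoder-medium"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Language model learning dynamics", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```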
| Field | Value |
|---|---|
| Architecture | Decoder-only transformer (LLaMA-style) |
| Parameters | 181M |
| Layers | 12 |
| Hidden Size | 768 |
| Feed Forward Size | 3072 |
| Attention Heads | 12 |
| Key/Value Heads | 4 |
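The table maps directly onto a standard LLaMA-style configuration. The sketch below is illustrative only: it expresses the same values using Hugging Face `LlamaConfig` field names rather than the actual pico-train config schema, and adds a back-of-the-envelope non-embedding parameter count.

```python
# Illustrative only: the architecture table expressed with transformers'
# LlamaConfig field names. The real pico-train config keys may differ.
from transformers import LlamaConfig

config = LlamaConfig(
    hidden_size=768,          # Hidden Size
    intermediate_size=3072,   # Feed Forward Size
    num_hidden_layers=12,     # Layers
    num_attention_heads=12,   # Attention Heads
    num_key_value_heads=4,    # Key/Value Heads (grouped-query attention)
)

# Rough per-layer parameter count for a LLaMA-style block without biases.
# The remaining parameters sit in the embedding/output matrices, whose size
# depends on the tokenizer vocabulary.
d, ffn = 768, 3072
kv_dim = 4 * (768 // 12)              # 4 key/value heads of width 64
attn = 2 * d * d + 2 * d * kv_dim     # Q, O projections + K, V projections
mlp = 3 * d * ffn                     # gate, up, and down projections
print(f"~{12 * (attn + mlp) / 1e6:.0f}M non-embedding parameters")  # ~104M
```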
The model is trained on the pretokenized-dolma dataset. It supports fine-grained analysis using pico-analyze, which enables researchers to understand how learning unfolds over training, even at very small scales.
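pico-analyze has its own configuration and metrics, which are not reproduced here. As a rough illustration of the kind of layer-wise measurement involved, the sketch below compares per-parameter drift between two checkpoints with plain PyTorch; the revision name is hypothetical, and this is not the pico-analyze API.

```python
# Illustrative sketch (not the pico-analyze API): measure how far each layer's
# weights have drifted between two checkpoints of the training run.
import torch
from transformers import AutoModelForCausalLM

repo_id = "pico-lm/pico-decoder-medium"  # assumed repo id
# "step_1000" is a hypothetical revision name; the training-run branch noted
# above holds the actual intermediate commits.
early = AutoModelForCausalLM.from_pretrained(repo_id, revision="step_1000")
final = AutoModelForCausalLM.from_pretrained(repo_id)

early_params = dict(early.named_parameters())
with torch.no_grad():
    for name, p_final in final.named_parameters():
        p_early = early_params[name]
        drift = (p_final - p_early).norm() / (p_early.norm() + 1e-8)
        print(f"{name}: relative drift {drift:.3f}")
```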
We also evaluate the model's perplexity on the pico-paloma-tinsy dataset.
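One common way to compute such a perplexity score with standard Hugging Face tooling is sketched below; the dataset repo id, split, and text field are assumptions and should be adapted to the actual pico-paloma-tinsy release.

```python
# Perplexity evaluation sketch. The dataset repo id, split, and text field are
# assumptions; adapt them to the actual pico-paloma-tinsy release.
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pico-lm/pico-decoder-medium"  # assumed model repo id
data_id = "pico-lm/pico-paloma-tinsy"     # assumed dataset repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

dataset = load_dataset(data_id, split="val")  # assumed split name
nll, n_tokens = 0.0, 0
with torch.no_grad():
    for example in dataset.select(range(100)):  # small sample for illustration
        enc = tokenizer(example["text"], return_tensors="pt",
                        truncation=True, max_length=1024)
        out = model(**enc, labels=enc["input_ids"])
        n = enc["input_ids"].size(1) - 1        # loss is averaged over shifted targets
        nll += out.loss.item() * n
        n_tokens += n

print(f"perplexity: {math.exp(nll / n_tokens):.2f}")
```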
If you use this model or the Pico framework, please cite:

```bibtex
@software{pico2025,
  author = {Diehl Martinez, Richard},
  title  = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
  year   = {2025},
  url    = {https://github.com/pico-lm}
}
```