Tags: Question Answering, Transformers, Safetensors, English, doge, text-generation, trl, sft, dpo, custom_code
JingzeShi committed (verified)
Commit 337a10f
1 Parent(s): 84bcd69

Update README.md

Files changed (1)
  1. README.md +26 -16
README.md CHANGED
@@ -27,9 +27,9 @@ tags:
  <a href="https://discord.gg/P2yYH95N" target="_blank" style="margin: 2px;">
  <img alt="Discord" src="https://img.shields.io/badge/Discord-Small%20Doges-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
- <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
+ <!-- <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
  <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
- </a>
+ </a> -->
  <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
  <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
  </a>
@@ -85,19 +85,31 @@ outputs = model.generate(
  
  We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) and then DPO on [UltraFeedback Binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
  
- > TODO: The larger model is under training and will be uploaded soon.
-
  **SFT**:
  | Model | Training Data | Epochs | Context Length | LR | Batch Size | Precision |
  |---|---|---|---|---|---|---|
- | [Doge-20M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-20M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
- | [Doge-60M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-60M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |
+ | [Doge-20M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-20M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
+ | [Doge-60M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-60M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |
+ | [Doge-160M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-160M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 4e-4 | 0.25M | bfloat16 |
+ | [Doge-320M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-320M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 2e-4 | 0.25M | bfloat16 |
  
  **DPO**:
  | Model | Training Data | Epochs | Context Length | LR | Batch Size | Precision |
  |---|---|---|---|---|---|---|
- | [Doge-20M-Instruct](https://huggingface.co/SmallDoge/Doge-20M-Instruct) | [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 8e-5 | 0.125M | bfloat16 |
- | [Doge-60M-Instruct](https://huggingface.co/SmallDoge/Doge-60M-Instruct) | [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 6e-5 | 0.125M | bfloat16 |
+ | [Doge-20M-Instruct](https://huggingface.co/SmallDoge/Doge-20M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 8e-5 | 0.125M | bfloat16 |
+ | [Doge-60M-Instruct](https://huggingface.co/SmallDoge/Doge-60M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 6e-5 | 0.125M | bfloat16 |
+ | [Doge-160M-Instruct](https://huggingface.co/SmallDoge/Doge-160M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 4e-5 | 0.125M | bfloat16 |
+ | [Doge-320M-Instruct](https://huggingface.co/SmallDoge/Doge-320M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 2e-5 | 0.125M | bfloat16 |
+ 
+ **Evaluation**:
+ 
+ | Model | IFEval (Prompt Strict Acc) | MMLU | BBH | ARC | PIQA | HellaSwag | tokens/s on i7-11 CPU |
+ |---|---|---|---|---|---|---|---|
+ | [Doge-20M-Instruct](https://huggingface.co/SmallDoge/Doge-20M-Instruct) | 7.3 | 26.3 | 18.3 | 29.2 | 57.8 | 27.8 | 142 |
+ | [Doge-60M-Instruct](https://huggingface.co/SmallDoge/Doge-60M-Instruct) | 7.4 | 27.5 | 27.7 | 37.5 | 61.4 | 32.1 | 62 |
+ | [Doge-160M-Instruct](https://huggingface.co/SmallDoge/Doge-160M-Instruct) | 16.8 | 29.7 | 29.1 | 42.8 | 64.1 | 37.1 | 28 |
+ | [Doge-320M-Instruct](https://huggingface.co/SmallDoge/Doge-320M-Instruct) | 28.5 | 30.3 | 31.9 | 51.7 | 71.0 | 50.6 | 16 |
  
  **Procedure**:
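As context for the SFT and DPO tables above, here is a minimal sketch of an SFT-then-DPO recipe with `trl`, using the 20M-model hyperparameters. It is an illustration, not the maintainers' training script (their recipes live in the small-doge GitHub repo): the base checkpoint name, the `smoltalk` config, the dataset splits, and the batch-size settings are assumptions (the 0.25M / 0.125M batch sizes above read as tokens per optimizer step), and exact `trl` argument names differ between releases.

```python
# Hedged sketch of the SFT -> DPO pipeline for the 20M model; adjust names/args to your trl version.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

base = "SmallDoge/Doge-20M"  # assumed base checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

# Stage 1: SFT on SmolTalk (2 epochs, 2048 context, lr 8e-4, bfloat16).
sft_data = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")  # config name assumed
sft_args = SFTConfig(
    output_dir="Doge-20M-Instruct-SFT",
    num_train_epochs=2,
    max_seq_length=2048,             # renamed to `max_length` in newer trl releases
    learning_rate=8e-4,
    bf16=True,
    per_device_train_batch_size=4,   # placeholder; together with gradient accumulation,
    gradient_accumulation_steps=32,  # 4 * 32 * 2048 tokens ~= 0.25M tokens per step
)
SFTTrainer(model=model, args=sft_args, train_dataset=sft_data,
           processing_class=tokenizer).train()

# Stage 2: DPO on UltraFeedback Binarized (2 epochs, 1024 context, lr 8e-5, bfloat16).
dpo_data = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
dpo_args = DPOConfig(
    output_dir="Doge-20M-Instruct",
    num_train_epochs=2,
    max_length=1024,
    learning_rate=8e-5,
    bf16=True,
)
DPOTrainer(model=model, args=dpo_args, train_dataset=dpo_data,
           processing_class=tokenizer).train()
```

The 60M/160M/320M variants would follow the same two stages with the smaller learning rates listed in the tables.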
@@ -118,13 +130,11 @@ We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/dat
  ## Citation
  
  ```bibtex
- @misc{shi2024wonderfulmatrices,
-       title={Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture},
-       author={Jingze Shi and Bingheng Wu},
-       year={2024},
-       eprint={2412.11834},
-       archivePrefix={arXiv},
-       primaryClass={cs.LG},
-       url={https://arxiv.org/abs/2412.11834},
+ @misc{smalldoges,
+       title={SmallDoges: A Family of Dynamic UltraFast Small Language Models},
+       author={Shi, Jingze and Wu, Yifan and Wu, Bingheng and Luo, Yuyu},
+       year={2025},
+       month={March},
+       url={https://github.com/SmallDoges/small-doge}
  }
  ```
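The last column of the Evaluation table added above is decode throughput on an 11th-gen Intel i7 CPU. A rough, self-contained way to measure a comparable number is to time `model.generate`; the snippet below is a sketch under assumed settings (checkpoint choice, prompt, 128-token greedy generation), not the benchmark script behind the table.

```python
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-20M-Instruct"  # any of the instruct checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Build a chat-formatted prompt and time a fixed-budget greedy generation on CPU.
conversation = [{"role": "user", "content": "Hi, how are you doing today?"}]
input_ids = tokenizer.apply_chat_template(
    conversation, add_generation_prompt=True, return_tensors="pt"
)

start = time.perf_counter()
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
elapsed = time.perf_counter() - start

new_tokens = output_ids.shape[-1] - input_ids.shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
```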
 