Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ pipeline_tag: question-answering
 
 # **Doge 20M Instruct**
 
-
+
 
 Doge is an ongoing research project where we aim to train a series of small language models to further explore whether the Transformer framework allows for more complex feedforward network structures, enabling the model to have fewer cache states and larger knowledge capacity.
 
@@ -70,8 +70,8 @@
 **SFT**:
 | Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
 |---|---|---|---|---|---|---|
-| [Doge-20M-Instruct-SFT](https://huggingface.co/JingzeShi/Doge-20M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 |
-| [Doge-60M-Instruct](https://huggingface.co/JingzeShi/Doge-60M-Instruct) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 |
+| [Doge-20M-Instruct-SFT](https://huggingface.co/JingzeShi/Doge-20M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
+| [Doge-60M-Instruct](https://huggingface.co/JingzeShi/Doge-60M-Instruct) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |
 
 **DPO**:
 | Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
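The completed SFT rows now spell out the full recipe: 2 epochs at 2048-token context, a learning rate of 8e-4 (20M) or 6e-4 (60M), roughly 0.25M tokens per optimizer step, in bfloat16. As a rough illustration of how those numbers could map onto TRL's `SFTTrainer` — a sketch, not the authors' script: argument names shift across TRL releases (`max_seq_length` vs. `max_length`), and the batch-size split across devices and accumulation steps is an assumption:

```python
# Hedged sketch: reproduce the Doge-20M SFT row with TRL. Untested; the exact
# TRL/transformers versions and hardware layout the authors used are not stated.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

# Doge is a custom architecture, so remote code must be trusted.
model = AutoModelForCausalLM.from_pretrained("JingzeShi/Doge-20M", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("JingzeShi/Doge-20M", trust_remote_code=True)

# SmolTalk ships chat-format "messages", which SFTTrainer templates automatically.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

config = SFTConfig(
    output_dir="./doge-20m-instruct-sft",
    num_train_epochs=2,        # "Epochs" column
    learning_rate=8e-4,        # "LR" column (Doge-20M row)
    max_seq_length=2048,       # "Content Length" column
    bf16=True,                 # "Precision" column
    # "Batch Size" is 0.25M tokens per step: 2048 tokens x 128 sequences
    # ~= 262k. The 16 x 8 split below is an assumption about hardware.
    per_device_train_batch_size=16,
    gradient_accumulation_steps=8,
)

trainer = SFTTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```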
@@ -83,10 +83,10 @@ We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/dat
 **Procedure**:
 
 **SFT**:
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/eohr6fuj)
 
 **DPO**:
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/h6c2p2fe)
 
 
 **Environment**:
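The DPO stage tracked by the second run would follow the same pattern with TRL's `DPOTrainer`. Neither the preference dataset nor the DPO table's values are visible in this diff, so everything below the imports is a placeholder sketch under those assumptions:

```python
# Hedged sketch of the DPO stage. The preference dataset here is a stand-in;
# swap in whatever the card's DPO table actually lists.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("JingzeShi/Doge-20M-Instruct-SFT", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("JingzeShi/Doge-20M-Instruct-SFT", trust_remote_code=True)

# Placeholder prompt/chosen/rejected preference pairs.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="./doge-20m-instruct",
    bf16=True,
    max_length=2048,   # assumed to match the SFT context length
    beta=0.1,          # TRL default; the card's value is not shown here
)

# With ref_model omitted, DPOTrainer clones the policy as the frozen reference.
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```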
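For anyone landing on this card from the diff, the finished checkpoint can be exercised with the standard transformers chat-template flow. A minimal sketch; `trust_remote_code=True` is required because Doge is a custom architecture:

```python
# Quick generation check against the published Doge-20M-Instruct checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JingzeShi/Doge-20M-Instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("JingzeShi/Doge-20M-Instruct", trust_remote_code=True)

conversation = [{"role": "user", "content": "Hi, how are you doing today?"}]
inputs = tokenizer.apply_chat_template(conversation, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=100)
# Strip the prompt tokens and print only the model's reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```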