Commit 8968d05 (verified), committed by JingzeShi, 1 parent: 616e282

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -14,7 +14,7 @@ pipeline_tag: question-answering
 
 # **Doge 20M Instruct**
 
-![architecture](https://cdn-uploads.huggingface.co/production/uploads/673ab3647afcea17eb4378fd/BBPbIRiw1KxdgYp_5CABf.png)
+![architecture](Doge.png)
 
 Doge is an ongoing research project where we aim to train a series of small language models to further explore whether the Transformer framework allows for more complex feedforward network structures, enabling the model to have fewer cache states and larger knowledge capacity.
 
@@ -70,8 +70,8 @@ We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/dat
 **SFT**:
 | Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
 |---|---|---|---|---|---|---|
-| [Doge-20M-Instruct-SFT](https://huggingface.co/JingzeShi/Doge-20M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 8192 | 8e-4 | 1M | bfloat16 |
-| [Doge-60M-Instruct](https://huggingface.co/JingzeShi/Doge-60M-Instruct) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 8192 | 6e-4 | 1M | bfloat16 |
+| [Doge-20M-Instruct-SFT](https://huggingface.co/JingzeShi/Doge-20M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
+| [Doge-60M-Instruct](https://huggingface.co/JingzeShi/Doge-60M-Instruct) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |
 
 **DPO**:
 | Model | Training Data | Epochs | Content Length | LR | Batch Size | Precision |
@@ -83,10 +83,10 @@ We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/dat
 **Procedure**:
 
 **SFT**:
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/xl21ytg8)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/eohr6fuj)
 
 **DPO**:
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/874wshj2)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/h6c2p2fe)
 
 
 **Environment**:
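
For context on the card being updated, below is a minimal inference sketch using the transformers library. It is not part of this commit: the repo id `JingzeShi/Doge-20M-Instruct` is inferred from the card title, the chat template and example prompt are assumptions, and `trust_remote_code=True` is used only because the model carries a custom_code tag.

```python
# Hypothetical usage sketch for the instruct checkpoint; repo id and chat template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "JingzeShi/Doge-20M-Instruct"  # assumed repo id, inferred from the card title

# The custom_code tag implies the modeling code ships inside the repo itself.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Build a single-turn chat prompt, assuming the tokenizer provides a chat template.
messages = [{"role": "user", "content": "Hi, how are you doing today?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a reply and decode only the newly generated tokens.
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```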