JingzeShi committed · Commit 6ed25d4 · verified · 1 parent: 700a420

Update README.md

Files changed (1): README.md (+5 −10)
```diff
@@ -10,11 +10,6 @@ pipeline_tag: text-generation
 
 # **Doge 160M checkpoint**
 
-**NOTE: This model is training, you can find the real-time training logs on wandb.**
-
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/3uyc9a89)
-
-
 ![wsd_scheduler](./wsd_scheduler.png)
 
 Doge uses `wsd_scheduler` as the training scheduler, which divides the learning rate into three stages: `warmup`, `stable`, and `decay`. It allows us to continue training on any new dataset from any checkpoint in the `stable stage` without spikes of the training.
@@ -24,11 +19,11 @@ Here are the initial learning rates required to continue training at each checkpoint:
 - **[Doge-20M](https://huggingface.co/SmallDoge/Doge-20M-checkpoint)**: 8e-3
 - **[Doge-60M](https://huggingface.co/SmallDoge/Doge-60M-checkpoint)**: 6e-3
 - **[Doge-160M](https://huggingface.co/SmallDoge/Doge-160M-checkpoint)**: 4e-3
-- **Doge-320M**: 2e-3
+- **[Doge-320M](https://huggingface.co/SmallDoge/Doge-320M-checkpoint)**: 2e-3
 
 | Model | Learning Rate | Schedule | Warmup Steps | Stable Steps |
 |-------|---------------|----------|--------------|--------------|
-| Doge-20M | 8e-3 | wsd_scheduler | 800 | 6400 |
-| Doge-60M | 6e-3 | wsd_scheduler | 1600 | 12800 |
-| Doge-160M | 4e-3 | wsd_scheduler | 2400 | 19200 |
-| Doge-320M | 2e-3 | wsd_scheduler | 3200 | 25600 |
+| [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M-checkpoint) | 8e-3 | wsd_scheduler | 800 | 6400 |
+| [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M-checkpoint) | 6e-3 | wsd_scheduler | 1600 | 12800 |
+| [Doge-160M](https://huggingface.co/SmallDoge/Doge-160M-checkpoint) | 4e-3 | wsd_scheduler | 2400 | 19200 |
+| [Doge-320M](https://huggingface.co/SmallDoge/Doge-320M-checkpoint) | 2e-3 | wsd_scheduler | 3200 | 25600 |
```
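The warmup/stable/decay schedule the README describes can be sketched as a plain step-to-learning-rate function. This is a minimal illustration, not the repository's actual `wsd_scheduler` implementation: it assumes linear warmup, a flat stable phase, and linear decay to zero, and the `decay_steps` parameter is hypothetical since the table above only lists warmup and stable steps (defaults mirror the Doge-160M row).

```python
def wsd_lr(step, peak_lr=4e-3, warmup_steps=2400, stable_steps=19200,
           decay_steps=2400):
    """Learning rate at a given optimizer step under a WSD schedule.

    Sketch only: linear warmup and linear decay are assumptions;
    decay_steps is a hypothetical parameter not given in the table.
    """
    if step < warmup_steps:
        # warmup: ramp linearly from 0 up to peak_lr
        return peak_lr * step / warmup_steps
    if step < warmup_steps + stable_steps:
        # stable: hold peak_lr constant, so training can resume from
        # any checkpoint in this stage without a learning-rate spike
        return peak_lr
    # decay: ramp linearly from peak_lr down to 0
    progress = (step - warmup_steps - stable_steps) / decay_steps
    return peak_lr * max(0.0, 1.0 - progress)
```

To continue training a Doge-160M checkpoint saved in the stable stage, one would restart the schedule at the constant `peak_lr` (4e-3 per the table) rather than re-running warmup.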