Update README.md
README.md
CHANGED
@@ -2,7 +2,6 @@
 license: bigscience-bloom-rail-1.0
 datasets:
 - c4
-- CarperAI/pile-v2-small-filtered
 language:
 - en
 library_name: transformers
@@ -20,6 +19,4 @@ rotary_dim - 64
 tokenizer - gpt-j
 ```
 
-Trained on 4,194,304 samples from the [c4](https://hf.co/datasets/c4) dataset, at a length of 128 tokens each, that comes out to 536,870,912 (0.53B) tokens seen during training. A batch size of 16 with 128 gradient accumulation steps was used, making the effective batch size 2048. A cosine learning rate schedule was used starting at 1e-3.
-
-Another 20,480 samples from [CarperAI/pile-v2-small-filtered](https://hf.co/CarperAI/pile-v2-small-filtered) were used for finetuning, again at 128 tokens each (for a total of 2.6M more tokens) at a batch size of 16 with 256 gradient accumulation steps, with a learning rate of 1e-4 with a linearly decreasing schedule. (That's 5 entire steps for y'all counting at home, it took ~4:30 lol)
+Trained on 4,194,304 samples from the [c4](https://hf.co/datasets/c4) dataset, at a length of 128 tokens each, that comes out to 536,870,912 (0.53B) tokens seen during training. A batch size of 16 with 128 gradient accumulation steps was used, making the effective batch size 2048. A cosine learning rate schedule was used starting at 1e-3.
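The figures quoted in the training paragraphs above are internally consistent. A minimal sketch of the arithmetic, in plain Python for illustration only (not taken from the training scripts; the pretraining optimizer-step count is derived here, not stated in the card):

```python
# Sanity-check the token and step counts quoted in the README diff above.

# c4 pretraining
samples = 4_194_304
seq_len = 128
tokens_seen = samples * seq_len              # 536,870,912 tokens (~0.53B)

micro_batch = 16
grad_accum = 128
effective_batch = micro_batch * grad_accum   # 2048 samples per optimizer step
pretrain_steps = samples // effective_batch  # 2048 optimizer steps (derived, not stated)

# pile-v2-small-filtered finetuning (described in the removed paragraph)
ft_samples = 20_480
ft_tokens = ft_samples * seq_len             # 2,621,440 tokens (~2.6M)
ft_effective_batch = 16 * 256                # 4096 samples per optimizer step
ft_steps = ft_samples // ft_effective_batch  # 5 optimizer steps, as noted

print(tokens_seen, effective_batch, pretrain_steps, ft_tokens, ft_steps)
```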