Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ As per the [Roblox website](https://create.roblox.com/docs/assistant/guide), the
 
 # print("Stages of pre-training")
 
-This model was continually pre-trained in 3 stages.
+This model was continually pre-trained in 3 stages. (Note: AllenAI states that OLMo 2 1B, the model this is based on, was pre-trained on roughly 4 trillion tokens.)
 
 - Stage 1: Pre-training on the Pinkstack/roblox-luau-corpus-text & Roblox/luau_corpus on 4096 context (the maximum OLMo 2 can usually reach)
 
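For readers who want a concrete picture of what Stage 1 describes, below is a minimal sketch of continual pre-training on one of the named Luau corpora at a 4096-token context, using the Hugging Face transformers/datasets stack. The base-model ID (allenai/OLMo-2-0425-1B), the dataset's "text" field, and all hyperparameters are assumptions for illustration, not details taken from this commit.

```python
# Minimal sketch of Stage 1: continual pre-training on a Luau corpus at a
# 4096-token context. Model ID, dataset schema, and hyperparameters are
# assumptions for illustration, not taken from the README.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "allenai/OLMo-2-0425-1B"  # assumed OLMo 2 1B base checkpoint
MAX_LEN = 4096                       # context length named in the README

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# One of the two corpora named in the README; Pinkstack/roblox-luau-corpus-text
# would be loaded and mixed in the same way. The "text" field is an assumption;
# adjust to the dataset's actual schema.
corpus = load_dataset("Roblox/luau_corpus", split="train")

def tokenize(batch):
    # Truncate each document to the 4096-token OLMo 2 context window.
    return tokenizer(batch["text"], truncation=True, max_length=MAX_LEN)

tokenized = corpus.map(tokenize, batched=True, remove_columns=corpus.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="olmo2-1b-luau-stage1",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # Causal-LM collator (mlm=False) so labels are shifted input tokens.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```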