Commit b682e49 · Parent: 8b00bb1

Update README.md

README.md CHANGED
@@ -25,8 +25,6 @@ This official repository unveils the TransNormerLLM3 model along with its open-s
  - **TransNormerLLM3-15B** features **14.83 billion** parameters. It is structured with **42 layers**, includes **40 attention heads**, and has a total **embedding size of 5120**.
  - **TransNormerLLM3-15B** relies purely on **[Lightning Attention-2](http://arxiv.org/abs/2401.04658)**, which can maintain a **stable TGS** (tokens per GPU per second) while training on sequences of **unlimited length**, up to hard limits such as GPU memory.
  - The **Tiktoken** tokenizer is used, with a total **vocabulary size** of about **100,000**.
- - It incorporates **Simple GLU** for its channel mixer, **GLA** in the token mixer, and **SRMSNorm** for normalization.
- - In terms of position encoding, the first layer employs **LRPE with exponential decay**, whereas the subsequent layers use **exponential decay encoding** alone.

  ### Pre-training Logbook
  * Realtime Track: https://api.wandb.ai/links/opennlplab/kip314lq
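
The bullets in this diff compress several architecture facts; the sketches below unpack them one at a time. First, the headline numbers: a hypothetical config dict (the field names are my own, not the repository's schema) together with a rough parameter-count check against the quoted 14.83B figure.

```python
# Hypothetical configuration sketch assembled from the numbers in the diff
# above; the field names are assumptions, not the repository's config schema.
config = {
    "num_layers": 42,        # "structured with 42 layers"
    "num_heads": 40,         # "includes 40 attention heads"
    "embed_dim": 5120,       # "total embedding size of 5120"
    "head_dim": 5120 // 40,  # = 128, implied by embed_dim / num_heads
    "vocab_size": 100_000,   # "vocabulary size of about 100,000"
}

# Rough sanity check using the standard dense-transformer estimate of
# ~12 * d^2 parameters per layer plus the embedding matrix:
d, n, v = config["embed_dim"], config["num_layers"], config["vocab_size"]
approx = 12 * d * d * n + v * d
print(f"~{approx / 1e9:.1f}B parameters")  # ~13.7B, same ballpark as 14.83B
```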
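The "stable TGS" claim for Lightning Attention-2 rests on linear attention having a recurrent form with a fixed-size state, so the per-token cost does not grow with context length. The sketch below shows that recurrence only; it is not Lightning Attention-2 itself, which is a tiled, IO-aware kernel described in the linked paper.

```python
import torch

def causal_linear_attention(q, k, v):
    """Recurrent form of (unnormalized) causal linear attention.

    The running state `kv` has fixed shape (d_k, d_v) no matter how long the
    sequence is, so each new token costs the same to process -- the property
    behind a throughput that stays stable as sequence length grows.
    q, k, v: (seq_len, d) tensors for a single head.
    """
    kv = torch.zeros(k.shape[-1], v.shape[-1])
    out = []
    for t in range(q.shape[0]):
        kv = kv + torch.outer(k[t], v[t])  # O(d_k * d_v) per token
        out.append(q[t] @ kv)              # cost independent of t
    return torch.stack(out)

q, k, v = (torch.randn(16, 8) for _ in range(3))
print(causal_linear_attention(q, k, v).shape)  # torch.Size([16, 8])
```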
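On the tokenizer bullet: tiktoken is OpenAI's open-source BPE tokenizer library. The diff does not say which encoding the model uses; `cl100k_base` below is purely an assumption, picked because its vocabulary size matches the quoted "about 100,000".

```python
import tiktoken  # pip install tiktoken

# "cl100k_base" is an assumption: the README only says a Tiktoken tokenizer
# with a vocabulary of about 100,000 is used.
enc = tiktoken.get_encoding("cl100k_base")
print(enc.n_vocab)                      # 100277 -- "about 100,000"
ids = enc.encode("TransNormerLLM3-15B")
print(ids)                              # BPE token ids
print(enc.decode(ids))                  # round-trips to the original string
```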
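The first bullet removed by this commit names three components. Below is a minimal PyTorch sketch of two of them under their common definitions: SRMSNorm (SimpleRMSNorm, i.e. RMSNorm with no learnable scale) and Simple GLU (a gated linear unit with no activation on the gate branch). These are standard readings of the names, not the repository's code; GLA (gated linear attention) is omitted for brevity.

```python
import torch
import torch.nn as nn

class SRMSNorm(nn.Module):
    """SimpleRMSNorm: RMSNorm with no learnable scale parameter."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class SimpleGLU(nn.Module):
    """GLU channel mixer with no activation: out = W3((W1 x) * (W2 x))."""
    def __init__(self, d: int, d_hidden: int):
        super().__init__()
        self.w1 = nn.Linear(d, d_hidden, bias=False)
        self.w2 = nn.Linear(d, d_hidden, bias=False)
        self.w3 = nn.Linear(d_hidden, d, bias=False)

    def forward(self, x):
        return self.w3(self.w1(x) * self.w2(x))

x = torch.randn(2, 16, 64)             # (batch, seq, d); toy sizes for the demo
y = SimpleGLU(64, 128)(SRMSNorm()(x))  # hidden size chosen arbitrarily
print(y.shape)                         # torch.Size([2, 16, 64])
```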
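The second removed bullet pairs LRPE (linearized relative positional encoding, a rotation-style scheme) with exponential decay. In recurrent linear attention, exponential decay is typically realized by shrinking the running state each step, so the effective score between positions t and s falls off as lam ** (t - s). The sketch below extends the earlier recurrence with that decay factor; it is one plausible reading of the bullet, not the repository's implementation, and the LRPE rotation itself is left out.

```python
import torch

def decayed_linear_attention(q, k, v, lam: float = 0.99):
    """Causal linear attention with exponential decay as position encoding.

    Decaying the state `kv` by `lam` each step makes the contribution of
    position s to position t scale with lam ** (t - s): nearby tokens count
    more than distant ones, with no explicit position embeddings needed.
    """
    kv = torch.zeros(k.shape[-1], v.shape[-1])
    out = []
    for t in range(q.shape[0]):
        kv = lam * kv + torch.outer(k[t], v[t])  # state decays before update
        out.append(q[t] @ kv)
    return torch.stack(out)

q, k, v = (torch.randn(16, 8) for _ in range(3))
print(decayed_linear_attention(q, k, v).shape)  # torch.Size([16, 8])
```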