Update README.md
README.md CHANGED
@@ -3,7 +3,9 @@ license: apache-2.0
 ---
 
 
-This model is continually pre-trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with the structure proposed in [MemoryLLM](https://arxiv.org/abs/2402.04624).
+This model is continually pre-trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with the structure proposed in [MemoryLLM](https://arxiv.org/abs/2402.04624).
+We equip Llama-3 with 12800 memory tokens in each layer, leading to a memory pool of 1.67B parameters.
+
 
 To use the model, please use the following code:
 ```
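
The 1.67B memory-pool figure in the added line can be sanity-checked with quick arithmetic. A minimal sketch, assuming each memory token is a vector of Meta-Llama-3-8B's hidden size (4096) and that every one of its 32 transformer layers carries a 12800-token pool; both are assumptions about the MemoryLLM setup, not stated in this diff:

```python
# Back-of-the-envelope check of the memory-pool parameter count.
# Assumptions (not stated in the diff): Meta-Llama-3-8B has 32 layers
# and hidden size 4096, and each memory token is one hidden-size vector.
memory_tokens_per_layer = 12800
num_layers = 32      # Meta-Llama-3-8B
hidden_size = 4096   # Meta-Llama-3-8B

pool_params = memory_tokens_per_layer * num_layers * hidden_size
print(f"{pool_params / 1e9:.2f}B")  # 1.68B -- consistent with the stated 1.67B up to rounding
```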
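
The usage snippet itself is truncated in this hunk, so the code below is a hypothetical loading sketch rather than the README's actual example: the repo id is a placeholder, and `trust_remote_code=True` is an assumption based on the model using a custom MemoryLLM architecture rather than a stock transformers class. Defer to the real snippet in the README.

```python
# Hypothetical sketch only -- the README's actual snippet is cut off above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "org/memoryllm-8b"  # placeholder; use the repo id from the model page

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights, matching Llama-3 releases
    trust_remote_code=True,      # assumption: custom modeling code for the memory pool
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```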