---
license: apache-2.0
---

# rwkv7-168m-pile

<!-- Provide a quick summary of what the model is/does. -->

This is an RWKV-7 model in the flash-linear-attention format.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Bo Peng, Yu Zhang, Songlin Yang, Ruochong Zhang
- **Funded by:** Shenzhen Yuanshi Intelligent Co. Ltd.
- **Model type:** RWKV-7
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Parameter count:** 165M
- **Tokenizer:** GPT-NeoX 20B tokenizer

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/fla-org/flash-linear-attention ; https://github.com/BlinkDL/RWKV-LM
- **Paper:** Work in progress
- **Weights:** Converted from https://modelscope.cn/models/RWKV/rwkv-7-pile/file/view/master?fileName=RWKV-x070-Pile-168M-20241120-ctx4096.pth

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

Install flash-linear-attention before using this model:

```
git clone https://github.com/fla-org/flash-linear-attention
cd flash-linear-attention
pip install -e .
```
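
If the install succeeded, a quick import check should pass (`fla` is the import name of flash-linear-attention):

```
python -c "import fla; print('fla is available')"
```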

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

You can use this model like any other Hugging Face model:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-168m-pile', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-168m-pile', trust_remote_code=True)
```
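
Once the model and tokenizer are loaded, standard `generate` calls work. A minimal sketch (the prompt and decoding settings are illustrative, not from the model card):

```
# Encode an illustrative prompt and generate greedily.
inputs = tokenizer("The Pile is a large, diverse dataset", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```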

## Training Details

### Training Data

This model was trained on the Pile, for a total of 332 billion tokens.

#### Training Hyperparameters

- **Training regime:** bfloat16 precision; learning rate 8e-4 decayed to 3e-5 on a cosine schedule; weight decay 0.1; batch size 8×30×4096
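
As a back-of-the-envelope illustration of that schedule (not the training code): at 8×30×4096 ≈ 983K tokens per step, 332B tokens is roughly 338K steps, and the cosine decay from 8e-4 to 3e-5 can be sketched as:

```
import math

PEAK_LR, FINAL_LR = 8e-4, 3e-5
TOKENS_PER_STEP = 8 * 30 * 4096              # ~983K tokens per optimizer step
TOTAL_STEPS = int(332e9 / TOKENS_PER_STEP)   # ~338K steps over 332B tokens

def cosine_lr(step: int) -> float:
    # Cosine decay from PEAK_LR at step 0 down to FINAL_LR at TOTAL_STEPS.
    progress = min(step / TOTAL_STEPS, 1.0)
    return FINAL_LR + 0.5 * (PEAK_LR - FINAL_LR) * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0), cosine_lr(TOTAL_STEPS))  # 0.0008 ... 3e-05
```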

## Evaluation

#### Metrics

- `lambada_openai`: ppl 14.2, acc 45.6%
- `piqa`: acc 65.5%
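
Numbers like these can be checked with EleutherAI's lm-evaluation-harness; the exact flags below (the batch size in particular) are an assumption, not the documented evaluation setup:

```
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=fla-hub/rwkv7-168m-pile,trust_remote_code=True \
    --tasks lambada_openai,piqa \
    --batch_size 32
```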