Update README.md

README.md (CHANGED):

````diff
@@ -1,16 +1,16 @@
 ---
 license: apache-2.0
 ---
-### Huggingface RWKV Flock of Finches
+### Huggingface RWKV Flock of Finches 37B-A11B Mixture of Experts Model
 
-> HF compatible model for Finch-MoE-
+> HF compatible model for Finch-MoE-37B-A11B-v0.1
 
 
 
 
 > **! Important Note !**
 >
-> The following is the HF transformers implementation of the Flock of Finches Mixture of Experts
+> The following is the HF transformers implementation of the Flock of Finches Mixture of Experts 37B-A11B model. This is meant to be used with the huggingface transformers
 >
 >
 
````
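The model renamed above is a Mixture of Experts: a router scores a pool of experts for each token and only the top-k of them actually run, which is why the active parameter count (the "A11B" part of the name) is far below the 37B total. A minimal, self-contained sketch of that routing step (toy sizes; all names are hypothetical and not taken from the RWKV codebase):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of router scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def route_top_k(router_logits, k=2):
    # Keep only the k highest-scoring experts and renormalize their
    # weights; the remaining experts are never executed for this token,
    # which is what keeps "active" parameters below the total count.
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# Toy router scores for 4 hypothetical experts: experts 1 and 3 win.
print(route_top_k([0.1, 2.0, -1.0, 1.5], k=2))
```

The token's output is then the weighted sum of just those k expert outputs.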
````diff
@@ -18,21 +18,20 @@ license: apache-2.0
 ## Quickstart with the hugging face transformer library
 
 ```
-model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-
-tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-
+model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True).to(torch.float32)
+tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True)
 ```
 
 ## Evaluation
 
-The following demonstrates the improvements from Eagle 7B to Flock of Finches
+The following demonstrates the improvements from Eagle 7B to Flock of Finches 37B-A11B v0.1
 
-| | [Eagle 7B](https://huggingface.co/RWKV/v6-Finch-7B-HF) | [Finch 7B](https://huggingface.co/RWKV/v6-Finch-7B-HF) | [Finch 14B](https://huggingface.co/RWKV/v6-Finch-14B-HF) | [Flock of Finches
+| | [Eagle 7B](https://huggingface.co/RWKV/v6-Finch-7B-HF) | [Finch 7B](https://huggingface.co/RWKV/v6-Finch-7B-HF) | [Finch 14B](https://huggingface.co/RWKV/v6-Finch-14B-HF) | [Flock of Finches 37B-A11B v0.1](https://huggingface.co/RWKV/Finch-MoE-37B-A11B-v0.1-HF)
 | --- | --- | --- | --- | --- |
-| [ARC](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/arc) | 39.59% | 41.47% | 46.33% | 48.
-| [HellaSwag](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/hellaswag) | 53.09% | 55.96% | 57.69% |
-| [MMLU](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/mmlu) | 30.86% | 41.70% | 56.05% | 55.
-| [
-| [Winogrande](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/winogrande) | 67.56% | 71.19% | 74.43% | 75.77%
+| [ARC C](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/arc) | 39.59% | 41.47% | 46.33% | 48.04%
+| [HellaSwag](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/hellaswag) | 53.09% | 55.96% | 57.69% | 56.76%
+| [MMLU](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/mmlu) | 30.86% | 41.70% | 56.05% | 55.58%
+| [Winogrande](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/winogrande) | 67.56% | 71.19% | 74.43% | 75.14%
 
 #### Running on CPU via HF transformers
 
````
````diff
@@ -59,8 +58,8 @@ User: {instruction}
 Assistant:"""
 
 
-model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-
-tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-
+model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True).to(torch.float32)
+tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True)
 
 text = "请介绍北京的旅游景点"
 prompt = generate_prompt(text)
````
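The chat examples call a generate_prompt helper whose definition falls outside the hunks shown in this diff. Judging from the `User: {instruction}` and `Assistant:"""` fragments that do appear in the hunk context, it is a single-turn template along these lines (a hedged reconstruction, not the repository's exact code):

```python
def generate_prompt(instruction):
    # Single-turn chat template matching the User/Assistant fragments
    # visible in the diff context; the real definition may differ.
    return f"""User: {instruction.strip()}

Assistant:"""

prompt = generate_prompt("请介绍北京的旅游景点")
print(prompt)
```

The model then continues the text after "Assistant:" when the prompt is tokenized and passed to generate.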
````diff
@@ -115,8 +114,8 @@ User: {instruction}
 Assistant:"""
 
 
-model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-
-tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-
+model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True, torch_dtype=torch.float16).to(0)
+tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True)
 
 text = "介绍一下大熊猫"
 prompt = generate_prompt(text)
````
````diff
@@ -162,8 +161,8 @@ User: {instruction}
 
 Assistant:"""
 
-model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-
-tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-
+model = AutoModelForCausalLM.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True).to(torch.float32)
+tokenizer = AutoTokenizer.from_pretrained("RWKV/Finch-MoE-37B-A11B-v0.1-HF", trust_remote_code=True)
 
 texts = ["请介绍北京的旅游景点", "介绍一下大熊猫", "乌兰察布"]
 prompts = [generate_prompt(text) for text in texts]
````
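The last hunk feeds several prompts of different lengths in one batch. Before a batched forward pass the tokenized prompts must be padded to a single rectangular batch; a minimal sketch of left-padding (a generic technique with a hypothetical pad_batch name — in practice the HF tokenizer's padding options handle this):

```python
def pad_batch(token_id_seqs, pad_id=0):
    # Left-pad variable-length token id lists to equal length so the
    # last real token of every prompt sits in the final column, which
    # is where batched generation continues from.
    width = max(len(seq) for seq in token_id_seqs)
    return [[pad_id] * (width - len(seq)) + seq for seq in token_id_seqs]

batch = pad_batch([[5, 6, 7], [8], [9, 10]])
print(batch)
```

Left-padding (rather than right-padding) keeps the generation position aligned across the batch.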