---
license: other
license_name: wdn
language:
- zh
pipeline_tag: text-generation
tags:
- chat
---

## Tiny LLM 92M SFT

### Introduction

The project [wdndev/tiny-llm-zh (github.com)](https://github.com/wdndev/tiny-llm-zh) aims to build a Chinese large language model with a small parameter count, intended as a quick, hands-on way to learn the fundamentals of large language models.

Model architecture: the overall design follows common open-source architectures, including RMSNorm, RoPE, and multi-head attention (MHA).
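For reference, RMSNorm (used in LLaMA-style models) normalizes activations by their root mean square, with no mean-centering and no bias. A minimal generic PyTorch sketch, not the project's exact implementation:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer norm, as used in LLaMA-style models."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learned per-channel gain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the RMS over the hidden dimension; no mean subtraction.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)
```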

Implementation details: the project implements the full two-stage LLM training pipeline plus subsequent human alignment, i.e. pre-training (PTM) -> supervised fine-tuning (SFT) -> human alignment (RLHF, DPO) -> evaluation.
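For reference, the DPO stage optimizes a preference loss over chosen/rejected response pairs against a frozen reference model. A generic sketch of the standard DPO loss (Rafailov et al., 2023), not the project's training code:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss; each argument is the summed log-prob of a
    response under the policy or the frozen reference model, and beta
    controls the strength of the implicit KL constraint."""
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(logits).mean()
```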

Note: due to resource constraints, the first priority of this project is to walk through the entire LLM workflow end to end, rather than to tune the model for strong results. Evaluation scores are therefore low, and some generations are malformed.

### Model Details

The model was trained on a Chinese corpus of roughly 9B tokens, mainly encyclopedia content; the architecture follows the common open-source design noted above (RMSNorm, RoPE, MHA).
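To see the exact hyperparameters behind the 92M parameter count (hidden size, layer count, attention heads, vocabulary size), you can inspect the config shipped with the checkpoint:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("wdndev/tiny_llm_sft_92m", trust_remote_code=True)
print(config)  # prints hidden size, number of layers, heads, vocab size, ...
```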

### Environment

Only the `transformers` library (with a PyTorch backend) is needed to run the model: `pip install transformers torch`.

### Quick Start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "wdndev/tiny_llm_sft_92m"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

# System prompt: "You are a personal assistant developed by wdndev."
sys_text = "你是由wdndev开发的个人助手。"
# user_text = "中国的首都是哪儿?"  # "What is the capital of China?"
# user_text = "你叫什么名字?"  # "What is your name?"
user_text = "介绍一下中国"  # "Tell me about China."

# Assemble the prompt in the model's chat format:
#   <|system|>\n{system}\n<|user|>\n{user}\n<|assistant|>\n
input_txt = "\n".join(["<|system|>", sys_text.strip(),
                       "<|user|>", user_text.strip(),
                       "<|assistant|>"]).strip() + "\n"

model_inputs = tokenizer(input_txt, return_tensors="pt").to(model.device)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=200)
# Drop the prompt tokens so only the newly generated reply is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
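The call above uses greedy decoding. For more varied output you can enable sampling in `model.generate`; the values below are illustrative defaults, not settings recommended by the project:

```python
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=200,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.8,         # illustrative value, not tuned for this model
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # mildly discourage repetition loops in small models
)
```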