Llama-zh-base is an open-source project that offers a complete training pipeline for building Chinese large language models, ranging from dataset preparation to tokenization, pre-training, prompt tuning, and the reinforcement learning technique RLHF.

This is the Llama-zh-base model, trained from scratch on about 30 GB of Chinese pre-training corpus from this project and intended to provide a usable small-to-medium base model. The embedding layer and tokenizer were rebuilt. The model has not yet been instruction-tuned, and it has about 0.8B parameters.

[Repo Links](https://github.com/enze5088/Chatterbox/blob/main/docs/model/llama-zh-base.md)

## Introduction

The LLama-zh-base model is a LLama model pre-trained again from scratch, based on the architecture of the current llama model family. Because the original llama model was not trained separately on Chinese corpora, its vocabulary also covers few Chinese characters. This project therefore rebuilt Llama's tokenizer and vocabulary, re-initialized the corresponding model, and continued pre-training it on Chinese-domain data.
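
The project's actual scripts live in the linked repository; purely as an illustration of the rebuild described above, a minimal sketch using SentencePiece and transformers might look like the following. The corpus file name, vocabulary size, and model dimensions are assumptions for illustration, not the project's real settings.

```python
# Hypothetical sketch of the tokenizer/embedding rebuild described above;
# file names, vocab size, and model dimensions are illustrative assumptions.
import sentencepiece as spm
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer

# 1. Train a fresh SentencePiece vocabulary on a Chinese corpus file.
spm.SentencePieceTrainer.train(
    input="zh_corpus.txt",    # placeholder corpus path
    model_prefix="llama_zh",  # writes llama_zh.model / llama_zh.vocab
    vocab_size=50_000,
    model_type="bpe",
)
tokenizer = LlamaTokenizer(vocab_file="llama_zh.model")

# 2. Re-initialize a small Llama model from a config instead of loading
#    the original LLaMA weights (dimensions roughly sized toward ~0.8B).
config = LlamaConfig(
    vocab_size=len(tokenizer),
    hidden_size=1536,
    intermediate_size=4096,
    num_hidden_layers=24,
    num_attention_heads=16,
)
model = LlamaForCausalLM(config)  # randomly initialized, ready for pre-training
```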

## Models

The Chatterbox-Llama-zh series:

| Model name               | Size | Link                                                        |
| ------------------------ | ---- | ----------------------------------------------------------- |
| Chatterbox-Llama-zh-base | 0.8B | https://huggingface.co/TurboPascal/Chatterbox-LLaMA-zh-base |
| Chatterbox-Llama-zh-2b6  | 2.6B | Coming soon                                                 |

Notes:

1. This model does not use the original LLaMA weights, so there is no need to worry about the LLaMA weight license.
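
Since the released checkpoint is hosted on the Hugging Face Hub as a text-generation model, it should load through the standard transformers API. A minimal, untested usage sketch (the prompt is only an example):

```python
# Minimal usage sketch for the released base model. It has not been
# instruction-tuned, so treat it as a plain text-completion model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TurboPascal/Chatterbox-LLaMA-zh-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Artificial intelligence is ..." as a completion prompt.
inputs = tokenizer("人工智能是", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```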

## Data

The pre-training stage uses open-source datasets together with some data crawled by this project, about 33 GB of Chinese pre-training data in total.

### Chinese pre-training data

- Sina News data (SinaNews), 2.2 million news documents
- People's Daily data (People's Daily Datasets)
- [Chinese Wikipedia (wiki2019zh), 1 million well-structured Chinese entries](https://github.com/brightmart/nlp_chinese_corpus)
- [News corpus (news2016zh), 2.5 million news articles with keywords and descriptions](https://github.com/brightmart/nlp_chinese_corpus)
- [Community Q&A, JSON version (webtext2019zh), 4.1 million high-quality community Q&A pairs](https://github.com/brightmart/nlp_chinese_corpus)
- [THUCNews data (THUCNews), 740,000 news documents (2.19 GB)](http://thuctc.thunlp.org/#%E4%B8%AD%E6%96%87%E6%96%87%E6%9C%AC%E5%88%86%E7%B1%BB%E6%95%B0%E6%8D%AE%E9%9B%86THUCNews)
- [Comment corpus (comments2019zh_corpus), 2.4 million comments](https://github.com/CLUEbenchmark/CLUECorpus2020)
- [Community interaction corpus (webText2019zh_corpus), 3.1 million community interaction records](https://github.com/CLUEbenchmark/CLUECorpus2020)
- [Scientific literature data (CSL), about 400,000 abstracts from Chinese core journals](https://github.com/ydli-ai/CSL)
- [Belle dataset](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
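
Most of the corpora above are downloaded from the linked repositories; the Belle set is hosted on the Hugging Face Hub, so, assuming the standard `datasets` API and a default `train` split, it could be pulled with something like:

```python
# Hypothetical sketch: loading the Belle dataset listed above via the
# Hugging Face datasets library (split name assumed to be "train").
from datasets import load_dataset

belle = load_dataset("BelleGroup/train_2M_CN", split="train")
print(belle[0])  # inspect a single record
```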