---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
library_name: transformers
inference: false
---

# llm-jp-3-1.8b

This repository provides large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).

| Model Variants |
| :--- |
| [llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b) |
| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) |
| [llm-jp-3-3.7b](https://huggingface.co/llm-jp/llm-jp-3-3.7b) |
| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) |
| [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) |
| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) |
| [llm-jp-3-172b-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1) |
| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) |

Checkpoints format: Hugging Face Transformers

## Required Libraries and Their Versions

- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-1.8b")
# bfloat16 halves memory use; device_map="auto" places the model on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-1.8b", device_map="auto", torch_dtype=torch.bfloat16)
text = "自然言語処理とは何か"  # "What is natural language processing?"
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```
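
For the instruct variants, prompts are normally built with the tokenizer's chat template rather than passed as raw text. A minimal sketch, assuming the `llm-jp-3-1.8b-instruct` tokenizer ships a chat template (see the instruct model card for the recommended settings):

```python
# Hedged sketch for the instruct variant; assumes the tokenizer of
# llm-jp/llm-jp-3-1.8b-instruct provides a chat template (not verified here).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-1.8b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-1.8b-instruct", device_map="auto", torch_dtype=torch.bfloat16
)
chat = [{"role": "user", "content": "自然言語処理とは何か"}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=100, do_sample=True, top_p=0.95, temperature=0.7)[0]
print(tokenizer.decode(output, skip_special_tokens=True))
```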

## Model Details

- **Model type:** Transformer-based Language Model
- **Total seen tokens:** 2.1T

|Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|1.8b|24|2048|16|4096|407,896,064|1,459,718,144|
|3.7b|28|3072|24|4096|611,844,096|3,171,068,928|
|13b|40|5120|40|4096|1,019,740,160|12,688,184,320|
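
The embedding/non-embedding split in the table can be cross-checked from a loaded checkpoint. A rough sketch, assuming the input and output embeddings are untied and both counted as embedding parameters (an inference from the table values, not a documented convention):

```python
# Rough cross-check of the "Embedding parameters" / "Non-embedding parameters"
# columns for the 1.8b checkpoint; the untied-embedding assumption is inferred
# from the table, not stated in this card.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-1.8b")
emb_in = model.get_input_embeddings().weight.numel()
emb_out = model.get_output_embeddings().weight.numel()
embedding = emb_in + (0 if model.config.tie_word_embeddings else emb_out)
total = sum(p.numel() for p in model.parameters())
print(f"embedding:     {embedding:,}")
print(f"non-embedding: {total - embedding:,}")
```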

## Tokenizer

The tokenizer of this model is based on the Unigram byte-fallback model of [huggingface/tokenizers](https://github.com/huggingface/tokenizers).
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (note that pure SentencePiece training does not reproduce our vocabulary).
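
A quick way to inspect the tokenizer; the printed values are illustrative rather than quoted from this card:

```python
# Load the tokenizer and look at its Unigram segmentation; byte fallback covers
# characters that are not in the vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-1.8b")
print(len(tokenizer))                              # vocabulary size
print(tokenizer.tokenize("自然言語処理とは何か"))  # Unigram segmentation of a Japanese sentence
```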

## Datasets

### Pre-training

The models have been pre-trained using a blend of the following datasets.

| Language | Dataset | Tokens |
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B|
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B|
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|237.3B|
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B|
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B|
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B|
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B|
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B|
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B|
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B|
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B|
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B|
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B|
|Chinese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.8B|
|Korean|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.3B|
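
Summing the table provides a rough cross-check against the 2.1T total seen tokens listed under Model Details:

```python
# Rough sanity check: the per-dataset token counts above (in billions) sum to
# roughly the 2.1T total reported in Model Details.
japanese = [2.6, 762.8, 237.3, 2.7, 1.8]
english = [4.7, 608.5, 181.6, 83.1, 62.9, 5.5, 3.9]
code, chinese, korean = [114.1], [0.8], [0.3]
total_billion = sum(japanese + english + code + chinese + korean)
print(f"{total_billion / 1000:.2f}T")  # ~2.07T
```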

### Instruction tuning

The models have been fine-tuned on the following datasets.

| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed instruction dataset. |
| |[answer-carefully-002](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed instruction dataset focusing on LLM safety. |
| |ichikara-instruction-format| A small instruction dataset edited from ichikara-instruction, with constraints on the output format. |
| |[AutoMultiTurnByCalm3-22B](https://huggingface.co/datasets/kanhatakeyama/AutoMultiTurnByCalm3-22B)| A synthetic instruction dataset. |
| |[ramdom-to-fixed-multiturn-Calm3](https://huggingface.co/datasets/kanhatakeyama/ramdom-to-fixed-multiturn-Calm3)| A synthetic instruction dataset. |
| |[wizardlm8x22b-logical-math-coding-sft_additional-ja](https://huggingface.co/datasets/kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja)| A synthetic instruction dataset. |
| |[Synthetic-JP-EN-Coding-Dataset-567k](https://huggingface.co/datasets/Aratako/Synthetic-JP-EN-Coding-Dataset-567k)| A synthetic instruction dataset; we used a sampled subset. |
|English |[FLAN](https://huggingface.co/datasets/Open-Orca/FLAN) | We used a sampled subset. |

## Evaluation

### llm-jp-eval (v1.3.1)

We evaluated the models using 100 examples from the dev split.

| Model name | average | EL | FA | HE | MC | MR | MT | NLI | QA | RC |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| [llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b) | 0.3767 | 0.3725 | 0.1948 | 0.2350 | 0.2500 | 0.0900 | 0.7730 | 0.3080 | 0.4629 | 0.7040 |
| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) | 0.4596 | 0.4280 | 0.1987 | 0.3250 | 0.3300 | 0.4200 | 0.7900 | 0.3520 | 0.4698 | 0.8224 |
| [llm-jp-3-3.7b](https://huggingface.co/llm-jp/llm-jp-3-3.7b) | 0.4231 | 0.3812 | 0.2440 | 0.2200 | 0.1900 | 0.3600 | 0.7947 | 0.3800 | 0.4688 | 0.7694 |
| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) | 0.5188 | 0.4191 | 0.2504 | 0.3400 | 0.5000 | 0.5800 | 0.8166 | 0.4500 | 0.4881 | 0.8247 |
| [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) | 0.5802 | 0.5570 | 0.2593 | 0.4600 | 0.7000 | 0.6300 | 0.8292 | 0.3460 | 0.5937 | 0.8469 |
| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) | 0.6168 | 0.5408 | 0.2757 | 0.4950 | 0.9200 | 0.7100 | 0.8317 | 0.4640 | 0.4642 | 0.8500 |
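
The average column is consistent with the unweighted mean of the nine category scores; for example, for the llm-jp-3-1.8b row:

```python
# Sanity check on how the llm-jp-eval table is read: "average" matches the
# unweighted mean of the nine category scores (llm-jp-3-1.8b row).
scores = {
    "EL": 0.3725, "FA": 0.1948, "HE": 0.2350, "MC": 0.2500, "MR": 0.0900,
    "MT": 0.7730, "NLI": 0.3080, "QA": 0.4629, "RC": 0.7040,
}
print(round(sum(scores.values()) / len(scores), 4))  # 0.3767, matching the table
```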

### Japanese MT Bench

| Model name | average | coding | extraction | humanities | math | reasoning | roleplay | stem | writing |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) | 4.93 | 1.50 | 4.70 | 7.80 | 1.55 | 2.60 | 7.80 | 6.10 | 7.40 |
| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) | 5.50 | 1.95 | 4.05 | 8.25 | 2.25 | 4.00 | 8.80 | 7.25 | 7.45 |
| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) | 6.47 | 3.15 | 7.05 | 9.15 | 3.75 | 5.40 | 8.30 | 7.50 | 7.45 |

## Risks and Limitations

The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Send Questions to

llm-jp(at)nii.ac.jp

## License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Model Card Authors

*The names are listed in alphabetical order.*

Hirokazu Kiyomaru and Takashi Kodama.