Update README.md
README.md CHANGED
@@ -27,12 +27,12 @@ LLM-jp-3.1 is the series of large language models developed by the [Research and
 
 The LLM-jp-3.1 series consists of models that have undergone mid-training ([instruction pre-training](https://aclanthology.org/2024.emnlp-main.148/)) based on the LLM-jp-3 series, resulting in a significant improvement in instruction-following capabilities compared to the original LLM-jp-3 models.
 
-
 This repository provides the **llm-jp-3.1-1.8b-instruct4** model.
 For an overview of the LLM-jp-3.1 models across different parameter sizes, please refer to:
 - [LLM-jp-3.1 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-31-pre-trained-models-68368787c32e462c40a45f7b)
 - [LLM-jp-3.1 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-31-fine-tuned-models-68368681b9b35de1c4ac8de4)
 
+For more details on training and evaluation results, please refer to [this blog post]() (in Japanese).
 
 Checkpoints format: Hugging Face Transformers
 
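The hunk above attributes the improved instruction following to mid-training via instruction pre-training (Cheng et al., EMNLP 2024), in which raw pretraining text is augmented with instruction–response pairs before continued pretraining. As a rough illustration of that data-mixing idea only, not LLM-jp's actual pipeline, and with template strings that are assumptions, a mixed training document could be assembled like this:

```python
# Toy illustration of the instruction pre-training data mix (Cheng et al., 2024):
# raw corpus text is augmented with instruction-response pairs, and the combined
# document re-enters pre-training. Templates are assumptions, not LLM-jp's format.
def build_mixed_document(raw_text: str, pairs: list[tuple[str, str]]) -> str:
    """Concatenate raw text with instruction-response pairs into one document."""
    sections = [raw_text]
    for instruction, response in pairs:
        sections.append(f"Instruction: {instruction}\nResponse: {response}")
    return "\n\n".join(sections)

doc = build_mixed_document(
    "LLM-jp-3.1 is a series of large language models developed in Japan...",
    [("Summarize the passage above.", "It introduces the LLM-jp-3.1 model series.")],
)
print(doc)  # one document in the mid-training corpus
```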
@@ -73,6 +73,7 @@ print(tokenizer.decode(output))
 ## Model Details
 
 - **Model type:** Transformer-based Language Model
+- **Architectures:**
 
 Dense model:
 |Params|Layers|Hidden size|Heads|Context length|Embedding parameters|Non-embedding parameters|
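For context, the second hunk's header shows that the README's usage snippet ends with `print(tokenizer.decode(output))`, i.e., the checkpoint loads with standard Hugging Face Transformers. A minimal loading sketch might look as follows; the Hub ID `llm-jp/llm-jp-3.1-1.8b-instruct4`, the prompt, and the sampling settings are assumptions rather than the README's exact code:

```python
# Minimal usage sketch, not the repository's official snippet: assumes the model
# is published as "llm-jp/llm-jp-3.1-1.8b-instruct4" and ships a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "llm-jp/llm-jp-3.1-1.8b-instruct4"  # assumed Hub ID for this repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype=torch.bfloat16
)

# Build a chat-formatted prompt and generate a response.
chat = [{"role": "user", "content": "What is natural language processing?"}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128, do_sample=True, top_p=0.95)[0]
print(tokenizer.decode(output))
```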