Upload folder using huggingface_hub #1
by soldni · opened
- README.md +52 -41
- config.json +1 -1
- generation_config.json +1 -1
- model-00001-of-00029.safetensors +1 -1
- model-00002-of-00029.safetensors +1 -1
- model-00003-of-00029.safetensors +1 -1
- model-00004-of-00029.safetensors +1 -1
- model-00005-of-00029.safetensors +1 -1
- model-00006-of-00029.safetensors +1 -1
- model-00007-of-00029.safetensors +1 -1
- model-00008-of-00029.safetensors +1 -1
- model-00009-of-00029.safetensors +1 -1
- model-00010-of-00029.safetensors +1 -1
- model-00011-of-00029.safetensors +1 -1
- model-00012-of-00029.safetensors +1 -1
- model-00013-of-00029.safetensors +1 -1
- model-00014-of-00029.safetensors +1 -1
- model-00015-of-00029.safetensors +1 -1
- model-00016-of-00029.safetensors +1 -1
- model-00017-of-00029.safetensors +1 -1
- model-00018-of-00029.safetensors +1 -1
- model-00019-of-00029.safetensors +1 -1
- model-00020-of-00029.safetensors +1 -1
- model-00021-of-00029.safetensors +1 -1
- model-00022-of-00029.safetensors +1 -1
- model-00023-of-00029.safetensors +1 -1
- model-00024-of-00029.safetensors +1 -1
- model-00025-of-00029.safetensors +1 -1
- model-00026-of-00029.safetensors +1 -1
- model-00027-of-00029.safetensors +1 -1
- model-00028-of-00029.safetensors +1 -1
- model-00029-of-00029.safetensors +1 -1
README.md
CHANGED
@@ -11,10 +11,12 @@ language:
 
 # Model Card for OLMo 2 32B
 
-We introduce OLMo 2 32B,
+We introduce OLMo 2 32B, the largest model in the OLMo 2 family.
+OLMo 2 was pre-trained on [OLMo-mix-1124](https://huggingface.co/datasets/allenai/olmo-mix-1124)
+and uses [Dolmino-mix-1124](https://huggingface.co/datasets/allenai/dolmino-mix-1124) for mid-training.
 
-OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
-
+OLMo 2 is the latest in a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
+We have released all code, checkpoints, logs, and associated training details on [GitHub](https://github.com/allenai/OLMo-core).
 
 | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
 |------|--------|---------|-------------|-----------------|----------------|
@@ -24,7 +26,7 @@ These models are trained on the Dolma dataset. We have released all code, checkp
 
 The core models released in this batch include the following:
 
-| **Stage** | **OLMo 2 32B** | **OLMo 2 13B** | **OLMo 2 7B**
+| **Stage** | **OLMo 2 32B** | **OLMo 2 13B** | **OLMo 2 7B**
 |----------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|
 | **Base Model** | [allenai/OLMo-2-0325-32B](https://huggingface.co/allenai/OLMo-2-0325-32B) | [allenai/OLMo-2-1124-13B](https://huggingface.co/allenai/OLMo-2-1124-13B) | [allenai/OLMo-2-1124-7B](https://huggingface.co/allenai/OLMo-2-1124-7B) |
 | **SFT** | [allenai/OLMo-2-0325-32B-SFT](https://huggingface.co/allenai/OLMo-2-0325-32B-SFT) | [allenai/OLMo-2-1124-13B-SFT](https://huggingface.co/allenai/OLMo-2-1124-13B-SFT) | [allenai/OLMo-2-1124-7B-SFT](https://huggingface.co/allenai/OLMo-2-1124-7B-SFT) |
@@ -34,11 +36,13 @@ The core models released in this batch include the following:
 
 ## Installation
 
-OLMo 2
+OLMo 2 32B is supported in transformers v4.48 or higher:
 ```bash
-pip install
+pip install transformers>=4.48
 ```
 
+If using vLLM, you will need to install from the main branch until v0.7.4 is released. Please
+
 ## Inference
 
 You can use OLMo with the standard HuggingFace transformers library:
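For context, the standard transformers usage referenced in the hunk above looks roughly like this; the snippet is an illustrative sketch (the prompt and sampling settings below are not prescribed by the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the released checkpoint; device_map="auto" (requires accelerate)
# spreads the 32B weights across available GPUs.
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0325-32B")
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0325-32B", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Language modeling is ", return_tensors="pt").to(model.device)
response = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```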
@@ -58,8 +62,8 @@ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
 
 For faster performance, you can quantize the model using the following method:
 ```python
-AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0325-32B",
-    torch_dtype=torch.float16,
+AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-0325-32B",
+    torch_dtype=torch.float16,
     load_in_8bit=True) # Requires bitsandbytes
 ```
 The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:
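The CUDA-input snippet the last context line refers to sits outside this hunk. A sketch of the full quantized flow, assuming `bitsandbytes` is installed and a CUDA device is available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-0325-32B")
# 8-bit loading requires the bitsandbytes package.
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0325-32B", torch_dtype=torch.float16, load_in_8bit=True
)

# The quantized model is sensitive to input placement: move the token ids
# to CUDA explicitly before generating.
inputs = tokenizer("Language modeling is ", return_tensors="pt")
response = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=32)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```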
@@ -81,7 +85,6 @@ from huggingface_hub import list_repo_refs
 out = list_repo_refs("allenai/OLMo-2-0325-32B")
 branches = [b.name for b in out.branches]
 ```
-Note: vLLM for OLMo2 32B does not correctly handle attention when the number of heads differs from the number of KV heads (i.e., when using Grouped-Query Attention (GQA) or Multi-Query Attention (MQA) instead of Multi-Head Attention (MHA)). Specifically, it incorrectly splits QKV into equal chunks rather than based on the actual sizes of Q, K, and V. vLLM hasn't released a version with the fix yet ([Issue](https://github.com/vllm-project/vllm/pull/13687)).
 
 ### Fine-tuning
 Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
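To load one of those intermediate checkpoints, pass the branch name as `revision`; a minimal sketch (the branch name below is hypothetical, substitute one from the `branches` list above):

```python
from transformers import AutoModelForCausalLM

# "stage1-step1000-tokens5B" is a made-up example branch name;
# pick a real one from the `branches` list.
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-2-0325-32B", revision="stage1-step1000-tokens5B"
)
```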
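On the removed vLLM note above: the equal-chunk bug is easiest to see with concrete sizes. A sketch using small, hypothetical GQA dimensions (not OLMo 2 32B's actual configuration):

```python
# Hypothetical GQA geometry, for illustration only.
head_dim = 64
n_heads = 8      # query heads
n_kv_heads = 2   # key/value heads (GQA: fewer than query heads)

q_size = n_heads * head_dim         # 512
kv_size = n_kv_heads * head_dim     # 128
fused_width = q_size + 2 * kv_size  # 768 columns in the fused QKV projection

# Correct split: use the actual Q, K, V widths.
q_w, k_w, v_w = q_size, kv_size, kv_size  # (512, 128, 128)

# Buggy split: three equal chunks, which is only right when
# n_heads == n_kv_heads (plain MHA).
q_bad = k_bad = v_bad = fused_width // 3  # 256 each, so Q, K, V are misaligned
```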
@@ -111,7 +114,7 @@ For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo-
 ### Model Sources
 
 - **Project Page:** https://allenai.org/olmo
-- **Repositories:**
+- **Repositories:**
   - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo-core
   - Evaluation code: https://github.com/allenai/OLMo-Eval
   - Further fine-tuning code: https://github.com/allenai/open-instruct
@@ -123,70 +126,78 @@ For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo-
 ## Evaluation
 Core model results for OLMo 2 32B are found below.
 
-
-
-
-
+
+| Model | Training FLOPs | Average | ARC/C | HSwag | WinoG | MMLU | DROP | NQ | AGIEval | GSM8k | MMLUPro | TriviaQA |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| **Open weights models** | | | | | | | | | | | | |
+| Llama-2-13B | 1.6 · 10^23 | 54.1 | 67.3 | 83.9 | 74.9 | 55.7 | 45.6 | 38.4 | 41.5 | 28.1 | 23.9 | 81.3 |
 | Mistral-7B-v0.3 | n/a | 58.8 | 78.3 | 83.1 | 77.7 | 63.5 | 51.8 | 37.2 | 47.3 | 40.1 | 30 | 79.3 |
-| Llama-3.1-8B | 7.2·10
+| Llama-3.1-8B | 7.2 · 10^23 | 61.8 | 79.5 | 81.6 | 76.6 | 66.9 | 56.4 | 33.9 | 51.3 | 56.5 | 34.7 | 80.3 |
 | Mistral-Nemo-12B | n/a | 66.9 | 85.2 | 85.6 | 81.5 | 69.5 | 69.2 | 39.7 | 54.7 | 62.1 | 36.7 | 84.6 |
-| Qwen-2.5-7B | 8.2·10
-| Gemma-2-9B | 4.4·10
-
-
-
+| Qwen-2.5-7B | 8.2 · 10^23 | 67.4 | 89.5 | 89.7 | 74.2 | 74.4 | 55.8 | 29.9 | 63.7 | 81.5 | 45.8 | 69.4 |
+| Gemma-2-9B | 4.4 · 10^23 | 67.8 | 89.5 | 87.3 | 78.8 | 70.6 | 63 | 38 | 57.3 | 70.1 | 42 | 81.8 |
+| Mistral-Small-24B | n/a | 75.2 | 93.3 | 91.3 | 77.8 | 80.7 | 74.4 | 42.3 | 69.1 | 79.7 | 54.2 | 88.8 |
+| Gemma-2-27B | 2.1 · 10^24 | 71.3 | 90.7 | 88.4 | 74.5 | 75.7 | 70.1 | 44.7 | 61.5 | 75.7 | 44.7 | 87.4 |
+| Qwen-2.5-14B | 1.6 · 10^24 | 72.2 | 94.0 | 94.0 | 80.0 | 79.3 | 51.5 | 37.3 | 71.0 | 83.4 | 52.8 | 79.1 |
+| Qwen-2.5-32B | 3.5 · 10^24 | 74.9 | 95.6 | 96.0 | 84.0 | 83.1 | 53.1 | 37.0 | 78.0 | 83.3 | 59.0 | 79.9 |
+| **Partially open models** | | | | | | | | | | | | |
+| StableLM-2-12B | 2.9 · 10^23 | 62.2 | 81.9 | 84.5 | 77.7 | 62.4 | 55.5 | 37.6 | 50.9 | 62 | 29.3 | 79.9 |
 | Zamba-2-7B | n/c | 65.2 | 92.2 | 89.4 | 79.6 | 68.5 | 51.7 | 36.5 | 55.5 | 67.2 | 32.8 | 78.8 |
-
-| Amber-7B | 0.5·10
-| OLMo-7B | 1.0·10
-| MAP-Neo-7B | 2.1·10
-| OLMo-0424-7B | 0.9·10
-| DCLM-7B | 1.0·10
-
-
+| **Fully open models** | | | | | | | | | | | | |
+| Amber-7B | 0.5 · 10^23 | 35.2 | 44.9 | 74.5 | 65.5 | 24.7 | 26.1 | 18.7 | 21.8 | 4.8 | 11.7 | 59.3 |
+| OLMo-7B | 1.0 · 10^23 | 38.3 | 46.4 | 78.1 | 68.5 | 28.3 | 27.3 | 24.8 | 23.7 | 9.2 | 12.1 | 64.1 |
+| MAP-Neo-7B | 2.1 · 10^23 | 49.6 | 78.4 | 72.8 | 69.2 | 58 | 39.4 | 28.9 | 45.8 | 12.5 | 25.9 | 65.1 |
+| OLMo-0424-7B | 0.9 · 10^23 | 50.7 | 66.9 | 80.1 | 73.6 | 54.3 | 50 | 29.6 | 43.9 | 27.7 | 22.1 | 58.8 |
+| DCLM-7B | 1.0 · 10^23 | 56.9 | 79.8 | 82.3 | 77.3 | 64.4 | 39.3 | 28.8 | 47.5 | 46.1 | 31.3 | 72.1 |
+| OLMo-2-1124-7B | 1.8 · 10^23 | 62.9 | 79.8 | 83.8 | 77.2 | 63.7 | 60.8 | 36.9 | 50.4 | 67.5 | 31.0 | 78 |
+| OLMo-2-1124-13B | 4.6 · 10^23 | 68.3 | 83.5 | 86.4 | 81.5 | 67.5 | 70.7 | 46.7 | 54.2 | 75.1 | 35.1 | 81.9 |
+| **OLMo-2-0325-32B** | 1.3 · 10^24 | 72.9 | 90.4 | 89.7 | 78.7 | 74.9 | 74.3 | 50.2 | 61.0 | 78.8 | 43.3 | 88.0 |
+
+- *Columns ARC/C through NQ represent metrics tracked during OLMo 2 development.*
+- *Columns AGIEval through TriviaQA represent unseen evals.*
 
 ## Model Details
 
 ### Pretraining
 | | **OLMo 2 32B** | **OLMo 2 13B** | **OLMo 2 7B** |
 |-------------------|------------|------------|------------|
-| Pretraining Stage 1 | 6 trillion tokens<br>(1 epoch) | 5 trillion tokens<br>(1.2 epochs) | 4 trillion tokens<br>(1 epoch) |
+| Pretraining Stage 1 | 6 trillion tokens<br>(1.5 epoch) | 5 trillion tokens<br>(1.2 epochs) | 4 trillion tokens<br>(1 epoch) |
 | Pretraining Stage 2 | 100B tokens (2 runs)<br>300B tokens (1 run)<br>*merged* | 100B tokens (3 runs)<br>300B tokens (1 run)<br>*merged* | 50B tokens (3 runs)<br>*merged* |
 | Post-training | SFT + DPO + PPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-32b-pref-mix-v1)) | SFT + DPO + PPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-1124-13b-preference-mix)) | SFT + DPO + PPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix)) |
 
 #### Stage 1: Initial Pretraining
-- Dataset: [OLMo-
-- Coverage:
-- 32B Model: ~1 epoch
+- Dataset: [OLMo-mix-1124](https://huggingface.co/datasets/allenai/olmo-mix-1124) (3.9T tokens)
+- Coverage: 95%+ of total pretraining budget
+- 32B Model: ~1.5 epoch
 
 #### Stage 2: Fine-tuning
-- Dataset: Dolmino-Mix-
--
--  - 100B tokens
+- Dataset: Dolmino-Mix-1124
+- Two training mixes:
   - 100B tokens
   - 300B tokens
-- Mix composition: 50% high-quality data + academic/Q&A/instruction/math content
+- Mix composition: 50% high-quality web data + academic/Q&A/instruction/math content
 
 #### Model Merging
-- 32B Model:
+- 32B Model: 3 versions on 100B mix + 1 version on 300B mix, merged for final checkpoint
 
 
 ## Bias, Risks, and Limitations
-Like any base
+Like any base or fine-tuned language model, AI can be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, many statements from OLMo or any LLM are often inaccurate, so facts should be verified.
 
 
 ## Citation
 ```
 @misc{olmo20242olmo2furious,
-      title={2 OLMo 2 Furious},
+      title={{2 OLMo 2 Furious}},
       author={Team OLMo and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
       year={2024},
       eprint={2501.00656},
       archivePrefix={arXiv},
       primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2501.00656},
+      url={https://arxiv.org/abs/2501.00656},
 }
 ```
 
 ## Model Card Contact
-For errors in this model card, contact `[email protected]`.
+For errors in this model card, contact `[email protected]`.
+
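The model-merging rows in the diff above don't spell out the merging recipe. A plain parameter-wise average of checkpoints ("model souping") is the common baseline; a sketch with hypothetical local checkpoint paths (illustrative only; a real merge of 32B checkpoints would stream tensors to bound memory):

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical local paths to the stage-2 runs being merged.
paths = ["run-100b-v1", "run-100b-v2", "run-100b-v3", "run-300b-v1"]
models = [AutoModelForCausalLM.from_pretrained(p, torch_dtype=torch.float32)
          for p in paths]

merged = models[0]
with torch.no_grad():
    # Parameter-wise mean across all runs.
    for name, param in merged.named_parameters():
        param.copy_(torch.stack(
            [dict(m.named_parameters())[name] for m in models]
        ).mean(dim=0))

merged.save_pretrained("olmo2-32b-merged")
```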
config.json
CHANGED
@@ -21,7 +21,7 @@
   "rope_theta": 500000,
   "tie_word_embeddings": false,
   "torch_dtype": "float32",
-  "transformers_version": "4.
+  "transformers_version": "4.49.0",
   "use_cache": true,
   "vocab_size": 100352
 }
generation_config.json
CHANGED
@@ -3,5 +3,5 @@
   "bos_token_id": 100257,
   "eos_token_id": 100257,
   "pad_token_id": 100277,
-  "transformers_version": "4.
+  "transformers_version": "4.49.0"
 }
model-00001-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:cb1a14b6f31c5ea5aebd91738cfcb67a8515b97474c51e866bf39785bc02be8f
 size 4823541920
model-00002-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6441be4d3bb8e508cdee671bb1d64da498932ec5490ab815c26183f2448019df
 size 4467067512
model-00003-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:e368dad6a1531e2c8e72ab3937f1fe4955471b2b288f8064ab3b1e9af144985a
 size 4718792208
model-00004-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:90ea66a068affcf737ab812d04fc12d9bcdbbb21d78a8df59b29f5d43504b5f9
 size 4467067512
model-00005-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:435fd760305d180ffd33d8dcc25758c827c4a2d6ae317bd7a9ed278c5b659c32
 size 4467067520
model-00006-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:0f93caec1f5b99ab8e6a3357376ca48334d92baf3e93b81af0618aadf13e25e8
 size 4718792240
model-00007-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:bd526a0e9eebfa8ae7db712fb08b158f6bf9204a86655a84529b57b32890a87e
 size 4467067536
model-00008-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4a0d8f0863b022ebdaa00cd6669a806e1f6d3ad1eab3b61cd6f6cc1e276dfa44
 size 4467067528
model-00009-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:791d96e11698d6defbed904f9bb7b1b61ec8d761934b6ee55576eb7c9850dfbe
 size 4718792240
model-00010-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3d04bb4cc9dd226744d175d940eee7800bb9e0c835021636d36e9210ced1a606
 size 4467067536
model-00011-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ecdbdbf35515f5df5cb386ffeb313f346fe472a006d6d557f4f1427c9b19d106
 size 4467067528
model-00012-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:dbe70de1b9c018870bfcda170fe62f7ca85785edec9634d017240be6be3efb7c
 size 4718792240
model-00013-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:dcb9ecafcad8f50442299b25eb7bd43453e91d6e6dd6711f80c5f77f9e85c9a1
 size 4467067536
model-00014-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:274e615c668079e73b14588de6d97fedd372bb295e3d07503ceeb21345d72bfa
 size 4467067528
model-00015-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4330dc8d27dc0f7c7f32f0a767eec88f4cd96caf0905f939209444cfb3ed80d3
 size 4718792240
model-00016-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:70071dca4a22c352aaea3f9dc0a843a8d29c9d1beb4dbddcab006b0ef54580ae
 size 4467067536
model-00017-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ae6ae74a27d664ecb66a1192f8987f2d90c1c40d9ad479ad278da44fcd10230f
 size 4467067528
model-00018-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:fb6c93823a17ec8ca736288124e24d3bf04e4b13f7cb839c8e5f74b38110dabc
 size 4718792240
model-00019-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:75a10e5fcee973e6a885d56bf3acf069b6106a93989d4e164289adec11e55ab6
 size 4467067536
model-00020-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ec5ba2b627c55bca604450133ddc9b2f1f4025940ac02d3dd1434ff8dc45d1b4
 size 4467067528
model-00021-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:af0b2e37c7d92e5d861395768be33c37cc8336c144fd2a282b30aed502c738ed
 size 4718792240
model-00022-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6090acbe1d24e65e74dd6dd980d65b63f774d8b80cb0f68378efff7cf25cd70d
 size 4467067536
model-00023-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6112636230e1ade0c039c651a54e5cb1fc83aa49c36f87846bb6b7226a290c27
 size 4467067528
model-00024-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:c46c9b7725fb7cdc9415e18bc593e3741c3a53d84a2435afe2be1dca3bd4b21b
 size 4718792240
model-00025-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:1581a8bacbb610fcd0df2b34d71e145050742bd83e564f15c7a9fe614e8c3228
 size 4467067536
model-00026-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ee3ad8ad412cfab585a290ed71cfabe8f8fe3314d45f0038128a6f4f0a547776
 size 4467067528
model-00027-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:52822419e1979666657bd72c8fa213ff47e5d0e4f182007caa96fff6e12f7fcc
 size 4718792240
model-00028-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:71464699e194d19280e74cd0d458680338ef69cfc4ff67f5fdd49ce41ec6c72e
 size 3649173440
model-00029-of-00029.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:32bd36e52e4635532e9bf7174716abe5540e1227b0aa67bc357d2b98ca754193
 size 2055209088