Update README.md
README.md CHANGED
@@ -36,12 +36,11 @@ inference:
 
 ## Summary
 
-Minueza-32M-Base is a foundation model with 32 million parameters trained from scratch.
-
-- [Minueza-32M-
-- [Minueza-32M-Chat](https://huggingface.co/Felladrin/Minueza-32M-Chat): A version of the base model fine-tuned on conversational datasets.
+Minueza-32M-Base is a foundation model with 32 million parameters, trained from scratch on a large corpus of English text.
+
+It's available in the following formats: [Safetensors](https://huggingface.co/Felladrin/Minueza-32M-Base), [GGUF](https://huggingface.co/Felladrin/gguf-Minueza-32M-Base), and [ONNX](https://huggingface.co/Felladrin/onnx-Minueza-32M-Base).
+
+It's released alongside a version fine-tuned on conversational datasets: [Minueza-32M-Chat](https://huggingface.co/Felladrin/Minueza-32M-Chat).
 
 ## Intended Uses
 
@@ -68,37 +67,9 @@ The model was trained on a subset of each of the following non-synthetic dataset
 
 The subsets were interleaved to form the final training corpus of approximately 650 million tokens.
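For reference, this kind of mixing is what the `datasets` library's `interleave_datasets` helper provides. A minimal sketch; the dataset names and proportions below are illustrative placeholders, not the actual recipe:

```python
from datasets import interleave_datasets, load_dataset

# Placeholder corpora; the actual subsets are listed earlier in the README.
web = load_dataset("placeholder/web-corpus", split="train", streaming=True)
books = load_dataset("placeholder/books-corpus", split="train", streaming=True)

# Sample from each subset with fixed probabilities so the sources
# stay mixed throughout the training stream.
corpus = interleave_datasets([web, books], probabilities=[0.8, 0.2], seed=42)
```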
 
-## Usage
-
-This is a pre-trained foundation model. For your task, you will likely want to perform application-specific fine-tuning.
-
-Also note that this model was trained on internet text data, which may contain biases, offensive or inappropriate content, and may produce incorrect or irrelevant responses. No evaluation has been conducted, so use with care.
-
-Having that said, here's how you can run it:
-
-```python
-from transformers import pipeline
-
-generate = pipeline("text-generation", "Felladrin/Minueza-32M-Base")
-
-prompt = "The best way to improve your health is"
-
-output = generate(
-    prompt,
-    max_new_tokens=256,
-    do_sample=True,
-    temperature=0.72,
-    top_p=0.73,
-    top_k=50,
-    repetition_penalty=1.176,
-)
-
-print(output[0]["generated_text"])
-```
-
 ## Model Architecture
 
-Trained on a context window of 2048 tokens, this is a transformer model with the […]
+This is a transformer model with the Mistral architecture, trained on a context window of 2048 tokens.
 
 | Configuration           | Value |
 | :---------------------- | :---- |
@@ -110,7 +81,7 @@ Trained on a context window of 2048 tokens, this is a transformer model with the
 | num_key_value_heads     | 4     |
 | vocab_size              | 32002 |
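The full configuration can be inspected straight from the Hub; a quick check, assuming `transformers` is installed:

```python
from transformers import AutoConfig

# Fetch the model's configuration without downloading the weights.
config = AutoConfig.from_pretrained("Felladrin/Minueza-32M-Base")

print(config.model_type)           # "mistral"
print(config.num_key_value_heads)  # 4, matching the table above
print(config.vocab_size)           # 32002, matching the table above
```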
 
-
+Pretraining used the following hyperparameters and frameworks:
 
 | Hyperparameter          | Value |
 | :---------------------- | :---- |
@@ -122,7 +93,6 @@ Trained on a context window of 2048 tokens, this is a transformer model with the
 | total_train_batch_size  | 8     |
 | optimizer               | Adam with betas=(0.9,0.999) and epsilon=1e-08 |
 | lr_scheduler_type       | linear |
-| num_epochs              | 1.0   |
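For reference, the rows visible in this diff map onto `transformers.TrainingArguments` roughly as follows; a partial sketch only (the output path is a placeholder, and `total_train_batch_size = 8` is assumed to come from a single device):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="minueza-32m-base",  # placeholder
    per_device_train_batch_size=8,  # total_train_batch_size = 8, single device assumed
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=1.0,           # the num_epochs row removed in this commit
)
```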
 
 | Framework    | Version |
 | :----------- | :------ |
@@ -131,6 +101,60 @@ Trained on a context window of 2048 tokens, this is a transformer model with the
 | Datasets     | 2.16.1  |
 | Tokenizers   | 0.15.1  |
 
+## Fine-tuning
+
+The recommended settings for fine-tuning this model are as follows.
+
+For Supervised Fine-Tuning:
+
+| Hyperparameter          | Value |
+| :---------------------- | :---- |
+| learning_rate           | 2e-5  |
+| total_train_batch_size  | 24    |
+| max_seq_length          | 2048  |
+| weight_decay            | 0     |
+| warmup_ratio            | 0.02  |
117 |
+
|
118 |
+
For Direct Preference Optimization:
|
119 |
+
|
120 |
+
| Hyperparameter | Value |
|
121 |
+
| :-------------------------- | :-------------------------------------------- |
|
122 |
+
| learning_rate | 7.5e-7 |
|
123 |
+
| total_train_batch_size | 6 |
|
124 |
+
| max_length | 2048 |
|
125 |
+
| max_prompt_length | 1536 |
|
126 |
+
| max_steps | 200 |
|
127 |
+
| weight_decay | 0 |
|
128 |
+
| warmup_ratio | 0.02 |
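Similarly, a minimal sketch with TRL's `DPOTrainer`; the dataset and output path are placeholders, and DPO is normally applied on top of an SFT model (for example, Minueza-32M-Chat):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# A preference dataset with "prompt", "chosen", and "rejected" columns (placeholder).
dataset = load_dataset("placeholder/dpo-dataset", split="train")

model = AutoModelForCausalLM.from_pretrained("Felladrin/Minueza-32M-Chat")
tokenizer = AutoTokenizer.from_pretrained("Felladrin/Minueza-32M-Chat")

args = TrainingArguments(
    output_dir="minueza-32m-dpo",   # placeholder
    learning_rate=7.5e-7,
    per_device_train_batch_size=6,  # total_train_batch_size = 6, single device assumed
    max_steps=200,
    weight_decay=0.0,
    warmup_ratio=0.02,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL keeps a frozen copy of `model` as the reference
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=2048,
    max_prompt_length=1536,
)
trainer.train()
```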
+
+## Usage
+
+This is just a base model. For your task, you will likely want to perform application-specific fine-tuning as recommended above.
+
+Also note that this model was trained on internet text data, which may contain biases, offensive or inappropriate content, and may produce incorrect or irrelevant responses. No evaluation has been conducted, so use with care.
+
+That said, here's how you can run it:
+
+```python
+from transformers import pipeline
+
+# Load the base model as a text-generation pipeline.
+generate = pipeline("text-generation", "Felladrin/Minueza-32M-Base")
+
+prompt = "The best way to improve your health is"
+
+# Suggested sampling parameters for this base model.
+output = generate(
+    prompt,
+    max_new_tokens=256,
+    do_sample=True,
+    temperature=0.72,
+    top_p=0.73,
+    top_k=50,
+    repetition_penalty=1.176,
+)
+
+print(output[0]["generated_text"])
+```
+
 ## License
 
 This model is licensed under the [Apache License 2.0](https://huggingface.co/Felladrin/Minueza-32M-Base/resolve/main/license.txt).