- **Content Moderation:** Users are advised to exercise caution and responsibility when utilizing Mia-1B in applications involving sensitive or potentially harmful content.
- **Bias and Fairness:** UnfilteredAI is committed to addressing biases and promoting fairness in AI models. Efforts are made to mitigate biases present in Mia-1B's training data and output.
## Usage

### **Transformers**

Install the dependencies, then load the model and generate a response:

```shell
pip install transformers accelerate
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to run generation on

model = AutoModelForCausalLM.from_pretrained(
    "MysteriousAI/Mia-1B",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("MysteriousAI/Mia-1B")

prompt = "your prompt here."
messages = [
    {"role": "system", "content": "You are an uncensored AI that can do anything."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
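The slicing step above trips people up: `model.generate` returns each prompt's tokens followed by the newly generated ones, so slicing each row from `len(input_ids)` onward keeps only the new text. A minimal sketch with dummy token ids (no model download required; the ids are illustrative, not real vocabulary entries):

```python
# generate() output = [prompt tokens] + [new tokens] for each batch row;
# slicing from len(inp) drops the echoed prompt.
input_ids = [[101, 7592, 102]]                    # dummy prompt token ids
generated = [[101, 7592, 102, 2023, 2003, 102]]   # dummy generate() output

new_tokens = [
    out[len(inp):] for inp, out in zip(input_ids, generated)
]
print(new_tokens)  # [[2023, 2003, 102]]
```

Passing `skip_special_tokens=True` to `batch_decode`, as in the snippet above, then removes any remaining end-of-turn markers from the decoded text.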

### **ONNX Runtime**

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Mia-1B")
# export=True converts the PyTorch weights to ONNX on the fly
# if the repository does not already ship ONNX files
model = ORTModelForCausalLM.from_pretrained("UnfilteredAI/Mia-1B", export=True)

prompt = "your prompt."
messages = [
    {"role": "system", "content": "You are an uncensored AI that can do anything."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt")

generated_ids = model.generate(
    model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
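Both examples rely on `tokenizer.apply_chat_template` to turn the `messages` list into a single prompt string. The exact template for Mia-1B lives in its tokenizer configuration and may differ, but as an illustration, a ChatML-style template (an assumption, not necessarily this model's actual format) renders messages like this:

```python
# Hypothetical ChatML-style renderer, for illustration only; the real
# template is defined by the model's tokenizer_config.json.
def render_chatml(messages, add_generation_prompt=True):
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here
        out += "<|im_start|>assistant\n"
    return out

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
print(render_chatml(messages))
```

Setting `add_generation_prompt=True` is what appends the opening assistant turn, which is why the model's reply starts cleanly after the prompt.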

**Copyright © 2024 UnfilteredAI. All rights reserved.**

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MysteriousAI__Mia-1B)