---
language:
- it
license: cc-by-nc-4.0
tags:
- sft
- it
- mistral
- chatml
- axolotl
prompt_template: >-
  <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|>
  <|im_start|>assistant
model-index:
- name: maestrale-chat-v0.4-beta
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/yu0sVwC.png" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Maestrale chat beta ༄
By @efederici and @mferraretto
## Model description
- **Language Model**: Mistral-7B adapted to Italian via continued pre-training on a curated, large-scale, high-quality Italian corpus, then merged with [occiglot](https://huggingface.co/occiglot/occiglot-7b-eu5).
- **Fine-Tuning**: SFT performed on 1.7M conversations/instructions for 2 epochs.
- **DPO**: Aligned with DPO on multiple datasets.
**v0.4**
- Agent
- Improved truthfulness
- Improved Math & Reasoning capabilities
- Mermaid mindmaps
- More Latin translations, poems, ...
This model uses the ChatML prompt format:
```
<|im_start|>system
Sei un assistente utile.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
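The same prompt string can also be produced from the tokenizer's chat template. A minimal sketch, assuming the repository's tokenizer ships a ChatML template (as the Usage section below suggests) and using `{prompt}` as a placeholder:

```python
from transformers import AutoTokenizer

# Minimal sketch: render a ChatML prompt via the tokenizer's chat template.
tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.4-beta")

messages = [
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": "{prompt}"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```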
## Scores
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|------------|------:|------|-----:|--------|-----:|---|-----:|
|hellaswag_it| 1|none | 0|acc |0.5270|± |0.0052|
| | |none | 0|acc_norm|0.7037|± |0.0048|
|arc_it | 1|none | 0|acc |0.1771|± |0.0112|
| | |none | 0|acc_norm|0.5218|± |0.0146|
|m_mmlu_it | 0|none | 5|acc |0.5623|± |0.0043|
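These task names match the Italian tasks in EleutherAI's lm-evaluation-harness. A hypothetical reproduction sketch (the exact harness version and settings behind the table are not stated here, so treat everything below as an assumption):

```python
# Hypothetical reproduction sketch with lm-evaluation-harness (>= 0.4.x);
# the exact settings used for the table above are not documented here.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mii-llm/maestrale-chat-v0.4-beta,dtype=bfloat16",
    tasks=["hellaswag_it", "arc_it"],  # the 0-shot rows in the table
    num_fewshot=0,
)
print(results["results"])

# m_mmlu_it is reported 5-shot, so it would need a separate run with num_fewshot=5.
```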
## Usage
```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GenerationConfig,
    TextStreamer
)
import torch
# Load the tokenizer and the model in 8-bit (requires bitsandbytes)
tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.4-beta")
model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.4-beta", load_in_8bit=True, device_map="auto")

gen = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")  # stop at the ChatML end-of-turn token
)

streamer = TextStreamer(tokenizer, skip_prompt=True)  # print tokens as they are generated
messages = [
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": "{prompt}"}
]
# Render the ChatML prompt, tokenize it and stream the generation
with torch.no_grad():
    temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(temp, return_tensors="pt").to("cuda")
    _ = model.generate(
        **inputs,
        streamer=streamer,
        generation_config=gen
    )
```
## Examples
### Mindmaps
```python
messages = [
    {"role": "system", "content": "Fornisci una mindmap Mermaid sull'argomento in input."},
    {"role": "user", "content": "Argomento: [argomento]"}
]
```
### SQL
```python
schema = "[db schema]"
messages = [
    {"role": "system", "content": f"Sei un assistente SQL e il tuo compito è convertire la domanda dell'utente in codice SQL valido rispetto allo schema del database fornito.\n\nSchema:\n```sql\n{schema}\n```"},
    {"role": "user", "content": "Conta il numero di X prodotti dall'azienda Y"}
]
```
### Article from index
```python
messages = [
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": (
        "Scrivi un articolo a partire dal titolo e dall'indice dei contenuti.\n\n"
        "Titolo: [titolo]\n\n"
        "Indice:\n\n"
        "1. Introduzione\n"
        "2. [heading]\n"
        "..."
    )}
]
```
## Intended uses & limitations
This is a beta version: it is quite `safe` and may refuse to answer toxic questions.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |