---
datasets:
- HuggingFaceH4/no_robots
language:
- en
license: cc-by-nc-4.0
---
# Good Robot 2 🤖
The model "Good Robot" had one simple goal in mind: to be a good instruction-following model that doesn't talk like ChatGPT.
Built upon the Mistral 7b 0.2 base, this model aims to provide responses that are as human-like as possible, thanks to some DPO training using the (for now, private) `minerva-ai/yes-robots-dpo` dataset.
HuggingFaceH4/no_robots was used as the base for generating a custom dataset of DPO pairs.
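For illustration, a DPO pair simply couples one prompt with a preferred ("chosen") and a dispreferred ("rejected") completion. The snippet below is a minimal, hypothetical sketch of what such a record could look like when built from a no_robots-style prompt; the field names follow the common prompt/chosen/rejected convention, and the actual pipeline behind `yes-robots-dpo` is not public.

```python
# Hypothetical DPO preference record - not taken from the real dataset.
pair = {
    "prompt": "Write a short note reminding my roommate to water the plants.",
    # Human-like response: the style DPO training should reinforce.
    "chosen": "Hey - I'm out early tomorrow, could you water the plants? Thanks!",
    # GPT-flavored response: the style DPO training should move away from.
    "rejected": "Certainly! Here is a note you could use: Dear Roommate, I hope this message finds you well...",
}

# A full training set would just be a list of such records.
dpo_pairs = [pair]
```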
It should follow instructions and be generally as smart as a typical Mistral model - just not as soulless and full of GPT slop.
Changes from the original [good-robot](https://huggingface.co/kubernetes-bad/good-robot) model:
- Mistral 7b-0.2 base (32k native context, no SWA)
- ChatML prompt format
- Trained using the GaLore method (a rough sketch of the setup follows below)
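As a rough illustration of what "trained using GaLore" means in practice, here is a minimal sketch using the GaLore integration in 🤗 transformers (it requires the `galore-torch` package). The actual training configuration for this model is not published, so every value below is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical settings - not the real recipe used for Good Robot 2.
args = TrainingArguments(
    output_dir="good-robot-2-galore",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    optim="galore_adamw",                  # GaLore: low-rank projection of gradients
    optim_target_modules=["attn", "mlp"],  # apply GaLore to attention and MLP layers
)
# `args` would then be passed to a Trainer (or trl's DPOTrainer)
# together with the model, tokenizer, and the DPO dataset.
```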
## Prompt Format:
ChatML
```
<|im_start|>system
System message<|im_end|>
<|im_start|>user
User message<|im_end|>
<|im_start|>assistant
```
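If you use 🤗 transformers, the tokenizer's chat template should render this ChatML layout for you. A minimal sketch, assuming the repo id below and that the model ships a ChatML chat template:

```python
from transformers import AutoTokenizer

model_id = "kubernetes-bad/good-robot-2"  # assumed repo id - adjust to the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "System message"},
    {"role": "user", "content": "User message"},
]

# Renders the messages into the ChatML format shown above,
# ending with the <|im_start|>assistant header for generation.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```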
## Credits:
Model made in collaboration with [Gryphe](https://huggingface.co/Gryphe).
## Training Data:
- [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots)
- [MinervaAI/yes-robots-dpo](https://huggingface.co/MinervaAI) (currently private)
- private datasets with common GPTisms
## Limitations:
While I did my best to minimize GPTisms, no model is perfect, and generated content may still contain GPT's common phrases - I suspect that's because they are ingrained in the Mistral base model itself.
## License:
cc-by-nc-4.0