---
license: cc-by-nc-4.0
datasets:
- KnutJaegersberg/trilobite
---

This model was fine-tuned on AI-filtered subsets of the GPT-4-based portion of the Dolphin dataset and of EvolInstruct V2.
It has not been explicitly aligned to positive, negative, or bureaucratically prescribed value systems.
It might kill us all! Time to shit your pants, regulators. I literally put black goo on Dolphin-7B sperm, which then fertilized Evolved Instructions...
What's different is evil... ;)
I intend to train 3 model sizes.
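If you want to inspect the training data listed in this card's metadata, a minimal sketch with the `datasets` library follows; the split name is an assumption, not something stated in this card:

```python
from datasets import load_dataset

# Load the dataset listed in this card's metadata.
# The "train" split name is an assumption.
ds = load_dataset("KnutJaegersberg/trilobite", split="train")
print(ds[0])
```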
Prompt Example:
```
### System:
You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
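A minimal usage sketch with Hugging Face `transformers`, wiring the prompt format above into generation. The generation settings are illustrative, and `trust_remote_code` is a precaution that may or may not be needed depending on the base architecture:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/deacon-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Build the prompt exactly as shown in the example above.
prompt = (
    "### System:\n"
    "You are an AI assistant. User will you give you a task. Your goal is to "
    "complete the task as faithfully as you can. While performing the task "
    "think step-by-step and justify your steps.\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,   # illustrative settings, not tuned for this model
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```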
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__deacon-3b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 34.2 |
| ARC (25-shot) | 39.68 |
| HellaSwag (10-shot) | 66.42 |
| MMLU (5-shot) | 27.13 |
| TruthfulQA (0-shot) | 36.07 |
| Winogrande (5-shot) | 64.64 |
| GSM8K (5-shot) | 0.38 |
| DROP (3-shot) | 5.06 |
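To reproduce individual numbers locally, a hedged sketch using EleutherAI's lm-evaluation-harness (the leaderboard's backend); the task name, few-shot count, and Python API shown are assumptions about recent harness versions and may not match the exact leaderboard setup:

```python
import lm_eval

# Evaluate ARC (25-shot, as on the leaderboard) for this model.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=KnutJaegersberg/deacon-3b",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])
```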