---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
pipeline_tag: text-generation
---
# Wukong-0.1-Mistral-7B-v0.2

Join Our Discord! https://discord.gg/cognitivecomputations 

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/655dc641accde1bbc8b41aec/xOe1Nb3S9Nb53us7_Ja3s.jpeg)

Wukong-0.1-Mistral-7B-v0.2 is a dealigned chat finetune of the original fantastic Mistral-7B-v0.2 model by the Mistral team.

This model was trained on teknium's OpenHermes-2.5 dataset, code datasets from Multimodal Art Projection https://m-a-p.ai, and the Dolphin dataset from Cognitive Computations https://erichartford.com/dolphin 🐬

This model was trained for 3 epochs on 4x RTX 4090 GPUs.
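Below is a minimal sketch of loading the model for text generation with the Transformers library. The repo id `cognitivecomputations/Wukong-0.1-Mistral-7B-v0.2`, the prompt, and the sampling settings are assumptions for illustration, not part of this card.

```python
# Minimal sketch: load Wukong-0.1-Mistral-7B-v0.2 with Transformers for text generation.
# The repo id and generation settings below are assumptions, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/Wukong-0.1-Mistral-7B-v0.2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision so a 7B model fits on a single 24 GB GPU
    device_map="auto",
)

prompt = "Write a short poem about the Monkey King."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```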

# Example Outputs

TBD

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)