Model from: https://huggingface.co/TheBloke/wizardLM-7B-HF/tree/main

Trained on: https://huggingface.co/datasets/gmongaras/reddit_political_2019

Fine-tuned for about 6,000 steps with a batch size of 8, 2 gradient accumulation steps, and LoRA adapters applied to all layers.
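
Below is a minimal sketch of that fine-tuning setup using the `peft` and `transformers` libraries. The batch size, accumulation steps, step count, base model, and dataset come from this card; the LoRA rank/alpha/dropout, target module names, learning rate, sequence length, and the dataset's `text` column are assumptions for illustration.

```python
# Sketch of the LoRA fine-tuning described above; hyperparameters marked as
# assumptions are illustrative, not the exact values used for this model.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "TheBloke/wizardLM-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adapters on all projection layers ("all layers" per the card);
# r, lora_alpha, and lora_dropout are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

dataset = load_dataset("gmongaras/reddit_political_2019", split="train")

def tokenize(batch):
    # The "text" column name and max_length are assumptions about the dataset.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="wizard_7b_reddit_political_2019",
    per_device_train_batch_size=8,   # batch size of 8 (from the card)
    gradient_accumulation_steps=2,   # 2 accumulation steps (from the card)
    max_steps=6000,                  # ~6,000 steps (from the card)
    learning_rate=2e-4,              # assumption
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```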
