Weni/WeniGPT-2.9.1-Zephyr-7B-zephyr-prompt-binarized-GPTQ

This model is a fine-tuned version of [Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT](https://huggingface.co/Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT), trained on the HuggingFaceH4/ultrafeedback_binarized dataset with the DPO trainer. It is part of the WeniGPT project for Weni.

It achieves the following results on the evaluation set (epoch 1.0):

  • eval_loss: 1.4246530532836914
  • eval_runtime: 87.1748 s
  • eval_samples_per_second: 2.294
  • eval_steps_per_second: 0.574
  • eval_rewards/chosen: 7.902245998382568
  • eval_rewards/rejected: -1.2779319286346436
  • eval_rewards/accuracies: 0.6299999952316284
  • eval_rewards/margins: 9.18017864227295
  • eval_logps/rejected: -333.343017578125
  • eval_logps/chosen: -329.1789855957031
  • eval_logits/rejected: -2.569888114929199
  • eval_logits/chosen: -2.604318141937256

Intended uses & limitations

This model has not been trained to avoid specific instructions.
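
For illustration, a minimal inference sketch is shown below. It assumes a recent `transformers` release with GPTQ support (via the `optimum` and `auto-gptq` packages) and uses the prompt format described in the training procedure section; treat it as a starting point rather than a tested recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weni/WeniGPT-2.9.1-Zephyr-7B-zephyr-prompt-binarized-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ-quantized weights are detected from the repo config; device_map="auto"
# places the layers on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "What is Direct Preference Optimization?"
prompt = f"<|user|>{question}</s><|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```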

Training procedure

Fine-tuning was performed on Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT with the following prompt format:

Prompt:
```
<|user|>{prompt}</s>
```

Chosen:
```
<|assistant|>{chosen}</s>
```

Rejected:
```
<|assistant|>{rejected}</s>
```
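
As a hedged illustration (the actual preprocessing script is not included in this card), a record from HuggingFaceH4/ultrafeedback_binarized can be mapped into this template roughly as follows. The `to_dpo_format` helper is hypothetical, and the field access assumes the dataset's `chosen`/`rejected` columns are lists of chat messages, as in the published dataset:

```python
from datasets import load_dataset

def to_dpo_format(example):
    # Hypothetical mapping into the prompt/chosen/rejected template above.
    # The last message in each list is the assistant reply.
    return {
        "prompt": f"<|user|>{example['prompt']}</s>",
        "chosen": f"<|assistant|>{example['chosen'][-1]['content']}</s>",
        "rejected": f"<|assistant|>{example['rejected'][-1]['content']}</s>",
    }

dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
dataset = dataset.map(to_dpo_format)
```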

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • gradient_accumulation_steps: 4
  • num_gpus: 1
  • total_train_batch_size: 16
  • optimizer: AdamW
  • lr_scheduler_type: cosine
  • num_steps: 112
  • quantization_type: gptq
  • LoRA:
      • bits: 4
      • use_exllama: True
      • device_map: auto
      • use_cache: False
      • lora_r: 16
      • lora_alpha: 16
      • lora_dropout: 0.05
      • bias: none
      • target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
      • task_type: CAUSAL_LM
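A hedged reconstruction of the corresponding training setup is sketched below. The actual training script is not included in this card, and the TRL/PEFT APIs shown here vary between library versions; hyperparameter values are taken from the list above, while paths and omitted arguments are assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "Weni/WeniGPT-2.2.3-Zephyr-7B-merged-LLM_Base_2.0.3_SFT"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
# NOTE: records still need to be mapped to the prompt/chosen/rejected strings
# shown in the formatting sketch above before training.

# LoRA settings from the hyperparameter list above.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="wenigpt-dpo",       # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # 4 x 4 x 1 GPU = total train batch size 16
    max_steps=112,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
)

trainer = DPOTrainer(
    model,
    ref_model=None,          # with a peft_config, TRL derives the reference model
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```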

Training results

Framework versions

Hardware

  • Cloud provider: runpod.io