Model Description

This is a chat model specialized for SQL, fine-tuned from 'Mistral-7B-Instruct-v0.1' on the 'sql-create-context' dataset.

This is a Mistral-7B model specialized for SQL. I built it for writing my day-to-day queries; feel free to give it a try.
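
A minimal usage sketch is shown below. It assumes the standard transformers + PEFT loading path for a LoRA adapter on top of 'mistralai/Mistral-7B-Instruct-v0.1'; the prompt layout (a CREATE TABLE context plus a natural-language question inside Mistral's [INST] tags) is an assumption based on the 'sql-create-context' dataset, not a documented template for this adapter.

```python
# Minimal sketch: load the base model in 4-bit, attach this adapter, and ask a SQL question.
# The prompt layout is an assumption (sql-create-context style), not a documented template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "kanxxyc/Mistral-7B-SQLTuned"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = (
    "[INST] Given the context, write a SQL query that answers the question.\n"
    "Context: CREATE TABLE employees (id INT, name TEXT, salary INT)\n"
    "Question: Who are the three highest paid employees? [/INST]"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```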

Training procedure

The following bitsandbytes quantization config was used during training:

  • quant_method: bitsandbytes
  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float16
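
For reference, here is a sketch of the equivalent BitsAndBytesConfig reconstructed from the settings listed above; the field names follow the transformers API, and the values simply mirror the list.

```python
# Equivalent BitsAndBytesConfig, reconstructed from the list above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```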

Framework versions

  • PEFT 0.6.0.dev0
