---
license: cc-by-sa-4.0
metrics:
  - accuracy
pipeline_tag: text-generation
tags:
  - code
---

A capable language model for text-to-SQL generation for Postgres, Redshift, and Snowflake that is on par with the most capable generalist frontier models.


## Model Description

- **Developed by:** Defog, Inc
- **Model type:** Text to SQL
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** Meta-Llama-3-8B-Instruct

## defog/llama-3-sqlcoder-8b for CTranslate2

This model is a quantized version of defog/llama-3-sqlcoder-8b, converted with int8_float16 quantization, and can be used with CTranslate2.

## Conversion details

The original model was converted in June 2024 with the following command:

```bash
ct2-transformers-converter --model Path\To\Local\meta-llama\Meta-Llama-3-8B-Instruct \
    --quantization int8_float16 --output_dir Meta-Llama-3-8B-Instruct-ct2-int8_float16
```
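
Once converted, the resulting directory can be loaded directly with CTranslate2. A minimal sketch, assuming the output directory above; the `device` and `compute_type` arguments are standard CTranslate2 options, not part of the original conversion command:

```python
import ctranslate2

# Load the converted model from the local output directory.
# device="auto" picks CUDA when available, otherwise CPU.
# compute_type="int8_float16" matches the quantization used above.
generator = ctranslate2.Generator(
    "Meta-Llama-3-8B-Instruct-ct2-int8_float16",
    device="auto",
    compute_type="int8_float16",
)
```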

## How to use

This repository is intended for use with CTranslate2.

### Use with CTranslate2

This example code is adapted from the CTranslate2 Transformers guide and uses the Hugging Face AutoTokenizer.
More detailed information about the generate_batch method can be found in the CTranslate2 documentation for Generator.generate_batch.

```python
import ctranslate2
import transformers

from huggingface_hub import snapshot_download

# Download the quantized model and load it with CTranslate2.
model_id = "SagarKrishna/Llama-3-8B-Text2SQL_Instruct-ct2-int8_float16"
model_path = snapshot_download(model_id)
model = ctranslate2.Generator(model_path)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat template to a prompt string (tokenize=False returns text, not ids).
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop generation at either the EOS token or the Llama 3 end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# CTranslate2 expects string tokens rather than token ids.
# add_special_tokens=False avoids inserting a second BOS token,
# since the chat template already includes <|begin_of_text|>.
input_tokens = tokenizer.convert_ids_to_tokens(
    tokenizer.encode(prompt, add_special_tokens=False)
)

results = model.generate_batch(
    [input_tokens],
    include_prompt_in_result=False,
    max_length=256,
    sampling_temperature=0.6,
    sampling_topp=0.9,
    end_token=terminators,
)
output = tokenizer.decode(results[0].sequences_ids[0])

print(output)
```
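
Since this is a text-to-SQL model, a prompt matching its fine-tuning task will give better results than the chat example above. A sketch with a hypothetical schema and question; the exact prompt wording is an assumption for illustration, not taken from the original card:

```python
# Hypothetical example: the table schema and question below are illustrative only.
messages = [
    {
        "role": "user",
        "content": (
            "Generate a SQL query to answer this question: "
            "How many orders were placed in 2023?\n\n"
            "DDL statements:\n"
            "CREATE TABLE orders (id INT, placed_at DATE, total NUMERIC);"
        ),
    },
]
```

The rest of the pipeline (chat template, tokenization, generate_batch) is unchanged.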

## Ideal prompt and inference parameters

Set the temperature to 0 and disable sampling (i.e., use greedy decoding).
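
In CTranslate2 this corresponds to greedy search. A minimal sketch, reusing `model`, `input_tokens`, and `terminators` from the example above: `sampling_topk=1` selects the single most likely token at each step, which is equivalent to temperature 0 with no sampling.

```python
# Greedy decoding: sampling_topk=1 disables sampling entirely.
results = model.generate_batch(
    [input_tokens],
    include_prompt_in_result=False,
    max_length=256,
    sampling_topk=1,
    end_token=terminators,
)
```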

## Evaluation

This model was evaluated on SQL-Eval, a PostgreSQL-based evaluation framework developed by Defog for testing and aligning model capabilities.

You can read more about the methodology behind SQL-Eval here.