---
license: apache-2.0
tags:
  - text-to-sql
  - llama3
  - lora
  - sql-generation
  - code-generation
library_name: transformers
base_model: unsloth/Meta-Llama-3.1-8B
pipeline_tag: text-generation
---
# Llama3 SQL Translator

Llama3 SQL Translator is a LoRA fine-tune of the 8B-parameter Llama 3.1 model. It translates natural language database queries into executable SQL statements and provides human-readable explanations. The model streamlines query generation for structured databases and enables non-technical users to interact with relational data more effectively.
## Table of Contents

- Model Details
- Intended Uses
- Limitations & Warnings
- Training Overview
- Evaluation
- Usage Example
- Technical Specifications
- Citation & Contact
 
## Model Details

- Model Type: Causal language model
- Architecture: Llama 3.1 (8B parameters)
- Fine-Tuning Method: Parameter-efficient fine-tuning (LoRA)
- Base Model: unsloth/Meta-Llama-3.1-8B
- Language: English
- Tokenizer: Llama 3 tokenizer (compatible with Meta's original)
 
## Intended Uses

### Primary Use

- Translating natural language prompts into valid SQL queries.
- Providing explanations of the generated SQL logic.
 
### Example Input

```
Database schema: CREATE TABLE employees (id INT, name TEXT, salary FLOAT);
Prompt: List all employees with salary over 50000.
```

### Example Output

```
SQL: SELECT name FROM employees WHERE salary > 50000;
Explanation: This query retrieves all employee names where the salary is greater than 50000.
```
### Not Intended For

- General chat, Q&A, or other non-database-related tasks.
- Use without human review in critical systems or production databases.
 
## Limitations & Warnings

- Schema Dependency: The model relies heavily on accurate and complete schema descriptions.
- SQL Safety: Generated SQL should never be executed without manual validation; injection risks must be mitigated (a minimal validation sketch follows this list).
- Complex Queries: Deeply nested subqueries, advanced joins, and vendor-specific SQL dialects may produce suboptimal results.
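As a concrete illustration of the validation step above, here is a minimal pre-execution guard. It is a sketch, not part of the model and not an exhaustive safety filter; the keyword list and the single-statement rule are illustrative assumptions.

```python
# Minimal pre-execution guard (illustrative sketch, not an exhaustive filter).
# Assumes generated SQL should be a single read-only SELECT statement.
FORBIDDEN_KEYWORDS = {"insert", "update", "delete", "drop", "alter", "truncate", "grant"}

def is_read_only_select(sql: str) -> bool:
    """Reject anything that is not a single SELECT statement."""
    statements = [s for s in sql.strip().rstrip(";").split(";") if s.strip()]
    if len(statements) != 1:
        return False  # stacked statements are a common injection vector
    tokens = statements[0].lower().split()
    return bool(tokens) and tokens[0] == "select" and not FORBIDDEN_KEYWORDS.intersection(tokens)

assert is_read_only_select("SELECT name FROM employees WHERE salary > 50000;")
assert not is_read_only_select("DROP TABLE employees;")
```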
 
## Training Overview

- The model was trained on a large-scale synthetic dataset of examples pairing natural language instructions and database schemas with corresponding SQL queries and step-by-step explanations. The dataset covers a wide range of relational data scenarios and query types, including filtering, aggregation, joins, and nested logic.
- Fine-tuned on a single A100 GPU (see the configuration sketch below) using:
  - `max_seq_length=1024`, `batch_size=2`, `gradient_accumulation_steps=2`
  - LoRA with 4-bit quantization
  - `packing=True` to maximize throughput
  - Trained for 1 epoch (~5 hours)
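A minimal sketch of how such a run might be configured with Unsloth and TRL, using the hyperparameters listed above. The dataset file, LoRA rank, and target modules are assumptions for illustration; the card does not specify them.

```python
# Hypothetical reconstruction of the fine-tuning setup with Unsloth + TRL.
# Only the hyperparameters listed above come from the card; everything marked
# "assumed" is an illustrative guess.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=1024,
    load_in_4bit=True,  # LoRA with 4-bit quantization
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank (not stated on the card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
dataset = load_dataset("json", data_files="train.jsonl", split="train")  # hypothetical file

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # pre-formatted instruction/schema/SQL strings
    max_seq_length=1024,
    packing=True,  # pack short examples to maximize throughput
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```

Depending on your TRL version, `dataset_text_field`, `max_seq_length`, and `packing` may need to be passed via `SFTConfig` rather than directly to `SFTTrainer`.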
 
 
## Evaluation

| Metric | Result |
|---|---|
| SQL compilation success | > 95% |
| Manual output quality | ~90% or higher |
| Explanation clarity | High |

Note: Evaluation was based on random sampling and manual review (a sketch of one possible compilation check is shown below); formal benchmarks will be added later.
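The card does not state how compilation success was measured. One plausible check, shown here purely as an assumption-laden sketch, is to build the schema in an in-memory SQLite database and ask SQLite to plan the generated query without running it.

```python
# Hypothetical compilation check: builds the schema in an in-memory SQLite
# database, then asks SQLite to plan the query (without executing it).
import sqlite3

def compiles(schema_sql: str, query_sql: str) -> bool:
    """Return True if SQLite can plan the query against the given schema."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema_sql)                    # create the tables
        conn.execute(f"EXPLAIN QUERY PLAN {query_sql}")   # plan, don't run
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

print(compiles("CREATE TABLE employees (id INT, name TEXT, salary FLOAT);",
               "SELECT name FROM employees WHERE salary > 50000"))  # True
```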
## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "happyhackingspace/llama3-sql-translator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alpaca-style prompt format: schema in the instruction, question in the input
prompt = """Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.

### Instruction:
Database schema: CREATE TABLE sales (id INT, product TEXT, price FLOAT);

### Input:
Prompt: Show all products priced over 100.

### Response:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
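Since the decoded text echoes the full prompt, the generated SQL and explanation can be recovered by splitting on the response marker. Continuing the example above, this extraction step is a convenience, not part of the model's API:

```python
# Strip the echoed prompt: everything after "### Response:" is model output.
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
generated = response.split("### Response:")[-1].strip()
print(generated)  # expected shape: "SQL: SELECT ...;\nExplanation: ..."
```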
## Technical Specifications

- Architecture: Llama 3.1 (8B)
- Quantization: 4-bit via bitsandbytes (see the loading sketch below)
- Fine-tuning: LoRA
- Frameworks: Transformers, TRL, PEFT, Unsloth
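For memory-constrained inference, the model can be loaded in 4-bit with bitsandbytes, matching the quantization used during training. The compute dtype and quant type below are common defaults chosen as assumptions, not values stated by the card.

```python
# Hypothetical 4-bit loading sketch via bitsandbytes; nf4 and bfloat16 are
# assumed common defaults, not settings specified by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "happyhackingspace/llama3-sql-translator"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```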
 
## Citation & Contact

```bibtex
@misc{llama3_sql_translator_2025,
  title = {Llama3 SQL Translator},
  author = {happyhackingspace},
  year = {2025},
  howpublished = {\url{https://huggingface.co/happyhackingspace/llama3-sql-translator}}
}
```

Contact: For questions or contributions, feel free to open an issue on the Hugging Face model page.