
Gemma 3 Text-to-SQL

A LoRA-fine-tuned adapter for Gemma 3 that converts natural language questions into SQL queries, with support for database schema context.

License: Apache 2.0

Overview

This model is a LoRA adapter for Gemma 3 27B, fine-tuned to translate natural language into SQL. Users describe the query they want in plain English and receive the corresponding SQL code in return.

Key capabilities:

  • Converts natural language questions to SQL queries
  • Understands database schema context when provided
  • Generates clean, optimized SQL with proper table joins
  • Handles complex queries including aggregations, filters, and sorting

Model Details

  • Base Model: lmstudio-community/gemma-3-27b-it-GGUF
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • LoRA Configuration (shown as code after this list):
    • Rank: 16
    • Alpha: 16
    • Dropout: 0.05
    • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Training Data: Synthetic and curated text-to-SQL datasets
  • Model Size: Base (27B parameters) + Adapter (~70MB)
  • Format: SafeTensors
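
For convenience, here is the configuration above expressed as a peft LoraConfig. This is a sketch: task_type is an assumption for causal-LM fine-tuning and is not stated on this card.

from peft import LoraConfig

# LoRA settings copied from the list above; task_type is an assumption
# (causal language modeling), not something stated on this card
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)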

Usage

Using Transformers Library

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer.
# PEFT adapters attach to standard Transformers weights, so load the original
# safetensors release of Gemma 3 rather than the GGUF build listed above.
model_id = "google/gemma-3-27b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate; shards the 27B model across devices
)

# Load the LoRA adapter
adapter_path = "parole-study-viper/gemma-3-text-to-sql"  # Replace with your HF model path
model = PeftModel.from_pretrained(model, adapter_path)

# Format prompt
question = "Find all customers who made a purchase over $1000 in the last month"
prompt = f"Convert the following natural language query to SQL: {question}"

# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,  # pass input_ids together with the attention mask
    max_new_tokens=200,
    temperature=0.7,
    do_sample=True,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
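
Because schema context markedly improves results (see Limitations), it helps to prepend the relevant CREATE TABLE statements to the question. A minimal sketch of such a prompt builder; the exact template wording is an assumption, not necessarily what the adapter was trained on:

# Hypothetical prompt builder: prepends a schema so the model can resolve
# table and column names (the template wording is an assumption)
def build_prompt(schema: str, question: str) -> str:
    return (
        "Convert the following natural language query to SQL.\n"
        f"Schema:\n{schema}\n\n"
        f"Query: {question}"
    )

schema = """CREATE TABLE customers (
  customer_id INT PRIMARY KEY,
  name VARCHAR(100),
  city VARCHAR(100),
  state CHAR(2)
);"""
print(build_prompt(schema, "Find all customers in New York"))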

Using MLX (Apple Silicon)

For Apple Silicon users, you can use MLX for efficient inference:

import subprocess

# Paths. MLX cannot load GGUF weights; point --model at the original
# safetensors release (or an MLX-converted build from mlx-community).
# --adapter-path expects a local directory containing the adapter weights,
# not a single .safetensors file.
model_path = "google/gemma-3-27b-it"
adapter_dir = "path/to/gemma-3-text-to-sql-adapter"

# Run generation through the mlx_lm CLI
prompt = "Convert the following natural language query to SQL: Find all customers in New York"
subprocess.run([
    "python", "-m", "mlx_lm.generate",
    "--model", model_path,
    "--adapter-path", adapter_dir,
    "--prompt", prompt,
    "--max-tokens", "200",
    "--temp", "0.7",
])

API Usage with Inference Endpoint

You can also use the Hugging Face Inference API:

import os
import requests

API_URL = "https://api-inference.huggingface.co/models/parole-study-viper/gemma-3-text-to-sql"
API_TOKEN = os.environ["HF_TOKEN"]  # your Hugging Face access token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
    
output = query({
    "inputs": "Convert to SQL: List all customers who placed orders in the last 30 days",
    "parameters": {"max_new_tokens": 200, "temperature": 0.7}
})
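
The text-generation endpoint usually returns a list of objects with a generated_text field; assuming that shape:

# Assumes the standard text-generation response shape; adjust if your
# endpoint returns something different
print(output[0]["generated_text"])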

Example Inputs and Outputs

Example 1: Basic Query

Input:

Find all customers in New York

Output:

SELECT * 
FROM customers 
WHERE state = 'NY' OR city = 'New York';

Example 2: Complex Query with Joins

Input:

List the top 5 products by revenue in the last quarter

Output:

SELECT p.product_id, p.product_name, SUM(oi.quantity * oi.unit_price) as revenue
FROM products p
JOIN order_items oi ON p.product_id = oi.product_id
JOIN orders o ON oi.order_id = o.order_id
WHERE o.order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 3 MONTH)
GROUP BY p.product_id, p.product_name
ORDER BY revenue DESC
LIMIT 5;

Example 3: With Schema Context

Input:

Schema:
CREATE TABLE employees (
  employee_id INT PRIMARY KEY,
  name VARCHAR(100),
  department VARCHAR(100),
  salary INT,
  hire_date DATE
);

Query: Find the average salary by department

Output:

SELECT department, AVG(salary) as average_salary
FROM employees
GROUP BY department
ORDER BY average_salary DESC;

Training Details

This model was fine-tuned using LoRA, a parameter-efficient fine-tuning technique that significantly reduces the number of trainable parameters while maintaining performance. The training process involved:

  1. Dataset Preparation: A combination of synthetic and curated text-to-SQL pairs
  2. Training Configuration (see the example invocation after this list):
    • Learning Rate: 5e-5
    • Batch Size: 8
    • Training Steps: 1000
    • LoRA Rank: 16
    • Gradient Checkpointing: True
  3. Hardware: Apple Silicon with MLX acceleration
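
For reference, these settings roughly correspond to the following mlx-lm LoRA invocation. This is a sketch: flag names follow recent mlx-lm releases, and the dataset path is a placeholder.

import subprocess

# Hypothetical training command matching the configuration above; the data
# directory (train.jsonl/valid.jsonl) is a placeholder, and flag names may
# differ across mlx-lm versions
subprocess.run([
    "python", "-m", "mlx_lm.lora",
    "--model", "google/gemma-3-27b-it",
    "--train",
    "--data", "path/to/text-to-sql-data",
    "--batch-size", "8",
    "--iters", "1000",
    "--learning-rate", "5e-5",
    "--grad-checkpoint",
])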

Limitations

  • The model performs best when the database schema is included in the prompt
  • Complex nested queries may require refining the prompt
  • Performance varies based on domain-specific terminology
  • The model may occasionally generate SQL syntax that is specific to certain database systems; for example, DATE_SUB(CURRENT_DATE(), INTERVAL 3 MONTH) in Example 2 is MySQL syntax, where PostgreSQL would use CURRENT_DATE - INTERVAL '3 months'

Ethical Considerations

This model is designed as a productivity tool for database queries and should be used responsibly:

  • Always review and test generated SQL before executing it in production environments (a dry-run sketch follows this list)
  • Be aware that the model may reflect biases present in its training data
  • The model should not be used to generate queries intended to exploit database vulnerabilities
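
As one concrete guard, generated SQL can be dry-run before it is ever executed for real. A minimal sketch using SQLite's EXPLAIN over a read-only connection (the database path and the guard policy are assumptions; adapt to your engine):

import sqlite3

# Compile generated SQL with EXPLAIN on a read-only connection: the
# statement is parsed and planned but never executed against the data
def dry_run(sql: str, db_path: str) -> bool:
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        conn.execute(f"EXPLAIN {sql}")
        return True
    except sqlite3.Error as exc:
        print(f"Rejected generated SQL: {exc}")
        return False
    finally:
        conn.close()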

Citation

If you use this model in your research or applications, please cite:

@misc{gemma3-text-to-sql,
  author = {parole-study-viper},
  title = {Gemma 3 Text-to-SQL: A LoRA-fine-tuned adapter for natural language to SQL conversion},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/parole-study-viper/gemma-3-text-to-sql}}
}

License

This model adapter is licensed under the Apache 2.0 License. Usage of the base Gemma 3 model is subject to Google's Gemma license terms.

Acknowledgements

We thank Google for releasing the Gemma 3 models and the Hugging Face team for their transformers library and model hosting. We also acknowledge the contributions of the MLX team at Apple for enabling efficient inference on Apple Silicon.


If you find any issues or have suggestions for improvement, please open an issue on the GitHub repository or reach out on the Hugging Face community forums.

This model was created by @parole-study-viper.
