---
license: mit
datasets:
- gretelai/synthetic_text_to_sql
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
tags:
- text-to-sql
---


## Model Details

This model is a fine-tuned version of Llama-3.2-3B-Instruct designed specifically for Text-to-SQL tasks. It was trained to accept a database schema and a natural language question, and output a valid SQL query along with a brief explanation of the logic.
It is lightweight (3B parameters), making it suitable for local deployment on consumer GPUs using 4-bit quantization.
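For example, a minimal 4-bit loading sketch (assuming `bitsandbytes` is installed; the NF4 settings below mirror the training quantization listed under Training Details, but the exact inference config is an assumption, not part of this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Ary-007/Text-to-sql-llama-3.2"

# NF4 4-bit quantization keeps the 3B model within typical consumer-GPU VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```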

### Model Description
1) Base Model: unsloth/Llama-3.2-3B-Instruct (the Unsloth mirror of meta-llama/Llama-3.2-3B-Instruct)
2) Fine-tuning Framework: Unsloth (QLoRA)
3) Dataset: gretelai/synthetic_text_to_sql


## Uses

The model was trained using the Alpaca prompt format. For best results, structure your input as follows:


![image](https://cdn-uploads.huggingface.co/production/uploads/656b8d33e8bf55919a6aa345/1XEAfrZ5iU7doBDy_T5ef.png)
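In plain text, the same template (used verbatim in the quickstart code below) is:

```text
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Company Database : {schema}

### Input:
SQL Prompt :{question}

### Response:
```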

## How to Get Started with the Model
```python

import torch
from transformers import pipeline

model_id = "Ary-007/Text-to-sql-llama-3.2"

# Load the pipeline
pipe = pipeline(
    "text-generation", 
    model=model_id, 
    device_map="auto",
)

# Define the schema (Context)
schema = """
CREATE TABLE employees (
    id INT,
    name TEXT,
    department TEXT,
    salary INT,
    hire_date DATE
);
"""

# Define the user question
question = "Find the name and salary of employees in the 'Engineering' department who earn more than 80000."

# Format the prompt exactly as trained
prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Company Database : {schema}

### Input:
SQL Prompt :{question}

### Response:
"""

outputs = pipe(
    prompt, 
    max_new_tokens=200, 
    do_sample=True, 
    temperature=0.1, 
    top_p=0.9
)

# The pipeline echoes the prompt; slice it off to keep only the generated SQL
print(outputs[0]["generated_text"][len(prompt):])
```
## Training Details

The model was fine-tuned using Unsloth on a Tesla T4 GPU (Google Colab); a minimal training sketch follows the hyperparameter list below.

### Hyperparameters
1) Rank (r): 16
2) LoRA Alpha: 16
3) Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
4) Quantization: 4-bit (NF4, NormalFloat4)
5) Max Sequence Length: 2048
6) Learning Rate: 2e-4
7) Optim: adamw_8bit
8) Max Steps: 60
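
A minimal Unsloth training sketch with these hyperparameters (batch size, gradient accumulation, and `output_dir` are assumptions taken from the standard Unsloth Colab recipe, not values stated in this card; `dataset` is the Alpaca-formatted dataset described under Dataset Info):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # NF4 quantization via bitsandbytes
)

# Attach LoRA adapters to the attention and MLP projections
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,          # Alpaca-formatted text (see Dataset Info)
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # assumption: typical Colab T4 setting
        gradient_accumulation_steps=4,   # assumption
        max_steps=60,
        learning_rate=2e-4,
        optim="adamw_8bit",
        output_dir="outputs",            # assumption
    ),
)
trainer.train()
```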


## Dataset Info
The model was trained on the gretelai/synthetic_text_to_sql dataset, utilizing the following fields (see the formatting sketch after this list):
1) sql_context: Used as the database schema context.
2) sql_prompt: The natural language question.
3) sql: The target SQL query.
4) sql_explanation: The explanation of the query logic.
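
As a sketch, these fields can be mapped into the Alpaca template from the Uses section (the exact layout of the response, the target SQL followed by its explanation, is an assumption; real training would also append `tokenizer.eos_token` to each example):

```python
from datasets import load_dataset

dataset = load_dataset("gretelai/synthetic_text_to_sql", split="train")

def format_example(row):
    # Fill the Alpaca template shown under Uses; the response combines
    # the target SQL with its explanation (layout is an assumption).
    return {
        "text": (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\nCompany Database : {row['sql_context']}\n\n"
            f"### Input:\nSQL Prompt :{row['sql_prompt']}\n\n"
            f"### Response:\n{row['sql']}\n\n{row['sql_explanation']}"
        )
    }

dataset = dataset.map(format_example)
```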

## Limitations
1) Training Steps: This model was trained for a limited number of steps (60) as a proof of concept. It may not generalize well to extremely complex or unseen database schemas.
2) Hallucination: Like all LLMs, it may generate syntactically correct but logically incorrect SQL. Always validate the output before running it on a production database (see the validation sketch after this list).
3) Scope: It is optimized for standard SQL (similar to SQLite/PostgreSQL) as presented in the GretelAI dataset.
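
One hedged way to validate output before it touches real data is a dry run against an in-memory SQLite database (`validate_sql` is a hypothetical helper; `EXPLAIN` only catches syntax and binding errors, not logical mistakes):

```python
import sqlite3

def validate_sql(schema: str, query: str) -> bool:
    """Dry-run a generated query against an in-memory copy of the schema."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)
        # EXPLAIN compiles the statement without executing it
        conn.execute(f"EXPLAIN {query}")
        return True
    except sqlite3.Error as err:
        print(f"Rejected query: {err}")
        return False
    finally:
        conn.close()

# Example with the schema from the quickstart
schema = "CREATE TABLE employees (id INT, name TEXT, department TEXT, salary INT, hire_date DATE);"
print(validate_sql(schema, "SELECT name, salary FROM employees "
                           "WHERE department = 'Engineering' AND salary > 80000;"))
```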

## License
This model is derived from Llama-3.2 and is subject to the Llama 3.2 Community License.