---
license: openrail
datasets:
- bugdaryan/spider-natsql-wikisql-instruct
language:
- en
tags:
- code
---
# Wizard Coder SQL-Generation Model

## Overview
- Model Name: WizardCoderSQL-15B-V1.0
- Repository: GitHub Repository
- License: OpenRAIL-M
- Fine-Tuning Dataset: bugdaryan/spider-natsql-wikisql-instruct
## Description
WizardCoderSQL-15B-V1.0 is a fine-tuned version of the Wizard Coder 15B model, specialized for SQL generation. It was fine-tuned on the bugdaryan/spider-natsql-wikisql-instruct dataset so that it can generate SQL queries from natural language instructions.
## Model Details
- Base Model: Wizard Coder 15B
- Fine-Tuned Model Name: WizardCoderSQL-15B-V1.0
- Fine-Tuning Parameters (see the code sketch after this list for how they map onto the standard APIs):
  - QLoRA Parameters:
    - LoRA Attention Dimension (lora_r): 64
    - LoRA Alpha Parameter (lora_alpha): 16
    - LoRA Dropout Probability (lora_dropout): 0.1
  - bitsandbytes Parameters:
    - Use 4-bit Precision Base Model (use_4bit): True
    - Compute Dtype for 4-bit Base Models (bnb_4bit_compute_dtype): float16
    - Quantization Type (bnb_4bit_quant_type): nf4
    - Activate Nested Quantization (use_nested_quant): False
  - TrainingArguments Parameters:
    - Number of Training Epochs (num_train_epochs): 1
    - Enable FP16/BF16 Training (fp16/bf16): False/True
    - Batch Size per GPU for Training (per_device_train_batch_size): 48
    - Batch Size per GPU for Evaluation (per_device_eval_batch_size): 4
    - Gradient Accumulation Steps (gradient_accumulation_steps): 1
    - Enable Gradient Checkpointing (gradient_checkpointing): True
    - Maximum Gradient Norm (max_grad_norm): 0.3
    - Initial Learning Rate (learning_rate): 2e-4
    - Weight Decay (weight_decay): 0.001
    - Optimizer (optim): paged_adamw_32bit
    - Learning Rate Scheduler Type (lr_scheduler_type): cosine
    - Maximum Training Steps (max_steps): -1
    - Warmup Ratio (warmup_ratio): 0.03
    - Group Sequences into Batches with Same Length (group_by_length): True
    - Save Checkpoint Every X Update Steps (save_steps): 0
    - Log Every X Update Steps (logging_steps): 25
  - SFT Parameters:
    - Maximum Sequence Length (max_seq_length): 500
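For reference, here is a minimal sketch of how the hyperparameters above map onto the standard peft / bitsandbytes / transformers APIs. This is an illustration of the listed settings, not the actual training script used for this model; `output_dir` is assumed.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# bitsandbytes parameters: load the base model in 4-bit NF4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # use_4bit
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype
    bnb_4bit_quant_type='nf4',             # bnb_4bit_quant_type
    bnb_4bit_use_double_quant=False,       # use_nested_quant
)

# QLoRA parameters: low-rank adapters over the quantized base model.
peft_config = LoraConfig(
    r=64,              # lora_r
    lora_alpha=16,     # lora_alpha
    lora_dropout=0.1,  # lora_dropout
    task_type='CAUSAL_LM',
)

# TrainingArguments parameters, copied from the list above.
training_args = TrainingArguments(
    output_dir='./results',  # assumed; not stated in the card
    num_train_epochs=1,
    bf16=True,               # fp16/bf16: False/True
    per_device_train_batch_size=48,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim='paged_adamw_32bit',
    lr_scheduler_type='cosine',
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=0,
    logging_steps=25,
)

# These configs, plus max_seq_length=500, would then be passed to an SFT
# trainer such as trl's SFTTrainer; the exact trainer signature depends
# on the trl version.
```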
## Performance
- Fine-Tuned Model Metrics: No evaluation results have been published for this fine-tune yet.
## Dataset
- Fine-Tuning Dataset: bugdaryan/spider-natsql-wikisql-instruct
- Dataset Description: This dataset pairs natural language instructions with SQL queries and serves as the training data for fine-tuning the Wizard Coder model for SQL generation.
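To inspect the data before training, the dataset can be loaded with the Hugging Face datasets library. This is a minimal sketch; print a record to see the exact column names the dataset exposes.

```python
from datasets import load_dataset

# Load the instruction-tuning dataset from the Hugging Face Hub.
ds = load_dataset('bugdaryan/spider-natsql-wikisql-instruct')

print(ds)              # available splits and column names
print(ds['train'][0])  # one instruction/SQL record
```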
## Model Card Information
- Maintainer: Spartak Bughdaryan
- Contact: [email protected]
- Date Created: September 15, 2023
- Last Updated: September 15, 2023
## Usage
To use this fine-tuned model for SQL generation, load it with the Hugging Face Transformers library in Python. Here's an example:
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    pipeline,
)

model_name = 'bugdaryan/WizardCoderSQL-15B-V1.0'

# Load the model and tokenizer; device_map='auto' places the weights on
# the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)

# Schema of the database the question refers to, passed to the model as
# plain CREATE TABLE statements.
tables = (
    "CREATE TABLE sales ( sale_id number PRIMARY KEY, product_id number, "
    "customer_id number, salesperson_id number, sale_date DATE, quantity number, "
    "FOREIGN KEY (product_id) REFERENCES products(product_id), "
    "FOREIGN KEY (customer_id) REFERENCES customers(customer_id), "
    "FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)); "
    "CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, "
    "product_id number, supply_price number, "
    "FOREIGN KEY (product_id) REFERENCES products(product_id)); "
    "CREATE TABLE customers ( customer_id number PRIMARY KEY, name text, address text ); "
    "CREATE TABLE salespeople ( salesperson_id number PRIMARY KEY, name text, region text );"
)

question = 'Find the salesperson who made the most sales.'

# The model expects an Alpaca-style instruction prompt with the question
# and the schema as context.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    f"the request. ### Instruction: Convert text to SQLite query: {question} "
    f"{tables} ### Response:"
)

ans = pipe(prompt, max_new_tokens=200)
print(ans[0]['generated_text'])
```
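Note that the text-generation pipeline echoes the prompt before the completion. To get only the generated SQL, pass `return_full_text=False`:

```python
# Return only the completion, without repeating the prompt.
ans = pipe(prompt, max_new_tokens=200, return_full_text=False)
print(ans[0]['generated_text'])  # just the generated SQL query
```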