Titlebreaker LoRA Adapter
This is a LoRA (Low-Rank Adaptation) adapter for the Qwen3-0.6B model, fine-tuned for title cleaning tasks.
Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Load and apply the LoRA adapter
model = PeftModel.from_pretrained(base_model, "sch-ai/titlebreaker-lora-adapter")
model.eval()

# Generate a clean title
def clean_title(dirty_title, max_length=200):
    prompt = f"<title_clean> {dirty_title}"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_length=max_length,
            do_sample=True,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Extract the clean title from between the tags
    if "</title_clean>" in generated:
        return generated.split("</title_clean>")[0].split("<title_clean>")[-1].strip()
    return generated

# Example usage
dirty_title = "Your dirty title here"
clean_result = clean_title(dirty_title)
print(f"Clean title: {clean_result}")
```
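The tag-extraction step in `clean_title` is plain string handling and can be exercised without loading the model. The snippet below is a standalone sketch of that logic; the sample strings are made up for illustration:

```python
# Standalone check of the tag-extraction logic used in clean_title():
# returns the text between <title_clean> and </title_clean>,
# or the input unchanged if the closing tag is absent.
def extract_clean(generated: str) -> str:
    if "</title_clean>" in generated:
        return generated.split("</title_clean>")[0].split("<title_clean>")[-1].strip()
    return generated

print(extract_clean("<title_clean> Hello World </title_clean> trailing text"))
# -> Hello World
print(extract_clean("no tags here"))
# -> no tags here
```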
Training Details
- Base model: Qwen/Qwen3-0.6B
- LoRA rank: 64
- LoRA alpha: 16
- LoRA dropout: 0.1
- Task type: Causal Language Modeling
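For context on what rank 64 buys: a LoRA update replaces a dense `d_out x d_in` weight delta with two low-rank factors `B` (`d_out x r`) and `A` (`r x d_in`), scaled by `alpha / r`. The sketch below uses an illustrative 1024x1024 projection, not Qwen3-0.6B's actual layer shapes:

```python
# Parameter-count sketch for a LoRA update with the settings above
# (r = 64, alpha = 16). The 1024x1024 dimensions are illustrative,
# not taken from Qwen3-0.6B's real config.
d_in, d_out, r, alpha = 1024, 1024, 64, 16

full_update = d_out * d_in          # dense weight delta
lora_update = d_out * r + r * d_in  # B (d_out x r) + A (r x d_in)
scaling = alpha / r                 # factor applied to B @ A

print(full_update)  # 1048576
print(lora_update)  # 131072
print(scaling)      # 0.25
```

At this shape the adapter trains one eighth of the parameters a full-rank update would, which is why the checkpoint stays small relative to the base model.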
Framework versions
- PEFT 0.17.0