---
license: apache-2.0
tags:
  - unsloth
  - trl
  - sft
datasets:
  - billa-man/llm-prompt-recovery
language:
  - en
base_model:
  - unsloth/Llama-3.2-3B-Instruct
pipeline_tag: text2text-generation
library_name: transformers
---

Model that I fine-tuned for the Kaggle competition: LLM Prompt Recovery

My Kaggle implementation: Notebook

Usage:

```python
from unsloth import FastLanguageModel
from peft import PeftModel

# Load the base model in 4-bit with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-3B-Instruct",
    max_seq_length = 512,
    dtype = None,  # auto-detect (float16 / bfloat16)
    load_in_4bit = True,
)

# Attach the fine-tuned LoRA adapter.
model = PeftModel.from_pretrained(model, "billa-man/llm-prompt-recovery")
```
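
Optionally, Unsloth's faster generation path can be enabled after loading; a minimal sketch using the same Unsloth API as above:

```python
# Optional: put the Unsloth-patched model into its faster inference mode.
FastLanguageModel.for_inference(model)
```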

The input to the LLM uses the following format:

```python
{"role": "user", "content": "Return the prompt that was used to transform the original text into the rewritten text. Original Text: " + original_text + ", Rewritten Text: " + rewritten_text}
```

An example:

```python
original_text = "Recent breakthroughs have demonstrated several ways to induce magnetism in materials using light, with significant implications for future computing and data storage technologies."
rewritten_text = "Light-induced magnetic phase transitions in non-magnetic materials have been experimentally demonstrated through ultrafast optical excitation, offering promising pathways for photomagnetic control in next-generation spintronic devices and quantum computing architectures."

inference(original_text, rewritten_text)  # see the notebook for the inference() code
```

This returns:

```
'Rewrite the following sentence while maintaining its original meaning but using different wording and terminology: "Recent breakthroughs have demonstrated several ways to induce magnetism in materials using light, with significant implications for future computing and data storage technologies."'
```
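
The `inference` helper itself is defined in the notebook. A minimal, hypothetical sketch of what such a function could look like, reusing the `build_messages` sketch above (the generation settings here are illustrative assumptions, not the notebook's exact parameters):

```python
def inference(original_text: str, rewritten_text: str) -> str:
    # Hypothetical stand-in for the notebook's helper; see the notebook for the original.
    input_ids = tokenizer.apply_chat_template(
        build_messages(original_text, rewritten_text),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output_ids = model.generate(input_ids=input_ids, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens (skip the echoed prompt).
    return tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
```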