Travel-Llama-3.1-8B-Instruct LoRA Adapter

This is a LoRA fine-tuned adapter for unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit.
It was trained to improve the model's performance on travel-related conversations, such as recommending hotels, restaurants, and destinations, and answering questions about places in İstanbul.


Model Details

  • Base model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
  • Adapter type: LoRA (PEFT)
  • Language(s): Turkish
  • Model type: Causal LM (Instruction-tuned)
  • Finetuned for: Travel domain assistance
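As background on the adapter type: a LoRA adapter leaves the base weight matrix W frozen and learns a low-rank update, so the effective layer computes y = Wx + (alpha/r)·BAx. A minimal NumPy sketch (toy dimensions, not the real model's):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 16  # toy sizes; real layers are much larger

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized, so the adapter starts as a no-op

def lora_forward(x):
    # base path plus scaled low-rank update: y = W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
y = lora_forward(x)
# with B = 0 the adapted layer is identical to the base layer
assert np.allclose(y, W @ x)
```

Because only A and B are trained (2·r·d parameters instead of d²), the adapter checkpoint stays small relative to the 8B base model.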

How to Use

You can load this adapter on top of the base model with `peft`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit"
lora_model = "She4AI/Travel-Llama-3.1-8B-Instruct-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_model)

# Load the 4-bit quantized base model (requires bitsandbytes)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
)

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, lora_model)

# Example
inputs = tokenizer("Give me a 3-day travel plan for Istanbul.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
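Since the base model is instruction-tuned, prompts generally work better when formatted with the tokenizer's chat template (`tokenizer.apply_chat_template(...)`) rather than passed as raw text. For reference, a hand-rolled sketch of the Llama 3.1 Instruct header-based format (the `build_llama3_prompt` helper is illustrative, not part of any library; prefer the tokenizer's template in real use):

```python
def build_llama3_prompt(system, user):
    # Approximation of the Llama 3.x Instruct chat format:
    # each turn is wrapped in header tokens and terminated with <|eot_id|>,
    # ending with an open assistant header for the model to complete.
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "Sen bir seyahat asistanısın.",  # "You are a travel assistant."
    "İstanbul için 3 günlük bir gezi planı öner.",  # "Suggest a 3-day itinerary for Istanbul."
)
print(prompt)
```

Passing a prompt like this (or the tokenizer-generated equivalent) to `model.generate` matches the conversational format the adapter was trained on.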