Description
RecAlpaca is an Alpaca-LoRA model that has been instruction-tuned on a recommendation-task dataset built from MovieLens 100K.
Usage
from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel
device = "cuda"
base_model_name = "decapoda-research/llama-7b-hf"
lora_weights = "AGI-Edgerunners/RecAlpaca-lora-7b-v1"
tokenizer = LlamaTokenizer.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(LlamaForCausalLM.from_pretrained(base_model_name), lora_weights).to(device)
generation_config = GenerationConfig(
    temperature=0.1,
    top_p=0.73,
    top_k=40,
    num_beams=4,
)
max_new_tokens = 128
instruction = "Based on the movies that I've watched before, could you suggest some similar movies for me to watch next? Please use the MovieLens 100K dataset to recommend movies that you think would appeal to my tastes."
inputs = "The Long Kiss Goodnight, French Kiss, The Maltese Falcon, Dazed and Confused, and Strange Days"
prompt = f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{inputs}
### Response:
"""
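The prompt construction above can also be wrapped in a small helper so that it is easy to reuse with other instruction/input pairs. This is a sketch; the function name `generate_prompt` is an illustrative choice and not part of the released code, but the template it emits matches the one used above.

```python
def generate_prompt(instruction: str, model_input: str) -> str:
    # Build the Alpaca-style prompt used in this model card:
    # a fixed preamble, then Instruction / Input / Response sections.
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n"
        f"### Instruction:\n{instruction}\n"
        f"### Input:\n{model_input}\n"
        "### Response:\n"
    )
```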
model_inputs = tokenizer(prompt, return_tensors="pt")
input_ids = model_inputs["input_ids"].to(device)
generation_output = model.generate(
    input_ids=input_ids,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=True,
    max_new_tokens=max_new_tokens,
)
print(tokenizer.decode(generation_output.sequences[0]))
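Note that `generate` returns the prompt tokens followed by the completion, so the decoded string echoes the full prompt. A small post-processing helper (sketched below; the name `extract_response` is illustrative, but the split marker matches the template above) can isolate just the model's recommendations:

```python
def extract_response(decoded: str) -> str:
    # The decoded sequence repeats the whole prompt; keep only the text
    # after the final "### Response:" marker.
    marker = "### Response:"
    return decoded.split(marker)[-1].strip()
```

For example: `recommendations = extract_response(tokenizer.decode(generation_output.sequences[0]))`.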
More Details
See our GitHub repository: RecAlpaca