---
license: apache-2.0
language:
- en
tags:
- text-generation
- llama-2
- fine-tuning
- mindsdb
- huggingface
base_model: meta-llama/Llama-2-7b-hf
pipeline_tag: text-generation
library_name: transformers
---
# Llama2 Fine-tuned on MindsDB Docs

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the MindsDB documentation dataset.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ako-oak/llama2-finetuned-mindsdb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def chat(prompt):
    # Tokenize the prompt and generate a response (capped at 200 total tokens)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_length=200)
    # Decode the full output sequence back to text
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(chat("What is the purpose of handlers in MindsDB?"))
```
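Note that for causal language models, `generate` returns the prompt followed by the completion, so `chat` echoes the question back. A small helper (hypothetical, not part of this model) can strip the echoed prompt from the decoded text:

```python
def strip_prompt(prompt: str, generated: str) -> str:
    """Return only the completion by removing the echoed prompt prefix."""
    if generated.startswith(prompt):
        return generated[len(prompt):].lstrip()
    # If decoding altered the prompt text, fall back to the full output
    return generated

# Example with hypothetical strings (no model call needed):
# strip_prompt("What is MindsDB?", "What is MindsDB? MindsDB is ...")
# returns "MindsDB is ..."
```

You could wrap the return value of `chat` with this helper to show only the answer.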