# Llama2 Fine-tuned on MindsDB Docs

This model is a fine-tuned version of `meta-llama/Llama-2-7b-hf` trained on the MindsDB documentation dataset.
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ako-oak/llama2-finetuned-mindsdb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def chat(prompt):
    # Tokenize the prompt and generate up to 200 new tokens
    # (max_new_tokens excludes the prompt length, unlike max_length).
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=200)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(chat("What is the purpose of handlers in MindsDB?"))
```
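Because the base checkpoint is the raw (non-chat) `Llama-2-7b-hf` model, the prompt format this fine-tune expects is not documented here. The helper below is a minimal sketch of the common Llama-2 instruction template, assuming `[INST]`/`<<SYS>>` tags were used during fine-tuning; `build_prompt` is a hypothetical name, not part of the model's API:

```python
# Hypothetical helper: wraps a question in a Llama-2-style [INST] prompt.
# The exact template used during fine-tuning is an assumption; adjust it
# to match the training data if results look off.
def build_prompt(question, system=None):
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"[INST] {sys_block}{question} [/INST]"

prompt = build_prompt(
    "What is the purpose of handlers in MindsDB?",
    system="You answer questions about the MindsDB documentation.",
)
print(prompt)
```

The resulting string can be passed to the `chat` function above in place of the bare question.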
## Inference Providers
This model is not currently available via any of the supported Inference Providers.
## Model tree for ako-oak/llama2-finetuned-mindsdb

Base model: `meta-llama/Llama-2-7b-hf`