# Model Trained Using AutoTrain

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the mental_health_counseling_conversations dataset.

## Usage


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "prabureddy/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_path)
# device_map="auto" places the weights across available GPUs;
# torch_dtype="auto" keeps the checkpoint's native precision (fp16 here).
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Build the conversation in the chat format expected by the Mistral instruct template
messages = [
    {"role": "user", "content": "Hey Alex! I have been feeling a bit down lately. I could really use some advice on how to feel better."}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
# Generate on the model's device; max_new_tokens caps the length of the reply.
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=512)
# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

print(response)
```
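
If you are tight on GPU memory, the same checkpoint can also be loaded in 4-bit instead of fp16. This is a minimal sketch assuming the optional `bitsandbytes` package is installed; the quantization settings shown (NF4 with fp16 compute) are illustrative defaults, not values validated for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "prabureddy/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2"

# Illustrative 4-bit setup via bitsandbytes (assumed installed);
# NF4 quantization with fp16 compute is a common starting point.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    quantization_config=bnb_config,
).eval()
```

Generation then works exactly as in the example above; 4-bit loading typically cuts the memory footprint to roughly a quarter of fp16 at some cost in output quality.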