MaWared HR Reasoning Model
Model Details
- Base Model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
- Finetuned by: Daemontatox
- License: Apache-2.0
- Language: English
- Tags: text-generation-inference, transformers, unsloth, qwen2, trl
Overview
This model is a finetuned version of the deepseek-r1-distill-qwen-7b
model, optimized for MaWared HR reasoning. It was trained with Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training.
Features
- HR Query Reasoning: Provides logical and well-structured responses to complex HR-related inquiries.
- Decision Support: Assists HR professionals in making informed decisions based on policies and regulations.
- Enhanced Performance: Optimized for deep reasoning and contextual understanding in HR-related scenarios.
Installation
To use this model, install the required dependencies:
pip install torch transformers accelerate unsloth
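Optionally, if GPU memory is tight, the model can be loaded in 4-bit (the base checkpoint is itself a bnb-4bit quantization). This requires the additional bitsandbytes package; the NF4 settings below are a minimal sketch, not values published for this model:
pip install bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Assumed 4-bit NF4 configuration; adjust to your hardware
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_name = "Daemontatox/mawared-hr-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config, device_map="auto")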
Usage
You can load and use the model with the following Python snippet:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Daemontatox/mawared-hr-reasoning"

# Load the tokenizer and the model in half precision; device_map="auto" places it on the available device(s)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Tokenize the prompt and move it to the model's device (works on CPU as well as GPU)
input_text = "How should I handle a conflict between employees?"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# max_new_tokens bounds only the generated answer; increase it for longer reasoning traces
output = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
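Because the base model is a DeepSeek-R1 distill chat model, prompts formatted with the tokenizer's chat template generally produce better reasoning than raw text. The snippet below is a minimal sketch that assumes this fine-tune keeps the base model's chat template; the sampling settings are assumptions, not values published for this model:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Daemontatox/mawared-hr-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Wrap the HR question in the model's chat template and append the generation prompt
messages = [{"role": "user", "content": "How should I handle a conflict between employees?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Reasoning models tend to emit a long chain of thought, so allow plenty of new tokens
output = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
If the fine-tune preserves the base model's behaviour, the decoded output will typically contain the model's intermediate reasoning followed by the final answer.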
Acknowledgments
This model was developed using Unsloth and Hugging Face's TRL library. Special thanks to the open-source community for their contributions.
License
This model is licensed under the Apache-2.0 license.