---
language: en
tags:
  - phi-2
  - openassistant
  - conversational
license: mit
---

# Phi-2 Fine-tuned on OpenAssistant

This model is a fine-tuned version of Microsoft's Phi-2, trained on the OpenAssistant Conversations dataset using QLoRA (Quantized Low-Rank Adaptation).

## Model Description

- **Base Model:** Microsoft Phi-2
- **Training Data:** OpenAssistant Conversations Dataset
- **Training Method:** QLoRA (Quantized Low-Rank Adaptation)
- **Use Case:** Conversational AI and text generation

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("your-username/phi2-finetuned-openassistant")
tokenizer = AutoTokenizer.from_pretrained("your-username/phi2-finetuned-openassistant")

# Generate a response to a prompt; max_new_tokens caps the generated
# tokens only, so a long prompt cannot eat into the generation budget
input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
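
Since the model was trained with 4-bit quantization, it can also be loaded in 4-bit for memory-efficient inference. This is a minimal sketch, assuming `bitsandbytes` is installed and a CUDA GPU is available; the quantization settings shown are common NF4 defaults, not values taken from this repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization config (illustrative defaults, assumes bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "your-username/phi2-finetuned-openassistant",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("your-username/phi2-finetuned-openassistant")
```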

## Training Details

- Fine-tuned for 1 epoch
- Used 4-bit quantization for memory-efficient training
- Implemented gradient checkpointing and mixed-precision training (see the sketch after this list)
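
The exact training script is not included here; the following is a minimal sketch of a comparable QLoRA setup using `peft`, where the LoRA rank, alpha, dropout, and target modules are illustrative assumptions rather than the values used in the actual run:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (NF4) precision
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)

# Enable gradient checkpointing and prepare the quantized model for training
base_model.gradient_checkpointing_enable()
base_model = prepare_model_for_kbit_training(base_model)

# Attach LoRA adapters; rank and target modules are assumptions, not the
# actual hyperparameters of this model
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Mixed-precision training for 1 epoch, matching the details above; the
# model and these arguments would then be passed to a Trainer along with
# the OpenAssistant dataset
training_args = TrainingArguments(
    output_dir="phi2-finetuned-openassistant",
    num_train_epochs=1,
    fp16=True,
    gradient_checkpointing=True,
)
```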

## Limitations

- The model inherits limitations from both Phi-2 and the OpenAssistant dataset
- It may produce incorrect or biased information
- It should be used with appropriate content filtering and moderation

## License

This model is released under the MIT License.