TinyLlama Dialogue Summarization Fine-Tuned Model

This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 specifically for dialogue summarization tasks. It was fine-tuned using Parameter-Efficient Fine-Tuning (PEFT) with LoRA on the DialogSum dataset.

Model Details

  • Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Architecture: Causal Language Model
  • Fine-tuning Method: Parameter-Efficient Fine-Tuning (PEFT) with LoRA
  • Dataset: DialogSum
  • Quantization: 4-bit quantization was used during training to reduce memory consumption (a representative setup is sketched below).

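The exact training configuration is not published with this card, but a typical QLoRA-style setup matching the details above would look roughly like the sketch below. All hyperparameters (r, lora_alpha, dropout, target modules) are illustrative assumptions, not the values used for this checkpoint.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the 1.1B base model with 4-bit NF4 quantization to cut memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters to the attention projections; only these small
# adapter matrices are trained, while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,                  # assumed rank, not the published value
    lora_alpha=32,         # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
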
Intended Uses

This model is intended for generating concise and accurate summaries of dialogues. It can be used for various applications, including:

  • Summarizing customer service conversations.
  • Generating meeting summaries.
  • Creating summaries of chat logs.

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
peft_model_id = "artisokka/tinyllama-dialogsum-finetuned"

# Load the tokenizer and base model, then attach the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()

input_text = "Dialogue: ... (your dialogue here) ..."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(model.device)

# max_new_tokens caps the generated summary length without counting the
# prompt tokens, unlike max_length, which includes them.
outputs = model.generate(input_ids, max_new_tokens=200)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(summary)
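
To serve the model without a runtime peft dependency (for example, with plain transformers or an inference server), the adapter can be folded back into the base weights. This is a minimal sketch using PEFT's merge_and_unload, continuing from the code above; the output directory name is illustrative.

# Merge the LoRA weights into the base model and save a standalone checkpoint.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("tinyllama-dialogsum-merged")   # illustrative path
tokenizer.save_pretrained("tinyllama-dialogsum-merged")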