LLAMA 3 Story Point Estimator - jirasoftware

This model was fine-tuned on issue descriptions from the jirasoftware project and evaluated on the same project for story point estimation.

Model Details

  • Base Model: LLAMA 3.2 1B

  • Training Project: jirasoftware

  • Test Project: jirasoftware

  • Task: Story Point Estimation (Regression)

  • Architecture: PEFT (LoRA)

  • Input: Issue titles

  • Output: Story point estimation (continuous value)

Usage

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Load the PEFT adapter configuration
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-jirasoftware")

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-jirasoftware")
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/0-LLAMA3SP-jirasoftware")

# Prepare input text
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Get prediction (the single regression logit is the estimated story points)
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
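
Several issues can be scored in one forward pass by batching the tokenizer call. The snippet below is a minimal sketch under the same setup as above; the example titles are illustrative only.

# Score a batch of issue titles in a single forward pass (illustrative sketch)
texts = [
    "Add OAuth login support",
    "Fix null pointer exception in board view",
]
batch = tokenizer(texts, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
with torch.no_grad():
    logits = model(**batch).logits          # shape: (batch_size, 1)
story_points = logits.squeeze(-1).tolist()  # one continuous estimate per issue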

Training Details

  • Fine-tuning method: LoRA (Low-Rank Adaptation); see the configuration sketch after this list
  • Sequence length: 20 tokens
  • Best training epoch: 3 / 20 epochs
  • Batch size: 32
  • Training time: 45.346 seconds
  • Mean Absolute Error (MAE): 2.355
  • Median Absolute Error (MdAE): 2.212
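
The adapter was trained with PEFT's LoRA implementation. The sketch below shows how such a configuration could be assembled; the rank, alpha, dropout, target modules, and exact base-model repo id are assumptions not stated on this card, while the single-label regression head and task type follow from the details above.

# Hypothetical LoRA training setup; r, lora_alpha, lora_dropout and
# target_modules are assumed values, not taken from this card.
import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",             # LLAMA 3.2 1B per the card; exact repo id is an assumption
    num_labels=1,                          # single continuous output (regression)
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,            # sequence classification head
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling factor
    lora_dropout=0.1,                      # assumed dropout
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()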

Framework versions

  • PEFT 0.14.0