LLAMA 3 Story Point Estimator - duracloud

This model is fine-tuned on issue descriptions from the duracloud project and evaluated on the same project for story point estimation.

Model Details

  • Base Model: LLAMA 3.2 1B

  • Training Project: duracloud

  • Test Project: duracloud

  • Task: Story Point Estimation (Regression)

  • Architecture: PEFT (LoRA)

  • Input: Issue titles

  • Output: Story point estimate (continuous value)
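
The architecture described above corresponds to a LLaMA sequence classification head with a single regression output, wrapped in a LoRA adapter. A minimal sketch of that setup follows; the base repository id and the LoRA hyperparameters are illustrative assumptions, not values stated in this card:

from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Single-label head (num_labels=1) so the model emits one continuous story point value
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",  # assumed base repository for "LLAMA 3.2 1B"
    num_labels=1,
)

# Illustrative LoRA hyperparameters; the card does not specify rank/alpha/dropout
lora_cfg = LoraConfig(task_type="SEQ_CLS", r=16, lora_alpha=32, lora_dropout=0.1)
peft_model = get_peft_model(base, lora_cfg)
peft_model.print_trainable_parameters()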

Usage

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Load the PEFT adapter configuration
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-duracloud")

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-duracloud")
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/0-LLAMA3SP-duracloud")

# Prepare input text (truncated/padded to 20 tokens, matching the training sequence length)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

# Get prediction (a single continuous story point estimate)
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
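
The same call pattern extends to a batch of issue titles. A minimal sketch, where the titles are hypothetical examples and not drawn from the duracloud dataset:

# Hypothetical issue titles; tokenization mirrors the single-example call above
titles = ["Add retry logic to the storage provider", "Fix snapshot restore timeout"]
batch = tokenizer(titles, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
batch = {k: v.to(model.device) for k, v in batch.items()}

with torch.no_grad():
    logits = model(**batch).logits          # shape: (batch_size, 1)
story_points = logits.squeeze(-1).tolist()  # one continuous estimate per title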

Training Details

  • Fine-tuning method: LoRA (Low-Rank Adaptation)
  • Sequence length: 20 tokens
  • Best training epoch: 13 / 20 epochs
  • Batch size: 32
  • Training time: 288.061 seconds
  • Mean Absolute Error (MAE): 1.118
  • Median Absolute Error (MdAE): 0.944
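
For reference, the two error metrics above are computed as follows. The prediction and label arrays in this sketch are hypothetical; only the formulas correspond to the reported values:

import numpy as np

# Hypothetical predicted vs. actual story points, for illustration only
preds = np.array([2.1, 4.8, 1.2, 3.5])
actual = np.array([3.0, 5.0, 1.0, 2.0])

abs_err = np.abs(preds - actual)
mae = abs_err.mean()       # Mean Absolute Error (reported above: 1.118)
mdae = np.median(abs_err)  # Median Absolute Error (reported above: 0.944)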

Framework versions

  • PEFT 0.14.0