LLAMA 3 Story Point Estimator - mule

This model is fine-tuned on issue descriptions from the mule project and evaluated on the same project for story point estimation.

Model Details

  • Base Model: LLAMA 3.2 1B

  • Training Project: mule

  • Test Project: mule

  • Task: Story Point Estimation (Regression)

  • Architecture: PEFT (LoRA); see the adapter setup sketch after this list

  • Input: Issue titles

  • Output: Story point estimation (continuous value)
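
A minimal sketch of how a LoRA adapter of this kind is typically set up with peft: the base model is frozen and only low-rank adapter weights (plus the single-output regression head) are trained. The rank, alpha, dropout, and target modules below are illustrative assumptions; the card does not report the exact values.

from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base checkpoint per the card; num_labels=1 gives a single-output regression head
base_model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    num_labels=1,
)

# Illustrative LoRA hyperparameters (not reported by this card)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only adapter weights (and the head) train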

Usage

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Load the adapter configuration to locate the base checkpoint
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-mule")

# Load tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-mule")
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,               # single regression output
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/0-LLAMA3SP-mule")
model.eval()

# Prepare input text (20 tokens matches the training sequence length)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

# Get prediction (a single continuous story point estimate)
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
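
For scoring several issues at once, the same pipeline extends to batched inputs. A minimal sketch, reusing the tokenizer and model loaded above (the helper name and example titles are illustrative, not part of this card):

import torch

def estimate_story_points(titles, tokenizer, model):
    # Tokenize a batch of issue titles with the same 20-token setup used in training
    inputs = tokenizer(
        titles,
        return_tensors="pt",
        truncation=True,
        max_length=20,
        padding="max_length",
    )
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    with torch.no_grad():
        outputs = model(**inputs)
    # One regression logit per issue
    return outputs.logits.squeeze(-1).tolist()

estimates = estimate_story_points(
    ["Upgrade HTTP connector TLS support", "Fix NPE in batch module"],
    tokenizer,
    model,
)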

Training Details

  • Fine-tuning method: LoRA (Low-Rank Adaptation)
  • Sequence length: 20 tokens
  • Best training epoch: 1 / 20 epochs
  • Batch size: 32
  • Training time: 45.941 seconds
  • Mean Absolute Error (MAE): 2.894
  • Median Absolute Error (MdAE): 2.599 (see the computation sketch below)
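
The MAE and MdAE reported above are the mean and median of the absolute differences between predicted and actual story points. A minimal sketch of the computation (the arrays are illustrative placeholders, not the card's test data):

import numpy as np

# Illustrative predicted and actual story points for a held-out test set
y_pred = np.array([3.1, 5.4, 1.8, 8.2])
y_true = np.array([3.0, 8.0, 1.0, 5.0])

abs_err = np.abs(y_pred - y_true)
mae = abs_err.mean()        # Mean Absolute Error
mdae = np.median(abs_err)   # Median Absolute Error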

Framework versions

  • PEFT 0.14.0