LLAMA 3 Story Point Estimator - mesos

This model is fine-tuned on issue descriptions from the mesos project and evaluated on mesos issues for story point estimation.

Model Details

  • Base Model: LLAMA 3.2 1B

  • Training Project: mesos

  • Test Project: mesos

  • Task: Story Point Estimation (Regression)

  • Architecture: PEFT (LoRA)

  • Input: Issue titles

  • Output: Story point estimation (continuous value)
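
These details can be checked directly against the published adapter configuration. A quick sketch (only the repo id from this card is used; the comments show the expected values):

from peft import PeftConfig

cfg = PeftConfig.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-mesos")
print(cfg.peft_type)                # expected: LORA
print(cfg.base_model_name_or_path)  # expected: the LLAMA 3.2 1B base model
print(cfg.task_type)                # expected: SEQ_CLS (regression head via num_labels=1)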

Usage

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Load the PEFT adapter config to locate the base model
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-mesos")

# Load tokenizer and base model with a single-output regression head
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/0-LLAMA3SP-mesos")
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map='auto'
)

# Attach the fine-tuned LoRA adapter weights
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/0-LLAMA3SP-mesos")
model.eval()

# Prepare input text (20-token sequences, matching training)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
inputs = {k: v.to(model.device) for k, v in inputs.items()}

# Get prediction: the single logit is the story point estimate
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
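
To score several issues at once, the same pipeline runs in batch. This is a minimal sketch assuming the model and tokenizer loaded above; the issue titles are invented placeholders:

# Batch estimation (assumes `model` and `tokenizer` from the snippet above)
titles = [
    "Fix flaky test in the master allocator",  # placeholder examples
    "Add documentation for agent flags",
]
batch = tokenizer(titles, return_tensors="pt", truncation=True, max_length=20, padding="max_length")
batch = {k: v.to(model.device) for k, v in batch.items()}
with torch.no_grad():
    logits = model(**batch).logits  # shape: (batch_size, 1)
for title, sp in zip(titles, logits.squeeze(-1).tolist()):
    print(f"{sp:5.2f}  {title}")

Since the head emits a single continuous value, estimates can be rounded to the nearest point on a team's story point scale if discrete values are needed.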

Training Details

  • Fine-tuning method: LoRA (Low-Rank Adaptation)
  • Sequence length: 20 tokens
  • Best training epoch: 11 / 20 epochs
  • Batch size: 32
  • Training time: 568.656 seconds
  • Mean Absolute Error (MAE): 1.914
  • Median Absolute Error (MdAE): 1.722
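
The card does not include the training script, but a setup along these lines can be reconstructed with transformers and peft. The sketch below is a minimal, toy-data illustration: only the sequence length (20), batch size (32), and epoch count (20) come from the list above; the base-model repo id, LoRA rank, alpha, dropout, target modules, and toy dataset are assumptions.

from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-3.2-1B"  # assumed repo id for the LLAMA 3.2 1B base
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token

model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)
model.config.pad_token_id = tokenizer.pad_token_id  # num_labels=1 -> regression (MSE loss)

# LoRA adapter; r, alpha, dropout and target modules are assumed values
lora = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                  lora_dropout=0.1, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)

# Toy dataset standing in for the mesos issues; 20-token sequences as above
data = Dataset.from_dict({"text": ["Example issue title"], "label": [3.0]})
data = data.map(lambda e: tokenizer(e["text"], truncation=True,
                                    max_length=20, padding="max_length"))

# 20 epochs, batch size 32 per the card; keep the best epoch (11/20 in this run)
args = TrainingArguments(output_dir="llama3sp-mesos", num_train_epochs=20,
                         per_device_train_batch_size=32,
                         eval_strategy="epoch", save_strategy="epoch",
                         load_best_model_at_end=True)

trainer = Trainer(model=model, args=args, train_dataset=data, eval_dataset=data)
trainer.train()

The reported MAE and MdAE would then be computed on the held-out mesos test split as the mean and the median, respectively, of the absolute differences between predicted and true story points.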

Framework versions

  • PEFT 0.14.0