---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
  - generated_from_trainer
  - NLP
  - text-to-text
  - Summarization
metrics:
  - rouge
model-index:
  - name: MTSUFall2024SoftwareEngineering
    results: []
datasets:
  - MTSUFall2024SoftwareEngineering/UnitedStatesSenateBillsAndSummaries
language:
  - en
---

# MTSUFall2024SoftwareEngineering

This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the [MTSUFall2024SoftwareEngineering/UnitedStatesSenateBillsAndSummaries](https://huggingface.co/datasets/MTSUFall2024SoftwareEngineering/UnitedStatesSenateBillsAndSummaries) dataset. It achieves the following results on the evaluation set:

- Loss: 1.9830
- Rouge1: 0.2539
- Rouge2: 0.2010
- Rougel: 0.2469
- Rougelsum: 0.2469
- Gen Len: 18.9996
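
The ROUGE values above are on the 0–1 scale reported by the Hugging Face `evaluate` library. As a minimal sketch, scores like these can be reproduced as follows (the prediction/reference strings below are illustrative placeholders, not data from this model):

```python
# Minimal sketch of a ROUGE computation with the `evaluate` library.
# The prediction/reference pairs are placeholders for illustration only.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["this bill amends the internal revenue code"]
references = ["a bill to amend the internal revenue code of 1986"]
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with keys: rouge1, rouge2, rougeL, rougeLsum
```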

## Model description

More information needed

## Intended uses & limitations

This model was fine-tuned for Middle Tennessee State University's Fall 2024 Software Engineering class and is intended for summarizing United States Senate bills.
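
A minimal usage sketch with the `transformers` summarization pipeline is shown below. The checkpoint id is an assumption inferred from this card; substitute the actual repo path if it differs.

```python
# Minimal usage sketch, assuming the repo id "cheaptrix/MTSUFall2024SoftwareEngineering"
# (inferred from this card; replace with the actual checkpoint path).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="cheaptrix/MTSUFall2024SoftwareEngineering",
)

bill_text = (
    "Be it enacted by the Senate and House of Representatives of the United States "
    "of America in Congress assembled, ..."  # placeholder bill text
)
result = summarizer(bill_text, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```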

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 14
- eval_batch_size: 14
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
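
For reference, here is a sketch of how these values map onto `Seq2SeqTrainingArguments` (the output directory is a placeholder; `fp16=True` corresponds to native AMP mixed precision):

```python
# Sketch of the hyperparameters above expressed as Seq2SeqTrainingArguments.
# output_dir is a placeholder; all other values mirror the list in this section.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mtsu-t5-small-bill-summarizer",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=14,
    per_device_eval_batch_size=14,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,  # mixed-precision training (native AMP)
)
```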

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.7481        | 1.0   | 749  | 2.1062          | 0.2552 | 0.1993 | 0.2475 | 0.2475    | 18.9996 |
| 2.3354        | 2.0   | 1498 | 2.0224          | 0.2531 | 0.2000 | 0.2460 | 0.2460    | 18.9996 |
| 2.2351        | 3.0   | 2247 | 1.9929          | 0.2542 | 0.2011 | 0.2471 | 0.2470    | 18.9996 |
| 2.1930        | 4.0   | 2996 | 1.9830          | 0.2539 | 0.2010 | 0.2469 | 0.2469    | 18.9996 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1