gpt2-medium-wikitext

This model is a fine-tuned version of gpt2-medium on the WikiText dataset. It achieves the following results on the evaluation set:

  • Loss: 3.1671
  • Accuracy: 0.4217
  • Perplexity: 23.7377
  • BLEU: 0.1460
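
The reported perplexity is consistent with the standard relation perplexity = exp(cross-entropy loss). A quick sanity check in Python:

```python
import math

eval_loss = 3.1671  # reported evaluation loss (rounded to 4 decimals)
print(math.exp(eval_loss))  # ~23.74, matching the reported 23.7377 up to loss rounding
```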

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 5
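
For reference, these values map onto the Hugging Face TrainingArguments API roughly as follows. This is a minimal sketch, not the original training script: the output_dir is a placeholder, and the batch sizes are interpreted as per-device values (the card does not record device count or gradient accumulation).

```python
from transformers import TrainingArguments

# Sketch of the recorded hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="gpt2-medium-wikitext",
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",        # AdamW, PyTorch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```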

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Perplexity | BLEU   |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|
| 6.0809        | 0.2806 | 500  | 5.9580          | 0.1883   | 386.8254   | 0.0333 |
| 5.0644        | 0.5612 | 1000 | 4.9191          | 0.2623   | 136.8761   | 0.0651 |
| 4.3331        | 0.8418 | 1500 | 4.2124          | 0.3226   | 67.5163    | 0.0890 |
| 3.9451        | 1.1223 | 2000 | 3.8835          | 0.3532   | 48.5942    | 0.1090 |
| 3.7568        | 1.4029 | 2500 | 3.7051          | 0.3684   | 40.6559    | 0.1226 |
| 3.6478        | 1.6835 | 3000 | 3.5827          | 0.3787   | 35.9710    | 0.1311 |
| 3.5435        | 1.9641 | 3500 | 3.4940          | 0.3877   | 32.9179    | 0.1343 |
| 3.4222        | 2.2447 | 4000 | 3.4292          | 0.3936   | 30.8527    | 0.1343 |
| 3.3604        | 2.5253 | 4500 | 3.3728          | 0.3990   | 29.1601    | 0.1414 |
| 3.3288        | 2.8058 | 5000 | 3.3269          | 0.4038   | 27.8518    | 0.1381 |
| 3.2074        | 3.0864 | 5500 | 3.2887          | 0.4079   | 26.8092    | 0.1423 |
| 3.2007        | 3.3670 | 6000 | 3.2605          | 0.4115   | 26.0632    | 0.1464 |
| 3.1787        | 3.6476 | 6500 | 3.2328          | 0.4140   | 25.3497    | 0.1428 |
| 3.1529        | 3.9282 | 7000 | 3.2085          | 0.4166   | 24.7424    | 0.1425 |
| 3.0849        | 4.2088 | 7500 | 3.1921          | 0.4184   | 24.3384    | 0.1430 |
| 3.0471        | 4.4893 | 8000 | 3.1796          | 0.4202   | 24.0366    | 0.1428 |
| 3.0569        | 4.7699 | 8500 | 3.1671          | 0.4217   | 23.7377    | 0.1460 |
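
The final checkpoint can be tried with the standard text-generation pipeline. A minimal sketch, assuming the checkpoint is available locally or on the Hub; the model id and prompt below are placeholders:

```python
from transformers import pipeline

# Placeholder model id; substitute the actual local path or Hub repo of this checkpoint.
generator = pipeline("text-generation", model="gpt2-medium-wikitext")
output = generator("The history of natural language processing", max_new_tokens=50)
print(output[0]["generated_text"])
```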

Framework versions

  • Transformers 4.49.0
  • PyTorch 2.6.0+cu124
  • Datasets 3.3.2
  • Tokenizers 0.21.0