# gpt-neo-125m-finetuned-shakespeare
This model is a fine-tuned version of EleutherAI/gpt-neo-125m on the Shakespare_corpus dataset (see "Training and evaluation data" below). It achieves the following results on the evaluation set:
- Loss: 4.1126
## How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='2nji/gpt-neo-125m-finetuned-shakespeare')
generator("And all that", do_sample=True, min_length=20)
# [{'generated_text': "And all that in heaven is free: Thou bestow'd on God, so to my house, and in the"}]
```
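You can also load the model and tokenizer directly. A minimal sketch using the standard `transformers` Auto classes (the generation settings here are illustrative, not the ones used to produce the sample above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("2nji/gpt-neo-125m-finetuned-shakespeare")
model = AutoModelForCausalLM.from_pretrained("2nji/gpt-neo-125m-finetuned-shakespeare")

# Tokenize a prompt and sample a continuation.
inputs = tokenizer("And all that", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=40)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```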
## Training and evaluation data
This model was fine-tuned on the Shakespare_corpus dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
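These values map directly onto `transformers` `TrainingArguments`. A minimal sketch of the corresponding configuration (the actual training script is not included in this card, so dataset loading and the `Trainer` call are omitted):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above as a TrainingArguments object;
# output_dir is a placeholder, everything else mirrors the list.
training_args = TrainingArguments(
    output_dir="gpt-neo-125m-finetuned-shakespeare",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```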
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 185  | 4.1844          |
| No log        | 2.0   | 370  | 4.1248          |
| 4.0892        | 3.0   | 555  | 4.1126          |
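For reference, validation cross-entropy loss converts to perplexity as `ppl = exp(loss)`, so the final loss of 4.1126 corresponds to a perplexity of roughly 61:

```python
import math

# Perplexity implied by the final validation loss.
print(math.exp(4.1126))  # ≈ 61.1
```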
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0