---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
model-index:
- name: ModernBERT-base-mask-finetuned-shakespeare
results: []
datasets:
- 2nji/Shakespeare_Corpus
language:
- en
---
# ModernBERT-base-mask-finetuned-shakespeare
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [Shakespeare_Corpus](https://huggingface.co/datasets/2nji/Shakespeare_Corpus) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2340
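- Perplexity: exp(2.2340) ≈ 9.34 (derived from the evaluation loss above)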
## How to use
You can use this model directly with a `fill-mask` pipeline. For each `[MASK]` token in the input, the pipeline returns the top-scoring replacement candidates:
```python
import torch
from pprint import pprint
from transformers import pipeline

pipe = pipeline(
    "fill-mask",
    model="2nji/ModernBERT-base-mask-finetuned-shakespeare",
    torch_dtype=torch.bfloat16,
)

input_text = "Thou [MASK] on [MASK]."
results = pipe(input_text)  # one list of candidates per [MASK] token
pprint(results)
```

This prints one ranked list of candidates per mask:

```python
[[{'score': 0.71875,
   'sequence': '[CLS]Thou art on[MASK].[SEP]',
   'token': 1445,
   'token_str': ' art'},
  {'score': 0.1416015625,
   'sequence': '[CLS]Thou hast on[MASK].[SEP]',
   'token': 16579,
   'token_str': ' hast'},
  {'score': 0.014892578125,
   'sequence': '[CLS]Thou be on[MASK].[SEP]',
   'token': 320,
   'token_str': ' be'},
  {'score': 0.00701904296875,
   'sequence': '[CLS]Thou Art on[MASK].[SEP]',
   'token': 3975,
   'token_str': ' Art'},
  {'score': 0.0042724609375,
   'sequence': '[CLS]Thou call on[MASK].[SEP]',
   'token': 1067,
   'token_str': ' call'}],
 [{'score': 0.1767578125,
   'sequence': "[CLS]Thou[MASK] on't.[SEP]",
   'token': 626,
   'token_str': "'t"},
  {'score': 0.146484375,
   'sequence': '[CLS]Thou[MASK] on me.[SEP]',
   'token': 479,
   'token_str': ' me'},
  {'score': 0.0419921875,
   'sequence': '[CLS]Thou[MASK] on it.[SEP]',
   'token': 352,
   'token_str': ' it'},
  {'score': 0.0419921875,
   'sequence': '[CLS]Thou[MASK] on earth.[SEP]',
   'token': 6149,
   'token_str': ' earth'},
  {'score': 0.03955078125,
   'sequence': '[CLS]Thou[MASK] on him.[SEP]',
   'token': 779,
   'token_str': ' him'}]]
```
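If you need more control than the pipeline offers (for example, inspecting logits or scoring your own candidate words), you can call the model directly. The snippet below is a minimal sketch using the generic `AutoModelForMaskedLM` API, not anything specific to this checkpoint:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "2nji/ModernBERT-base-mask-finetuned-shakespeare"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

inputs = tokenizer("Thou [MASK] on [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate each [MASK] position and print its top-5 candidate tokens
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
for pos in mask_positions:
    top_ids = logits[0, pos].topk(5).indices
    print(pos.item(), [tokenizer.decode(i.item()) for i in top_ids])
```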
## Training and evaluation data
This model was fine-tuned on the [Shakespeare_Corpus](https://huggingface.co/datasets/2nji/Shakespeare_Corpus) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of an equivalent `Trainer` setup follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
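
The hyperparameters above map onto a `Trainer` configuration roughly like the sketch below. The dataset split and the `"text"` column name are assumptions, not taken from the original training script:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForMaskedLM.from_pretrained(base_id)

# Assumption: the corpus has a single "train" split with a "text" column
raw = load_dataset("2nji/Shakespeare_Corpus")
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    remove_columns=raw["train"].column_names,
)
splits = tokenized.train_test_split(test_size=0.1, seed=42)  # held-out eval split is an assumption

args = TrainingArguments(
    output_dir="ModernBERT-base-mask-finetuned-shakespeare",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW with default betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),  # standard 15% random masking
)
trainer.train()
```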
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 197 | 2.3128 |
| No log | 2.0 | 394 | 2.2150 |
| 2.3002 | 3.0 | 591 | 2.2395 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0 |