---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- kanishka/babylm2-rewritten-clean-spacy_no-num-adj
metrics:
- accuracy
model-index:
- name: opt-babylm2-rewritten-clean-spacy_no-num-adj-earlystop-bpe_seed-211_1e-3
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: kanishka/babylm2-rewritten-clean-spacy_no-num-adj
      type: kanishka/babylm2-rewritten-clean-spacy_no-num-adj
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4782639238952008
---

# opt-babylm2-rewritten-clean-spacy_no-num-adj-earlystop-bpe_seed-211_1e-3

This model was trained from scratch on the kanishka/babylm2-rewritten-clean-spacy_no-num-adj dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6884
- Accuracy: 0.4783

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 4.0839        | 1.0     | 2225  | 3.8319          | 0.3594   |
| 3.4456        | 2.0     | 4450  | 3.3209          | 0.4073   |
| 3.1244        | 3.0     | 6675  | 3.1047          | 0.4285   |
| 2.9574        | 4.0     | 8900  | 3.0003          | 0.4397   |
| 2.8363        | 5.0     | 11125 | 2.9326          | 0.4464   |
| 2.7798        | 6.0     | 13350 | 2.8933          | 0.4502   |
| 2.7383        | 7.0     | 15575 | 2.8661          | 0.4534   |
| 2.706         | 8.0     | 17800 | 2.8462          | 0.4563   |
| 2.6843        | 9.0     | 20025 | 2.8286          | 0.4581   |
| 2.6625        | 10.0    | 22250 | 2.8218          | 0.4589   |
| 2.6482        | 11.0    | 24475 | 2.8130          | 0.4597   |
| 2.6327        | 12.0    | 26700 | 2.8075          | 0.4604   |
| 2.6196        | 13.0    | 28925 | 2.7995          | 0.4610   |
| 2.6254        | 14.0    | 31150 | 2.7951          | 0.4620   |
| 2.6119        | 15.0    | 33375 | 2.7756          | 0.4640   |
| 2.5659        | 16.0    | 35600 | 2.7508          | 0.4678   |
| 2.5146        | 17.0    | 37825 | 2.7269          | 0.4709   |
| 2.4598        | 18.0    | 40050 | 2.7063          | 0.4740   |
| 2.394         | 19.0    | 42275 | 2.6902          | 0.4770   |
| 2.3215        | 19.9913 | 44480 | 2.6884          | 0.4783   |

### Framework versions

- Transformers 4.48.0
- PyTorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0
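
The hyperparameters listed above follow the naming used by the `transformers` Trainer. As a minimal sketch of how they map onto `TrainingArguments` (the actual training script is not part of this card, so the `output_dir` and the per-epoch evaluation strategy below are assumptions):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters reported in this card.
# output_dir and eval_strategy are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="opt-babylm2-rewritten-clean-spacy_no-num-adj-earlystop-bpe_seed-211_1e-3",
    learning_rate=1e-3,
    per_device_train_batch_size=32,   # train_batch_size
    per_device_eval_batch_size=64,    # eval_batch_size
    seed=211,
    gradient_accumulation_steps=8,    # 32 * 8 = 256 total train batch size
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,                        # "Native AMP" mixed precision (requires a CUDA device)
    eval_strategy="epoch",            # assumed: the results table reports one eval per epoch
)
```

The default optimizer of `TrainingArguments` is already `adamw_torch` with betas=(0.9, 0.999) and epsilon=1e-08, matching the settings reported above.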
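
The validation loss reported by the Trainer is the mean per-token cross-entropy in nats, so an evaluation perplexity can be recovered by exponentiating it:

```python
import math

# Perplexity implied by the final validation loss reported above.
val_loss = 2.6884
print(math.exp(val_loss))  # ~14.71
```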
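
The auto-generated card does not include a usage snippet. Assuming the checkpoint is published on the Hub under the repo id below (an assumption; adjust to the actual path), it can be loaded with the standard causal-LM API:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id; replace with the actual path of this checkpoint.
model_id = "kanishka/opt-babylm2-rewritten-clean-spacy_no-num-adj-earlystop-bpe_seed-211_1e-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The child saw a", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```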