Built with Axolotl

52a9fbb2-5c0d-43ed-ad44-dd3fba884b3f

This model is a fine-tuned version of fxmarty/tiny-random-GemmaForCausalLM on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 12.1835

Model description

More information needed

Intended uses & limitations

More information needed
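
Since no usage documentation is provided, here is a minimal loading sketch. It assumes the adapter weights are published on the Hugging Face Hub as lesso14/52a9fbb2-5c0d-43ed-ad44-dd3fba884b3f on top of the base model named above; note that the base is a tiny, randomly initialized Gemma, so generations will not be meaningful text.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "fxmarty/tiny-random-GemmaForCausalLM"
adapter_id = "lesso14/52a9fbb2-5c0d-43ed-ad44-dd3fba884b3f"  # assumed Hub repo id for this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the PEFT adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```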

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 0.000214
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 140
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: OptimizerNames.ADAMW_BNB (bitsandbytes 8-bit AdamW) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
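
The full Axolotl config is not included in this card, but the optimizer and schedule above can be wired up directly. A minimal sketch, assuming OptimizerNames.ADAMW_BNB resolves to bitsandbytes' 8-bit AdamW (which requires a CUDA-enabled bitsandbytes install):

```python
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, get_cosine_schedule_with_warmup, set_seed

set_seed(140)  # seed listed above

model = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-random-GemmaForCausalLM")

# 8-bit AdamW with the betas/epsilon listed above and no extra arguments.
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=0.000214,
    betas=(0.9, 0.999),
    eps=1e-08,
)

# Cosine decay with 50 warmup steps over the 500 optimizer steps.
# Each optimizer step accumulates gradients over 2 micro-batches of 4,
# giving the total train batch size of 8 listed above.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=50,
    num_training_steps=500,
)
```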

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0000 | 1    | 12.4182         |
| 12.4045       | 0.0008 | 50   | 12.3781         |
| 12.2527       | 0.0015 | 100  | 12.2209         |
| 12.2396       | 0.0023 | 150  | 12.1951         |
| 12.2228       | 0.0031 | 200  | 12.1904         |
| 12.2151       | 0.0038 | 250  | 12.1887         |
| 12.2195       | 0.0046 | 300  | 12.1863         |
| 12.2138       | 0.0054 | 350  | 12.1844         |
| 12.2165       | 0.0061 | 400  | 12.1840         |
| 12.2145       | 0.0069 | 450  | 12.1835         |
| 12.2146       | 0.0077 | 500  | 12.1835         |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1