---
library_name: peft
license: other
base_model: mistralai/Ministral-8B-Instruct-2410
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: Ministral-8B-Instruct-2410-PsyCourse-fold5
  results: []
---

# Ministral-8B-Instruct-2410-PsyCourse-fold5

This model is a fine-tuned version of [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) on the course-train-fold1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0312

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
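For reference, the settings above map roughly onto the following Hugging Face `TrainingArguments`. This is only a sketch: the run itself was driven by LLaMA-Factory, the `output_dir` is a placeholder, and LoRA-specific settings (rank, alpha, target modules) are not recorded in this card.

```python
# Sketch: the hyperparameters listed above expressed as Hugging Face TrainingArguments.
# The actual run used LLaMA-Factory; output_dir is a placeholder and LoRA-specific
# settings (rank, alpha, target modules) are not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ministral-8b-psycourse-lora",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,   # effective train batch size of 16
    num_train_epochs=5.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",              # AdamW with betas=(0.9, 0.999), eps=1e-8
    seed=42,
)
```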
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2581 | 0.0770 | 50 | 0.2417 |
| 0.0853 | 0.1539 | 100 | 0.0695 |
| 0.0606 | 0.2309 | 150 | 0.0584 |
| 0.0581 | 0.3078 | 200 | 0.0543 |
| 0.0439 | 0.3848 | 250 | 0.0428 |
| 0.0405 | 0.4617 | 300 | 0.0461 |
| 0.0428 | 0.5387 | 350 | 0.0445 |
| 0.0485 | 0.6156 | 400 | 0.0433 |
| 0.0286 | 0.6926 | 450 | 0.0406 |
| 0.0287 | 0.7695 | 500 | 0.0394 |
| 0.0426 | 0.8465 | 550 | 0.0351 |
| 0.0338 | 0.9234 | 600 | 0.0351 |
| 0.0301 | 1.0004 | 650 | 0.0366 |
| 0.0339 | 1.0773 | 700 | 0.0370 |
| 0.0269 | 1.1543 | 750 | 0.0356 |
| 0.0276 | 1.2312 | 800 | 0.0345 |
| 0.0293 | 1.3082 | 850 | 0.0336 |
| 0.0216 | 1.3851 | 900 | 0.0339 |
| 0.036 | 1.4621 | 950 | 0.0333 |
| 0.0319 | 1.5391 | 1000 | 0.0361 |
| 0.0312 | 1.6160 | 1050 | 0.0324 |
| 0.0333 | 1.6930 | 1100 | 0.0380 |
| 0.0228 | 1.7699 | 1150 | 0.0331 |
| 0.0217 | 1.8469 | 1200 | 0.0358 |
| 0.0272 | 1.9238 | 1250 | 0.0324 |
| 0.0217 | 2.0008 | 1300 | 0.0318 |
| 0.0175 | 2.0777 | 1350 | 0.0312 |
| 0.021 | 2.1547 | 1400 | 0.0341 |
| 0.009 | 2.2316 | 1450 | 0.0392 |
| 0.0186 | 2.3086 | 1500 | 0.0348 |
| 0.0163 | 2.3855 | 1550 | 0.0395 |
| 0.0123 | 2.4625 | 1600 | 0.0359 |
| 0.0196 | 2.5394 | 1650 | 0.0349 |
| 0.0201 | 2.6164 | 1700 | 0.0352 |
| 0.0223 | 2.6933 | 1750 | 0.0329 |
| 0.0192 | 2.7703 | 1800 | 0.0324 |
| 0.0191 | 2.8472 | 1850 | 0.0324 |
| 0.0162 | 2.9242 | 1900 | 0.0336 |
| 0.0238 | 3.0012 | 1950 | 0.0342 |
| 0.0092 | 3.0781 | 2000 | 0.0381 |
| 0.0105 | 3.1551 | 2050 | 0.0409 |
| 0.0096 | 3.2320 | 2100 | 0.0408 |
| 0.0078 | 3.3090 | 2150 | 0.0412 |
| 0.012 | 3.3859 | 2200 | 0.0405 |
| 0.0074 | 3.4629 | 2250 | 0.0384 |
| 0.0101 | 3.5398 | 2300 | 0.0406 |
| 0.0056 | 3.6168 | 2350 | 0.0397 |
| 0.0114 | 3.6937 | 2400 | 0.0362 |
| 0.0056 | 3.7707 | 2450 | 0.0387 |
| 0.0074 | 3.8476 | 2500 | 0.0389 |
| 0.0089 | 3.9246 | 2550 | 0.0401 |
| 0.0096 | 4.0015 | 2600 | 0.0402 |
| 0.0019 | 4.0785 | 2650 | 0.0422 |
| 0.0074 | 4.1554 | 2700 | 0.0446 |
| 0.0018 | 4.2324 | 2750 | 0.0453 |
| 0.0019 | 4.3093 | 2800 | 0.0468 |
| 0.002 | 4.3863 | 2850 | 0.0483 |
| 0.0045 | 4.4633 | 2900 | 0.0486 |
| 0.002 | 4.5402 | 2950 | 0.0480 |
| 0.0033 | 4.6172 | 3000 | 0.0479 |
| 0.0054 | 4.6941 | 3050 | 0.0484 |
| 0.0055 | 4.7711 | 3100 | 0.0482 |
| 0.0034 | 4.8480 | 3150 | 0.0482 |
| 0.0038 | 4.9250 | 3200 | 0.0481 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
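Because this repository contains a LoRA adapter rather than full model weights, it is loaded on top of the base model with PEFT. The snippet below is a minimal inference sketch; the adapter repo id, dtype, and generation settings are assumptions rather than part of the original training setup.

```python
# Minimal inference sketch: load the base model, then attach this LoRA adapter.
# adapter_id is a placeholder; replace it with the actual Hub repo id or local path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Ministral-8B-Instruct-2410"
adapter_id = "<your-namespace>/Ministral-8B-Instruct-2410-PsyCourse-fold5"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

messages = [{"role": "user", "content": "Your prompt here."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```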