# Ministral-8B-Instruct-2410-PsyCourse-fold2
This model is a fine-tuned version of [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) on the course-train-fold1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0318
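
Since the framework versions below list PEFT, this checkpoint is presumably a PEFT adapter (e.g. LoRA) applied on top of the base model rather than a full set of model weights. Here is a minimal inference sketch under that assumption (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Ministral-8B-Instruct-2410"
adapter_id = "chchen/Ministral-8B-Instruct-2410-PsyCourse-fold2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned adapter to the base model (assumes this repo hosts a PEFT adapter).
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```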
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
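
As a reproduction aid, here is a minimal sketch of how these values map onto 🤗 Transformers `TrainingArguments`; the `output_dir` is illustrative, and the actual training script is not published:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Ministral-8B-Instruct-2410-PsyCourse-fold2",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # total train batch size: 1 * 16 = 16
    num_train_epochs=5.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```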
### Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
0.2583 | 0.0770 | 50 | 0.2418 |
0.0854 | 0.1539 | 100 | 0.0695 |
0.061 | 0.2309 | 150 | 0.0586 |
0.0581 | 0.3078 | 200 | 0.0544 |
0.0443 | 0.3848 | 250 | 0.0431 |
0.0398 | 0.4617 | 300 | 0.0469 |
0.0431 | 0.5387 | 350 | 0.0458 |
0.0482 | 0.6156 | 400 | 0.0436 |
0.029 | 0.6926 | 450 | 0.0388 |
0.0294 | 0.7695 | 500 | 0.0368 |
0.0415 | 0.8465 | 550 | 0.0346 |
0.0341 | 0.9234 | 600 | 0.0348 |
0.0303 | 1.0004 | 650 | 0.0353 |
0.0353 | 1.0773 | 700 | 0.0379 |
0.026 | 1.1543 | 750 | 0.0346 |
0.0289 | 1.2312 | 800 | 0.0334 |
0.0274 | 1.3082 | 850 | 0.0345 |
0.0206 | 1.3851 | 900 | 0.0327 |
0.0362 | 1.4621 | 950 | 0.0332 |
0.0305 | 1.5391 | 1000 | 0.0355 |
0.033 | 1.6160 | 1050 | 0.0325 |
0.035 | 1.6930 | 1100 | 0.0368 |
0.0219 | 1.7699 | 1150 | 0.0333 |
0.0199 | 1.8469 | 1200 | 0.0375 |
0.0279 | 1.9238 | 1250 | 0.0330 |
0.0226 | 2.0008 | 1300 | 0.0322 |
0.0178 | 2.0777 | 1350 | 0.0318 |
0.022 | 2.1547 | 1400 | 0.0345 |
0.0098 | 2.2316 | 1450 | 0.0382 |
0.0209 | 2.3086 | 1500 | 0.0340 |
0.015 | 2.3855 | 1550 | 0.0370 |
0.0139 | 2.4625 | 1600 | 0.0360 |
0.0196 | 2.5394 | 1650 | 0.0336 |
0.0205 | 2.6164 | 1700 | 0.0344 |
0.0229 | 2.6933 | 1750 | 0.0329 |
0.0184 | 2.7703 | 1800 | 0.0330 |
0.0192 | 2.8472 | 1850 | 0.0329 |
0.0176 | 2.9242 | 1900 | 0.0332 |
0.0235 | 3.0012 | 1950 | 0.0345 |
0.0092 | 3.0781 | 2000 | 0.0383 |
0.012 | 3.1551 | 2050 | 0.0417 |
0.0077 | 3.2320 | 2100 | 0.0384 |
0.0068 | 3.3090 | 2150 | 0.0393 |
0.0121 | 3.3859 | 2200 | 0.0383 |
0.0077 | 3.4629 | 2250 | 0.0368 |
0.0115 | 3.5398 | 2300 | 0.0389 |
0.0061 | 3.6168 | 2350 | 0.0399 |
0.0132 | 3.6937 | 2400 | 0.0367 |
0.0066 | 3.7707 | 2450 | 0.0380 |
0.0076 | 3.8476 | 2500 | 0.0387 |
0.0087 | 3.9246 | 2550 | 0.0398 |
0.0094 | 4.0015 | 2600 | 0.0399 |
0.0024 | 4.0785 | 2650 | 0.0416 |
0.0077 | 4.1554 | 2700 | 0.0462 |
0.0018 | 4.2324 | 2750 | 0.0455 |
0.0021 | 4.3093 | 2800 | 0.0469 |
0.0026 | 4.3863 | 2850 | 0.0480 |
0.0043 | 4.4633 | 2900 | 0.0489 |
0.0018 | 4.5402 | 2950 | 0.0479 |
0.0034 | 4.6172 | 3000 | 0.0479 |
0.0037 | 4.6941 | 3050 | 0.0481 |
0.0037 | 4.7711 | 3100 | 0.0480 |
0.0036 | 4.8480 | 3150 | 0.0480 |
0.0027 | 4.9250 | 3200 | 0.0480 |
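
The validation loss bottoms out at 0.0318 around epoch 2.08 (step 1350), matching the evaluation loss reported above, and then climbs steadily through epoch 5, a typical overfitting pattern. A small sketch (assuming `matplotlib` is installed) that plots a subset of the checkpoints from the table to make the trend visible:

```python
# Plot roughly every half-epoch checkpoint from the results table above.
import matplotlib.pyplot as plt

epochs   = [0.0770, 0.5387, 1.0004, 1.5391, 2.0777,
            2.5394, 3.0012, 3.5398, 4.0015, 4.9250]
val_loss = [0.2418, 0.0458, 0.0353, 0.0355, 0.0318,
            0.0336, 0.0345, 0.0389, 0.0399, 0.0480]

plt.plot(epochs, val_loss, marker="o")
plt.axvline(2.0777, linestyle="--", color="gray",
            label="best checkpoint (val loss 0.0318)")
plt.xlabel("Epoch")
plt.ylabel("Validation loss")
plt.legend()
plt.tight_layout()
plt.show()
```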
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- PyTorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3