Final logs after training
- session_logs/logs/events.out.tfevents.1739270730.0ede515a0a3f.3966.0 +3 -0
- session_logs/logs/events.out.tfevents.1739271329.0ede515a0a3f.3966.1 +3 -0
- session_logs/logs/events.out.tfevents.1739271818.0ede515a0a3f.19000.0 +3 -0
- session_logs/logs/events.out.tfevents.1739271998.0ede515a0a3f.19000.1 +3 -0
- session_logs/logs/events.out.tfevents.1739272425.0ede515a0a3f.24829.0 +3 -0
- session_logs/logs/events.out.tfevents.1739272604.0ede515a0a3f.24829.1 +3 -0
- session_logs/logs/events.out.tfevents.1739273101.0ede515a0a3f.31096.0 +3 -0
- session_logs/logs/events.out.tfevents.1739273280.0ede515a0a3f.31096.1 +3 -0
- session_logs/logs/events.out.tfevents.1739273723.0ede515a0a3f.37008.0 +3 -0
- session_logs/logs/events.out.tfevents.1739273902.0ede515a0a3f.37008.1 +3 -0
- session_logs/logs/events.out.tfevents.1739275086.0ede515a0a3f.49309.0 +3 -0
- session_logs/logs/events.out.tfevents.1739275266.0ede515a0a3f.49309.1 +3 -0
- session_logs/lora_finetuning.log +7 -6
- session_logs/lora_finetuning_report.pdf +0 -0
session_logs/logs/events.out.tfevents.1739270730.0ede515a0a3f.3966.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5bd426cf66c67f0de6143ed757ea734966b2c7a97df84a8f586e20b4b6ac307
+size 9521
session_logs/logs/events.out.tfevents.1739271329.0ede515a0a3f.3966.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad35b7a493e579fca858bd8a1fcda39516b40fe0832e6d3b0a0761f742420746
+size 354
session_logs/logs/events.out.tfevents.1739271818.0ede515a0a3f.19000.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f2ee4e6611ed942c5fb837eb14820cf025bf6411a5dce74d375b757a7ba5bc9
+size 6978
session_logs/logs/events.out.tfevents.1739271998.0ede515a0a3f.19000.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43eb0c7dc32e25e4e2f6af7f322ac15b8485aed7746ee811c26919df22949d9f
+size 354
session_logs/logs/events.out.tfevents.1739272425.0ede515a0a3f.24829.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:167b2ba212fbe23c1a92f24a7a68707c14f5937b91ca2c28427d71b93a26f0b2
+size 6978
session_logs/logs/events.out.tfevents.1739272604.0ede515a0a3f.24829.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:866bbd2f68205a780b2889c1830f3889709a58bf5477739698435ad006b150c3
+size 354
session_logs/logs/events.out.tfevents.1739273101.0ede515a0a3f.31096.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:262ad398bae5d0ad17c9043356fc5572989c114fb384df7d8f44d5f8a3757120
+size 6978
session_logs/logs/events.out.tfevents.1739273280.0ede515a0a3f.31096.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:112d708bf2e5d9c67136cb0c7011a42fa75dcafed2dbfded302173373f896478
+size 354
session_logs/logs/events.out.tfevents.1739273723.0ede515a0a3f.37008.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fcead849bee939d8953c566277e7c74250318cbd6a97a90c8d091b3eb8c13fc
+size 6978
session_logs/logs/events.out.tfevents.1739273902.0ede515a0a3f.37008.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06364e01635a963fd3079d5ba2c2586c7d9b944c625a0ae0a1c3439a1c59e0d0
+size 354
session_logs/logs/events.out.tfevents.1739275086.0ede515a0a3f.49309.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ec59995c4930b1ab965a764f20e6ed6681eef441ef17d68b09be9e83bf11011
+size 6978
session_logs/logs/events.out.tfevents.1739275266.0ede515a0a3f.49309.1
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d3a2422c1f0642daea7ed0b12f8481599a2e5a09bea44f9a1ae065586de5494
+size 354
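Each of the twelve `events.out.tfevents.*` entries above is a three-line Git LFS pointer (version, oid, size) rather than the log itself; `git lfs pull` fetches the actual TensorBoard payloads. As a minimal sketch — assuming the objects have been pulled into a local `session_logs/logs/` checkout and the `tensorboard` package is installed — the scalars recorded in one of these files can be dumped like this:

```python
# Minimal sketch: read scalar metrics out of a TensorBoard event file.
# Assumes the LFS objects have been pulled locally; the path matches the
# first file added in this commit.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

path = "session_logs/logs/events.out.tfevents.1739270730.0ede515a0a3f.3966.0"
acc = EventAccumulator(path)
acc.Reload()  # parse the event file from disk

# List the scalar tags the trainer wrote (e.g. train/loss, eval/loss).
print(acc.Tags()["scalars"])

# Dump every recorded value for each tag as (step, value) pairs.
for tag in acc.Tags()["scalars"]:
    for event in acc.Scalars(tag):
        print(f"{tag} step={event.step} value={event.value}")
```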
session_logs/lora_finetuning.log
CHANGED
@@ -1,11 +1,12 @@
-2025-02-
-2025-02-
-2025-02-
+2025-02-11 11:57:13,327 - Logging initialized for session: 40abea14-9aa3-4fb6-9c26-cbd46aa28aee
+2025-02-11 11:57:14,072 - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk).
+2025-02-11 12:04:07,169 - Using default tokenizer.
+2025-02-11 12:04:08,168 - Hyperparameters: {'output_dir': './lora_finetuned', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': <IntervalStrategy.STEPS: 'steps'>, 'prediction_loss_only': False, 'per_device_train_batch_size': 1, 'per_device_eval_batch_size': 2, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 0.0002, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 0.3, 'num_train_epochs': 1, 'max_steps': 20, 'lr_scheduler_type': <SchedulerType.LINEAR: 'linear'>, 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 10, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': './logs', 'logging_strategy': <IntervalStrategy.STEPS: 'steps'>, 'logging_first_step': False, 'logging_steps': 50, 'logging_nan_inf_filter': True, 'save_strategy': <SaveStrategy.STEPS: 'steps'>, 'save_steps': 100, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': True, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': True, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 10, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': './lora_finetuned', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': AcceleratorConfig(split_batches=False, dispatch_batches=None, even_batches=True, use_seedable_sampler=True, non_blocking=False, gradient_accumulation_kwargs=None, use_configured_state=False), 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': <OptimizerNames.PAGED_ADAMW_8BIT: 'paged_adamw_8bit'>, 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': None, 'hub_model_id': 'Udith-Sandaruwan/Llama-3.1-8B-logs-check', 'hub_strategy': <HubStrategy.EVERY_SAVE: 'every_save'>, 'hub_private_repo': None, 'hub_always_push': False, 'gradient_checkpointing': True, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': 'steps', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': None, 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False, 'distributed_state': Distributed environment: NO
 Num processes: 1
 Process index: 0
 Local process index: 0
 Device: cuda
 , '_n_gpu': 1, '__cached__setup_devices': device(type='cuda', index=0), 'deepspeed_plugin': None}
-2025-02-
-2025-02-
-2025-02-
+2025-02-11 12:04:08,168 - Training details: {'Epochs': 1, 'Training Steps': 20, 'Final Loss': None, 'Final Learning Rate': None, 'Total Training Time (s)': '108.77'}
+2025-02-11 12:04:08,168 - Training metrics: {'epochs': [], 'loss': [], 'learning_rate': [], 'training_time': 108.76663875579834}
+2025-02-11 12:04:08,168 - Evaluation results: {'meteor_scores': {'meteor': 0.10869565217391305}, 'rouge_scores': {'rouge1': 0.0, 'rouge2': 0.0, 'rougeL': 0.0, 'rougeLsum': 0.0}, 'bleu_scores': {'bleu': 0.0, 'precisions': [0.03571428571428571, 0.005128205128205128, 0.0, 0.0], 'brevity_penalty': 1.0, 'length_ratio': 14.0, 'translation_length': 196, 'reference_length': 14}, 'perplexity': 629805440.0}
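The "Hyperparameters" entry in the new log is a verbatim dump of a transformers TrainingArguments object. As a hedged reconstruction — setting only the non-default values visible in the dump and leaving everything else at its library default — the run appears to have been configured roughly as:

```python
# Hedged sketch: TrainingArguments consistent with the 'Hyperparameters'
# dump above. Only non-default values visible in the log are set here.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./lora_finetuned",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=2,
    learning_rate=2e-4,
    max_grad_norm=0.3,
    num_train_epochs=1,
    max_steps=20,              # hard cap; takes precedence over num_train_epochs
    warmup_steps=10,
    logging_dir="./logs",
    logging_steps=50,
    eval_strategy="steps",
    eval_steps=10,
    save_steps=100,
    bf16=True,
    tf32=True,
    optim="paged_adamw_8bit",
    gradient_checkpointing=True,
    remove_unused_columns=False,
    report_to=["tensorboard"],
    push_to_hub=True,
    hub_model_id="Udith-Sandaruwan/Llama-3.1-8B-logs-check",
)
```

Note that `logging_steps=50` never fires within `max_steps=20`, which is consistent with the empty `loss` and `learning_rate` lists in the "Training metrics" line.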
session_logs/lora_finetuning_report.pdf
CHANGED
Binary files a/session_logs/lora_finetuning_report.pdf and b/session_logs/lora_finetuning_report.pdf differ
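One sanity check on the "Evaluation results" line: trainer-style perplexity is exp(mean eval cross-entropy), so the logged 629805440.0 implies an average eval loss of about ln(6.3e8) ≈ 20.3 nats. A minimal sketch of that round trip, using the value straight from the log:

```python
# Minimal sketch: relate the logged perplexity to the underlying eval loss.
# perplexity = exp(mean cross-entropy loss), so loss = ln(perplexity).
import math

perplexity = 629805440.0  # value from the 'Evaluation results' log line
eval_loss = math.log(perplexity)
print(f"implied eval loss: {eval_loss:.2f} nats")  # ~20.26

# Round-trip: recomputing exp(loss) recovers the logged perplexity.
assert math.isclose(math.exp(eval_loss), perplexity, rel_tol=1e-9)
```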