runtime error

Exit code: 1. Reason:

model.safetensors: 100%|██████████| 268M/268M [00:01<00:00, 242MB/s]
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
tokenizer_config.json: 100%|██████████| 48.0/48.0 [00:00<00:00, 312kB/s]
vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 45.7MB/s]
tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 71.2MB/s]
Map: 100%|██████████| 1000/1000 [00:00<00:00, 2741.01 examples/s]
Map: 100%|██████████| 1000/1000 [00:00<00:00, 2996.93 examples/s]
Downloading builder script: 100%|██████████| 4.20k/4.20k [00:00<00:00, 20.9MB/s]

Untrained model predictions:
----------------------------
It was good. - Negative
Not a fan, don't recommed. - Negative
Better than the first one. - Negative
This is not worth watching even once. - Negative
This one is a pass. - Negative

trainable params: 628,994 || all params: 67,584,004 || trainable%: 0.9307

Traceback (most recent call last):
  File "/home/user/app/app.py", line 2, in <module>
    import sentiment_analysis_finetuning
  File "/home/user/app/sentiment_analysis_finetuning.py", line 103, in <module>
    training_args = TrainingArguments(
TypeError: TrainingArguments.__init__() got an unexpected keyword argument 'evaluation_strategy'
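The traceback points at an API change rather than a bug in the script's logic: in recent transformers releases the `evaluation_strategy` keyword of `TrainingArguments` was renamed to `eval_strategy` (the old name was deprecated and later removed). The direct fix is to rename the keyword at line 103 of sentiment_analysis_finetuning.py, or to pin an older `transformers` version in requirements.txt. If the code must run under both old and new releases, the accepted keyword name can be probed with `inspect.signature`. A minimal sketch of that pattern, using a hypothetical stand-in function in place of `TrainingArguments.__init__` so it runs without transformers installed:

```python
import inspect

def accepted_kwarg(fn, *candidates):
    """Return the first candidate keyword that fn's signature accepts."""
    params = inspect.signature(fn).parameters
    for name in candidates:
        if name in params:
            return name
    raise TypeError(f"none of {candidates} is accepted by {fn!r}")

# Hypothetical stand-in for TrainingArguments on a new transformers
# release, where only the renamed `eval_strategy` keyword exists.
def training_arguments(output_dir, eval_strategy="no"):
    return {"output_dir": output_dir, "eval_strategy": eval_strategy}

# Probe which spelling this version accepts, then pass it dynamically.
key = accepted_kwarg(training_arguments, "eval_strategy", "evaluation_strategy")
args = training_arguments("out", **{key: "epoch"})
```

With the real class the same probe would be `accepted_kwarg(TrainingArguments.__init__, "eval_strategy", "evaluation_strategy")`. Note also that the all-"Negative" predictions above are expected before training: the log warns that the classifier head weights are newly initialized, so the untrained model's outputs are effectively random.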

Container logs:
