runtime error
Exit code: 1. Reason:
2025-02-13 09:55:32.668308: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-02-13 09:55:32.670542: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2025-02-13 09:55:32.720879: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2025-02-13 09:55:32.721322: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-02-13 09:55:33.607488: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Downloading config.json:   0%|          | 0.00/452 [00:00<?, ?B/s]
Downloading config.json: 100%|██████████| 452/452 [00:00<00:00, 89.6kB/s]
Traceback (most recent call last):
  File "app.py", line 102, in <module>
    model = AutoModelForQuestionAnswering.from_pretrained('uer/roberta-base-chinese-extractive-qa')
  File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2950, in from_pretrained
    raise EnvironmentError(
OSError: uer/roberta-base-chinese-extractive-qa does not appear to have a file named pytorch_model.bin but there is a file for TensorFlow weights. Use `from_tf=True` to load this model from those weights.
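The last line of the traceback names the fix: the uer/roberta-base-chinese-extractive-qa repository ships only TensorFlow weights (there is no pytorch_model.bin), so the from_pretrained call in app.py needs from_tf=True so that transformers converts those weights at load time. Below is a minimal sketch of the corrected load; the pipeline usage is illustrative, since the rest of app.py is not shown, and on-the-fly conversion requires TensorFlow to be installed (which the log lines above suggest it already is).

    from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

    MODEL_ID = 'uer/roberta-base-chinese-extractive-qa'

    # The repo provides only TensorFlow weights, so ask transformers to
    # convert them on load; this path requires TensorFlow in the environment.
    model = AutoModelForQuestionAnswering.from_pretrained(MODEL_ID, from_tf=True)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

    # Illustrative usage only -- the actual code around app.py line 102 is not shown.
    qa = pipeline('question-answering', model=model, tokenizer=tokenizer)

To avoid converting on every start-up, the converted model can be saved once with model.save_pretrained() and loaded from that local copy afterwards; alternatively, TFAutoModelForQuestionAnswering loads the TensorFlow weights natively if the rest of the app is TensorFlow-based.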