runtime error
Exit code: 1.

[Download progress output omitted: tokenizer_config.json, vocab.json, merges.txt, tokenizer.json, config.json, and model.safetensors (499M) all completed at 100%.]

Some weights of RobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: ['pooler.dense.bias', 'pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

Traceback (most recent call last):
  File "/home/user/app/src/app.py", line 12, in <module>
    model.load_state_dict(torch.load("models/best_model.pt", map_location=torch.device("cpu")))
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 1479, in load
    with _open_file_like(f, "rb") as opened_file:
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 759, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 740, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'models/best_model.pt'
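The failure is not the RoBERTa pooler warning but the final line: app.py line 12 calls torch.load("models/best_model.pt", ...) and that relative path does not exist inside the container. Below is a minimal sketch of a more defensive loading step, assuming the checkpoint is either bundled next to app.py or hosted in a Hub model repo; the repo id "your-username/your-model-repo" and the directory layout are placeholders, not taken from the original app.

import os

import torch
from huggingface_hub import hf_hub_download
from transformers import RobertaModel

# Resolve the checkpoint relative to this file instead of the process's
# working directory. "models/best_model.pt" is the path from the traceback;
# adjust if your models/ directory lives elsewhere.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
CKPT_PATH = os.path.join(BASE_DIR, "models", "best_model.pt")

model = RobertaModel.from_pretrained("roberta-base")

if not os.path.exists(CKPT_PATH):
    # Hypothetical fallback: fetch the checkpoint from a Hub repo rather than
    # bundling the file with the Space. The repo id below is a placeholder.
    CKPT_PATH = hf_hub_download(
        repo_id="your-username/your-model-repo",
        filename="best_model.pt",
    )

state_dict = torch.load(CKPT_PATH, map_location=torch.device("cpu"))
model.load_state_dict(state_dict)
model.eval()

Resolving the path against __file__ avoids depending on the container's working directory, which is the usual reason a relative path works locally but raises FileNotFoundError once the app runs inside a Space.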