runtime error
Exit code: 1. Reason: ou. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565

modeling_t5.py: 100%|██████████| 2.41k/2.41k [00:00<00:00, 13.7MB/s]

A new version of the following files was downloaded from https://huggingface.co/ClueAI/ChatYuan-large-v2:
- modeling_t5.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

pytorch_model.bin: 100%|█████████▉| 3.13G/3.13G [00:07<00:00, 419MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 13, in <module>
    model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True, device_map="auto").eval()
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3706, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3835, in _load_pretrained_model
    raise ValueError(
ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.
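The `ValueError` at the end names two ways out: pass an `offload_folder` to `from_pretrained` so weights that do not fit on the GPU can be spilled to disk, or install `safetensors` if the model publishes weights in that format. A minimal sketch of the first fix, based on the call in the traceback — the `load_kwargs` helper and the `offload` folder name are hypothetical choices, not part of the original app:

```python
# from transformers import AutoModel  # needed for the actual load; left
#                                     # commented out so the sketch runs anywhere

MODEL_PATH = "ClueAI/ChatYuan-large-v2"  # model id from the error log

def load_kwargs(offload_folder="offload"):
    """Assemble the from_pretrained keyword arguments from the traceback,
    plus the offload_folder the ValueError asks for (hypothetical helper)."""
    return {
        "trust_remote_code": True,       # the repo ships custom modeling_t5.py
        "device_map": "auto",            # let accelerate place layers on devices
        "offload_folder": offload_folder,  # disk destination for offloaded weights
    }

# The failing line in app.py would then become:
# model = AutoModel.from_pretrained(MODEL_PATH, **load_kwargs()).eval()
```

The alternative mentioned in the message is simply `pip install safetensors` in the Space's `requirements.txt`; if the repository offers `.safetensors` weights, they can be memory-mapped and the disk-offload folder may not be needed.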