runtime error
Exit code: 1. Reason:
model-00003-of-00003.safetensors: 100%|██████████| 9.69G/9.69G [00:46<00:00, 208MB/s]
Loading checkpoint shards: 100%|██████████| 3/3 [00:00<00:00, 13934.56it/s]
generation_config.json: 100%|██████████| 121/121 [00:00<00:00, 502kB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, device_map="auto")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 316, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5161, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 504, in dispatch_model
    raise ValueError(
ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead.
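The ValueError above is raised by accelerate's dispatch_model: with device_map="auto", every layer of the bf16 checkpoint was assigned to disk because neither the GPU nor CPU RAM in the container could hold it. The usual workaround is to let Transformers route the disk offload through the supported path by passing an offload_folder (or to load a smaller/quantized dtype so less ends up on disk). A minimal sketch of how the failing call in app.py could be changed — the helper name and the "offload" directory are assumptions for illustration; offload_folder and offload_state_dict are real from_pretrained parameters:

```python
def offload_safe_kwargs(offload_dir="offload"):
    """Build from_pretrained kwargs that let accelerate spill layers to disk
    instead of raising the dispatch_model ValueError.
    (Assumption: transformers with accelerate installed, as in the traceback.)"""
    return {
        "torch_dtype": "bfloat16",      # recent transformers also accepts dtype names as strings
        "device_map": "auto",
        "offload_folder": offload_dir,  # required once any layer is mapped to disk
        "offload_state_dict": True,     # reduces the CPU-RAM spike while loading shards
    }

# In app.py this would replace the failing call:
# model = AutoModelForCausalLM.from_pretrained(model_path, **offload_safe_kwargs())
```

If the Space has enough CPU RAM but no GPU, an alternative is dropping device_map="auto" entirely so the model loads on CPU; disk offload is a last resort because per-layer reads from disk make generation very slow.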