runtime error

Exit code: 1. Reason:

```
config.json: 100%|██████████| 1.73k/1.73k [00:00<00:00, 13.5MB/s]
configuration_deepseek.py: 0%|          | 0.00/10.6k [00:00<?, ?B/s]
configuration_deepseek.py: 100%|██████████| 10.6k/10.6k [00:00<00:00, 63.5MB/s]
A new version of the following files was downloaded from https://huggingface.co/deepseek-ai/DeepSeek-R1:
- configuration_deepseek.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
modeling_deepseek.py: 0%|          | 0.00/75.8k [00:00<?, ?B/s]
modeling_deepseek.py: 100%|██████████| 75.8k/75.8k [00:00<00:00, 235MB/s]
A new version of the following files was downloaded from https://huggingface.co/deepseek-ai/DeepSeek-R1:
- modeling_deepseek.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3640, in from_pretrained
    config.quantization_config = AutoHfQuantizer.merge_quantization_configs(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 181, in merge_quantization_configs
    quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 105, in from_dict
    raise ValueError(
ValueError: Unknown quantization type, got fp8 - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'higgs', 'hqq', 'compressed-tensors', 'fbgemm_fp8', 'torchao', 'bitnet', 'vptq']
```
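The downloads themselves succeed; the failure is the `ValueError` at the end. DeepSeek-R1's `config.json` declares `"quant_method": "fp8"` (the published weights are FP8), and the transformers release installed in the Space predates `fp8` support, which is why it is missing from the supported-types list in the error. The direct fix is to upgrade transformers to a release that recognizes the `fp8` method. Below is a minimal sketch, assuming a new enough transformers is pinned in the Space's `requirements.txt`; the kwargs are illustrative, since the actual `app.py` is not shown beyond line 12:

```python
# Assumes requirements.txt pins a transformers release that lists "fp8"
# among its supported quantization types (check the release notes for
# when that support landed).
from transformers import AutoModelForCausalLM

model_id = "deepseek-ai/DeepSeek-R1"  # taken from the log above

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the repo ships custom configuration/modeling code
    device_map="auto",       # hypothetical kwarg; requires the accelerate package
)
```

Even with a compatible transformers version, note that DeepSeek-R1 is a 671B-parameter checkpoint (hundreds of GB of FP8 weights), so loading it inside a typical Space container will run out of memory regardless.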

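On the "pin a revision" warnings in the log: `from_pretrained` accepts a `revision` argument that fixes both the weights and the remote code files (`configuration_deepseek.py`, `modeling_deepseek.py`) to a single commit, so newly pushed versions are never pulled silently. A sketch; the hash is a placeholder to replace with a real commit id from the model page:

```python
from transformers import AutoModelForCausalLM

# Pin the repository to one commit so a later push can never change what
# the Space downloads and runs. "<commit-sha>" is a placeholder; copy the
# actual commit hash from the Hub's file history.
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1",
    trust_remote_code=True,
    revision="<commit-sha>",
)
```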