runtime error
Exit code: 1. Reason: huggingface.co/api/inference-proxy/toge…

Traceback locals (from the container's error output):

    MAX_NUM_ROWS = 50000
    MAX_NUM_TOKENS = 2048
    MODEL = 'deepseek-ai/DeepSeek-R1'
    OLLAMA_BASE_URL = None
    OPENAI_BASE_URL = None
    os = <module 'os' from '/usr/local/lib/python3.10/os.py'>
    rg = <module 'argilla' from '/usr/local/lib/python3.10/site-packages/argilla…'>
    SFT_TASK = 'supervised_fine_tuning'
    TEXTCAT_TASK = 'text_classification'
    TOKENIZER_ID = 'deepseek-ai/DeepSeek-R1'
    VLLM_BASE_URL = None
    warnings = <module 'warnings' from '/usr/local/lib/python3.10/warnings.py'>

ValueError: `HUGGINGFACE_BASE_URL` and `MODEL` cannot be set at the same time. Use a model id for serverless inference and a base URL dedicated to Hugging Face Inference Endpoints.
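The ValueError is a configuration conflict: the Space has both `HUGGINGFACE_BASE_URL` and `MODEL` set, and the two are mutually exclusive. Below is a minimal sketch of the kind of check that raises this error, assuming both values are read from environment variables as the locals above suggest; the variable names come from the error message, and the Space's actual loading code may differ.

    import os

    # Assumed environment-variable configuration (names taken from the error
    # message and the traceback locals; the real loading code may differ).
    HUGGINGFACE_BASE_URL = os.getenv("HUGGINGFACE_BASE_URL")  # dedicated Inference Endpoint URL
    MODEL = os.getenv("MODEL")  # model id for serverless inference, e.g. 'deepseek-ai/DeepSeek-R1'

    if HUGGINGFACE_BASE_URL and MODEL:
        # Both values target the Hugging Face backend, so only one may be set:
        # a base URL selects a dedicated Inference Endpoint, while a model id
        # selects serverless inference.
        raise ValueError(
            "`HUGGINGFACE_BASE_URL` and `MODEL` cannot be set at the same time. "
            "Use a model id for serverless inference and a base URL dedicated to "
            "Hugging Face Inference Endpoints."
        )

Given the error message, one way to resolve it is to keep only one of the two settings: either keep `MODEL` (here 'deepseek-ai/DeepSeek-R1') and remove `HUGGINGFACE_BASE_URL` to use serverless inference, or set `HUGGINGFACE_BASE_URL` to a dedicated Inference Endpoint and remove `MODEL`.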