torch==2.8.0
torchvision==0.23.0
torchdata==0.11.0
torchao==0.12.0
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.8cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
accelerate==1.9.0
av==15.0.0
certifi==2025.8.3
charset-normalizer==3.4.2

# We cannot use contourpy 1.3.3 yet: it requires Python >= 3.11,
# but Gradio Spaces run Python 3.10, so leave it unpinned for now
# (see the environment-marker sketch at the bottom of this file).
#contourpy==1.3.3
contourpy

cycler==0.12.1
decord==0.6.0
diffusers==0.34.0
einops==0.8.1
filelock==3.13.1
fonttools==4.59.0
fsspec==2024.6.1
hf-xet==1.1.5
huggingface-hub==0.34.3
idna==3.10
imageio==2.37.0
imageio-ffmpeg==0.6.0
importlib_metadata==8.7.0
Jinja2==3.1.4
kiwisolver==1.4.8
loguru==0.7.3
MarkupSafe==2.1.5
matplotlib==3.10.5
mpmath==1.3.0
networkx==3.3
ninja==1.11.1.4
numpy==2.1.2
nvidia-ml-py==12.575.51

# Torch 2.8.0 requires NCCL 2.27.3
nvidia-nccl-cu12==2.27.3
#nvidia-nccl-cu12==2.21.5

# 12.4.127 was requested originally,
# but torch 2.8.0 depends on nvidia-nvjitlink-cu12==12.8.93
nvidia-nvjitlink-cu12==12.8.93

# 12.4.127 was requested originally,
# but torch 2.8.0 depends on nvidia-nvtx-cu12==12.8.90
nvidia-nvtx-cu12==12.8.90

nvitop==1.5.2
opencv-python-headless==4.12.0.88
packaging==25.0
pandas==2.3.1
pillow==11.0.0
protobuf==6.31.1
psutil==7.0.0
pyparsing==3.2.3
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.2
regex==2025.7.34
requests==2.32.4
safetensors==0.5.3
sentencepiece==0.2.0
setuptools==78.1.1
six==1.17.0
sympy==1.13.3
tokenizers==0.21.4
tqdm==4.67.1
transformers==4.54.1

# triton 3.1.0 was requested originally,
# but torch 2.8.0 depends on triton==3.4.0
triton==3.4.0

typing_extensions==4.12.2
tzdata==2025.2
urllib3==2.5.0
wheel==0.45.1
zipp==3.23.0
gradio==5.42.0
sageattention==1.0.6
spaces
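
# Untested sketch for the contourpy pin above: if this file ever has to
# serve both Python 3.10 and >= 3.11 environments, PEP 508 environment
# markers would let pip apply the 1.3.3 pin only where the interpreter
# supports it, instead of leaving the package unpinned everywhere:
#contourpy==1.3.3; python_version >= "3.11"
#contourpy<1.3.3; python_version < "3.11"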