HUGS is optimized for a wide variety of hardware accelerators for ML inference, and support across accelerator families and providers will continue to grow.
NVIDIA GPUs are widely used for machine learning and AI applications, offering high performance and specialized hardware for deep learning tasks. NVIDIA’s CUDA platform provides a robust ecosystem for GPU-accelerated computing.
Supported device(s):
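Before deploying on NVIDIA hardware, it can be useful to confirm that the CUDA stack actually sees a GPU on the host. The snippet below is a minimal sketch using standard PyTorch calls (not a HUGS-specific API), and it assumes a CUDA-enabled build of PyTorch is installed.

```python
# Minimal sketch: check which NVIDIA GPUs are visible through CUDA.
# Assumes a CUDA build of PyTorch; this is generic PyTorch, not part of HUGS.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA-capable NVIDIA GPU detected.")
```

If no device is reported here, the container runtime will not be able to expose a GPU to the inference service either, so this is a quick first check when troubleshooting a deployment.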
AMD GPUs provide strong competition in the AI and machine learning space, offering high-performance computing capabilities with their CDNA architecture. AMD’s ROCm (Radeon Open Compute) platform enables GPU-accelerated computing on Linux systems.
Supported device(s):
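A similar sanity check applies on AMD hardware. The sketch below assumes a ROCm build of PyTorch, which exposes AMD GPUs through the same torch.cuda API (backed by HIP) and sets torch.version.hip; it is a generic check, not a HUGS interface.

```python
# Minimal sketch: check whether a ROCm build of PyTorch can see an AMD GPU.
# On ROCm builds, AMD devices appear via the torch.cuda API and
# torch.version.hip is set instead of torch.version.cuda.
import torch

if getattr(torch.version, "hip", None) and torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"AMD GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No ROCm-visible AMD GPU detected (or this is not a ROCm build of PyTorch).")
```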
Support for additional accelerator families is coming soon.