Supported Hardware Providers

HUGS is optimized for a wide variety of accelerators for ML inference, and support across accelerator families and providers will continue to grow.

NVIDIA GPUs

NVIDIA GPUs are widely used for machine learning and AI applications, offering high performance and specialized hardware for deep learning tasks. NVIDIA’s CUDA platform provides a robust ecosystem for GPU-accelerated computing.

Supported device(s):

AMD GPUs

AMD GPUs provide strong competition in the AI and machine learning space, offering high-performance computing capabilities with their CDNA architecture. AMD’s ROCm (Radeon Open Compute) platform enables GPU-accelerated computing on Linux systems.

Supported device(s):

AWS Accelerators (Inferentia/Trainium)

AWS Inferentia2 is a custom-built accelerator designed specifically for high-performance, cost-effective machine learning inference; Trainium is its training-focused counterpart in the same AWS Neuron family.

Supported device(s):
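Before deploying on any of the accelerator families above, it can help to confirm which vendor stack is actually present on the host. The sketch below is illustrative and not part of HUGS; it only checks whether the vendors' standard command-line utilities (`nvidia-smi` for NVIDIA/CUDA, `rocm-smi` for AMD/ROCm, `neuron-ls` for AWS Neuron devices) are on the PATH, which is a heuristic rather than a definitive hardware probe.

```python
import shutil

def detect_accelerator() -> str:
    """Best-effort guess of the host's accelerator family.

    Looks for the vendor CLI tools on PATH; returns the first family
    whose tool is found, or "cpu-only" if none are present.
    """
    checks = [
        ("nvidia-smi", "nvidia"),   # NVIDIA driver/CUDA utility
        ("rocm-smi", "amd"),        # AMD ROCm system management tool
        ("neuron-ls", "aws-neuron") # AWS Neuron SDK device lister
    ]
    for tool, family in checks:
        if shutil.which(tool) is not None:
            return family
    return "cpu-only"

if __name__ == "__main__":
    print(detect_accelerator())
```

A container image built for one family will typically ship only that family's tooling, so a check like this can catch an image/host mismatch early.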

Google TPUs

Coming soon
