NVIDIA Tensor Core Programming
Introduction
NVIDIA Tensor Cores are dedicated accelerators for general matrix multiplication (GEMM) operations that have been available on NVIDIA GPUs since the Volta architecture. Because artificial intelligence computations are usually dominated by GEMM operations, Tensor Cores are critical for accelerating artificial intelligence applications.
NVIDIA Tensor Core
NVIDIA Tensor Cores specialize in performing GEMM operations in mixed precision, i.e., the GEMM input matrices are in lower precision whereas the GEMM output matrix is in higher precision. Mixed precision training and inference are key techniques for accelerating the training and inference of neural networks.
Because NVIDIA Tensor Cores are specifically designed for GEMM, the GEMM throughput achievable with Tensor Cores is much higher than what can be achieved with NVIDIA CUDA Cores, which are better suited to more general parallel programming.
For the NVIDIA Ampere architecture, each streaming multiprocessor (SM) has 4 Tensor Cores. In particular, the NVIDIA A100 GPU has 108 SMs, which amounts to 432 Tensor Cores in total.
NVIDIA Tensor Cores are fully programmable. The warp-level Tensor Core programming API is declared in the mma.h header under the nvcuda::wmma namespace.
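As an illustration of that warp-level API, the following minimal sketch (not part of the original post; the pointer arguments and leading dimensions are placeholders) shows the typical workflow for one $16 \times 16 \times 16$ FP16 tile with FP32 accumulation: declare fragments, load them, multiply-accumulate, and store the result. All 32 threads of a warp must execute these calls together.

#include <cuda_fp16.h>
#include <mma.h>

// One warp cooperatively computes one 16 x 16 output tile.
// a and b point to the top-left elements of 16 x 16 column-major tiles.
__global__ void wmma_single_tile(__half const* a, __half const* b, float* c,
                                 unsigned int lda, unsigned int ldb,
                                 unsigned int ldc)
{
    nvcuda::wmma::fragment<nvcuda::wmma::matrix_a, 16, 16, 16, __half,
                           nvcuda::wmma::col_major> a_frag;
    nvcuda::wmma::fragment<nvcuda::wmma::matrix_b, 16, 16, 16, __half,
                           nvcuda::wmma::col_major> b_frag;
    nvcuda::wmma::fragment<nvcuda::wmma::accumulator, 16, 16, 16, float> acc_frag;

    nvcuda::wmma::fill_fragment(acc_frag, 0.0f);     // Start accumulation from 0.
    nvcuda::wmma::load_matrix_sync(a_frag, a, lda);  // Warp-wide collective load.
    nvcuda::wmma::load_matrix_sync(b_frag, b, ldb);
    nvcuda::wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // acc += a * b
    nvcuda::wmma::store_matrix_sync(c, acc_frag, ldc,
                                    nvcuda::wmma::mem_col_major);  // Warp-wide store.
}

The full implementation later in this post wraps exactly this sequence in a loop over the K dimension and adds bounds checking and the alpha/beta scaling.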
NVIDIA Tensor Core Programming
Matrix Multiplication Decomposition
NVIDIA CUDA allows the user to program Tensor Core GEMM operations $D = AB + C$ at the warp level. While each Tensor Core can only perform matrix multiplications of specific small sizes for different data types, as discussed in my previous article “CUDA Matrix Multiplication”, a large GEMM can be divided into multiple small GEMMs plus accumulation.
Given a GEMM operation $D = AB + C$, where $D \in \mathbb{R}^{m \times n}$, $A \in \mathbb{R}^{m \times k}$, $B \in \mathbb{R}^{k \times n}$, $C \in \mathbb{R}^{m \times n}$, the matrices could be divided into smaller matrices.
$$A =\begin{bmatrix}A_{1,1}^{d \times d} & A_{1,2}^{d \times d} & \cdots & A_{1,k/d}^{d \times d} \\A_{2,1}^{d \times d} & A_{2,2}^{d \times d} & \cdots & A_{2,k/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\A_{m/d,1}^{d \times d} & A_{m/d,2}^{d \times d} & \cdots & A_{m/d,k/d}^{d \times d} \\\end{bmatrix}$$
$$B =\begin{bmatrix}B_{1,1}^{d \times d} & B_{1,2}^{d \times d} & \cdots & B_{1,n/d}^{d \times d} \\B_{2,1}^{d \times d} & B_{2,2}^{d \times d} & \cdots & B_{2,n/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\B_{k/d,1}^{d \times d} & B_{k/d,2}^{d \times d} & \cdots & B_{k/d,n/d}^{d \times d} \\\end{bmatrix}$$
$$C =\begin{bmatrix}C_{1,1}^{d \times d} & C_{1,2}^{d \times d} & \cdots & C_{1,n/d}^{d \times d} \\C_{2,1}^{d \times d} & C_{2,2}^{d \times d} & \cdots & C_{2,n/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\C_{m/d,1}^{d \times d} & C_{m/d,2}^{d \times d} & \cdots & C_{m/d,n/d}^{d \times d} \\\end{bmatrix}$$
$$D =\begin{bmatrix}D_{1,1}^{d \times d} & D_{1,2}^{d \times d} & \cdots & D_{1,n/d}^{d \times d} \\D_{2,1}^{d \times d} & D_{2,2}^{d \times d} & \cdots & D_{2,n/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\D_{m/d,1}^{d \times d} & D_{m/d,2}^{d \times d} & \cdots & D_{m/d,n/d}^{d \times d} \\\end{bmatrix}$$
Each small matrix in $D$ is computed by accumulating multiple small matrix products.
$$D_{i_m,i_n}^{d \times d} = \sum_{i_k=1}^{k/d} A_{i_m,i_k}^{d \times d} B_{i_k,i_n}^{d \times d}$$
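As a concrete illustration of this decomposition, here is a minimal single-threaded C++ sketch (not from the original post) that computes $D$ tile by tile; for simplicity it assumes row-major storage, $C = 0$, and that $m$, $n$, and $k$ are multiples of the tile size $d$, unlike the column-major convention used in the implementation below.

// D = A * B computed as (m/d) x (n/d) output tiles,
// each accumulated over k/d tile products.
// D is assumed to be zero-initialized by the caller (i.e., C = 0).
void tiled_gemm(float const* A, float const* B, float* D, int m, int n, int k,
                int d)
{
    for (int im = 0; im < m / d; ++im)          // Tile row index of D.
    {
        for (int in = 0; in < n / d; ++in)      // Tile column index of D.
        {
            for (int ik = 0; ik < k / d; ++ik)  // Accumulate A_{im,ik} * B_{ik,in}.
            {
                for (int i = 0; i < d; ++i)
                {
                    for (int j = 0; j < d; ++j)
                    {
                        float acc = 0.0f;
                        for (int p = 0; p < d; ++p)
                        {
                            acc += A[(im * d + i) * k + (ik * d + p)] *
                                   B[(ik * d + p) * n + (in * d + j)];
                        }
                        D[(im * d + i) * n + (in * d + j)] += acc;
                    }
                }
            }
        }
    }
}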
In my previous article “CUDA Matrix Multiplication”, I used CUDA Cores and CUDA shared memory to perform the above mathematics, with each thread block computing one $D_{i_m,i_n}^{d \times d}$. This time, I will instead use Tensor Cores to compute exactly the same mathematics, with each warp computing one $D_{i_m,i_n}^{d \times d}$. More specifically, each warp computes a $16 \times 16 \times 16$ GEMM resulting in a $16 \times 16$ tile in the $D$ matrix, i.e., $d = 16$.
Matrix Multiplication Implementation Using NVIDIA Tensor Core
In this implementation, we will use Tensor Cores to perform GEMM operations using HMMA (half-precision matrix multiply and accumulate) and IMMA (integer matrix multiply and accumulate) instructions. In addition, the four variants of GEMM that involve transposed matrices have been implemented and verified.
In this implementation, we will mainly focus on the matrix multiplication part of the GEMM operation by setting $C = 0$.
#include <cassert>#include <chrono>#include <functional>#include <iomanip>#include <iostream>#include <random>#include <utility>#include <vector>#include <cuda_runtime.h>#include <mma.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)template <typename T>void check(T err, const char* const func, const char* const file, int const line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)void checkLast(const char* const file, int const line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, int num_repeats = 100, int num_warmups = 100){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (int i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (int i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}// All the data in the matrices are stored in a column-major order,// which is the consistent with most of the cuBLAS GEMM APIs.// For matrix A of shape M x N, the leading dimension is M.// For matrix A that is transposed and is of shape N x M,// the leading dimension is N.// Matrix A: M x K, or K x N (if transposed).// Matrix B: K x M, or M x K (if transposed).// Matrix C: M x N.// WMMA_FRAG_LAYOUT_A: nvcuda::wmma::row_major if A is// transposed, otherwise nvcuda::wmma::col_major.// WMMA_FRAG_LAYOUT_B: nvcuda::wmma::row_major if B is// transposed, otherwise nvcuda::wmma::col_major.template <typename T1, typename T2, int WMMA_M, int WMMA_N, int WMMA_K, typename WMMA_FRAG_LAYOUT_A, typename WMMA_FRAG_LAYOUT_B>__global__ void wmma_gemm_a_col_major_b_col_major( T1 const* A, T1 const* B, T2* C, uint32_t m, uint32_t n, uint32_t k, uint32_t lda, uint32_t ldb, uint32_t ldc, bool is_A_transpose, bool is_B_transpose, float alpha, float beta){ // Tile using a 2D grid. // Determine the warp 2D index. uint32_t const warpM{(blockIdx.x * blockDim.x + threadIdx.x) / warpSize}; uint32_t const warpN{blockIdx.y * blockDim.y + threadIdx.y}; // Declare the fragments. nvcuda::wmma::fragment<nvcuda::wmma::matrix_a, WMMA_M, WMMA_N, WMMA_K, T1, WMMA_FRAG_LAYOUT_A> a_frag{}; nvcuda::wmma::fragment<nvcuda::wmma::matrix_b, WMMA_M, WMMA_N, WMMA_K, T1, WMMA_FRAG_LAYOUT_B> b_frag{}; nvcuda::wmma::fragment<nvcuda::wmma::accumulator, WMMA_M, WMMA_N, WMMA_K, T2> acc_frag{}; nvcuda::wmma::fragment<nvcuda::wmma::accumulator, WMMA_M, WMMA_N, WMMA_K, T2> c_frag{}; // Make sure the accumulator starts from 0. nvcuda::wmma::fill_fragment(acc_frag, static_cast<T2>(0)); // Loop over K. for (uint32_t ki{0}; ki < k; ki += WMMA_K) { // Determine the first element of the mma matrices on the linear memory. 
// Matrix A mma matrix uint32_t const matrix_mma_a_row_idx{is_A_transpose ? ki : warpM * WMMA_M}; uint32_t const matrix_mma_a_col_idx{is_A_transpose ? warpM * WMMA_M : ki}; // Matrix B mma matrix uint32_t const matrix_mma_b_row_idx{is_B_transpose ? warpN * WMMA_N : ki}; uint32_t const matrix_mma_b_col_idx{is_B_transpose ? ki : warpN * WMMA_N}; // Bounds checking if (matrix_mma_a_row_idx < (is_A_transpose ? k : m) && matrix_mma_a_col_idx < (is_A_transpose ? m : k) && matrix_mma_b_row_idx < (is_B_transpose ? n : k) && matrix_mma_b_col_idx < (is_B_transpose ? k : n)) { // Determine the memory address of the first element of the mma // matrices. Notice that all the matrices are assumed to be // column-major. Therefore, the indexing is different from the // row-major indexing that we commonly see. T1 const* matrix_mma_a_mptr{A + matrix_mma_a_row_idx + matrix_mma_a_col_idx * lda}; T1 const* matrix_mma_b_mptr{B + matrix_mma_b_row_idx + matrix_mma_b_col_idx * ldb}; // Load the mma matrix inputs. nvcuda::wmma::load_matrix_sync(a_frag, matrix_mma_a_mptr, lda); nvcuda::wmma::load_matrix_sync(b_frag, matrix_mma_b_mptr, ldb); // Perform the matrix multiplication nvcuda::wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag); } } // Load in the current value of c, scale it by beta, and add this our result // scaled by alpha. uint32_t const matrix_mma_c_row_idx{warpM * WMMA_M}; uint32_t const matrix_mma_c_col_idx{warpN * WMMA_N}; if (matrix_mma_c_row_idx < m && matrix_mma_c_col_idx < n) { T2* matrix_mma_c_mptr{C + matrix_mma_c_row_idx + matrix_mma_c_col_idx * ldc}; nvcuda::wmma::load_matrix_sync(c_frag, matrix_mma_c_mptr, ldc, nvcuda::wmma::mem_col_major); // Let the compiler figure out how to do the elementwise operation. // Such elementwise operation can be scaling, accumulation, // quantization, etc. // https://docs.nvidia.com/cuda/archive/12.0.1/cuda-c-programming-guide/#id40 // Be careful when dealing with the integer types. for (uint32_t i = 0; i < c_frag.num_elements; i++) { c_frag.x[i] = alpha * acc_frag.x[i] + beta * c_frag.x[i]; } // Store the output nvcuda::wmma::store_matrix_sync(matrix_mma_c_mptr, c_frag, ldc, nvcuda::wmma::mem_col_major); }}template <typename T1, typename T2>void launch_wmma_mm(T1 const* A, T1 const* B, T2* C, uint32_t m, uint32_t n, uint32_t k, bool is_A_transpose, bool is_B_transpose, cudaStream_t stream){ // Assume there is no padding in our data. uint32_t const lda{is_A_transpose ? k : m}; uint32_t const ldb{is_B_transpose ? n : k}; uint32_t const ldc{m}; float const alpha{1.0f}; float const beta{0.0f}; constexpr int WMMA_M{16}; constexpr int WMMA_N{16}; constexpr int WMMA_K{16}; constexpr int WARP_SIZE{32}; dim3 gridDim; dim3 blockDim; // blockDim.x must be a multple of warpSize // Block size of 128x4 means we have 16 (4x4) warps, // each warp computes a 16x16 output tile, // and a block computes a 64x64 output tile. // Each block has 4x4 warps, totalling 4x4x32 threads. int const num_warps_x = 4; int const num_warps_y = 4; blockDim.x = num_warps_x * WARP_SIZE; blockDim.y = num_warps_y; // Round up. 
gridDim.x = (m + (WMMA_M * num_warps_x - 1)) / (WMMA_M * num_warps_x); gridDim.y = (n + WMMA_N * num_warps_y - 1) / (WMMA_N * num_warps_y); // C = A * B if ((!is_A_transpose) && (!is_B_transpose)) { wmma_gemm_a_col_major_b_col_major<T1, T2, WMMA_M, WMMA_N, WMMA_K, nvcuda::wmma::col_major, nvcuda::wmma::col_major> <<<gridDim, blockDim, 0, stream>>>(A, B, C, m, n, k, lda, ldb, ldc, is_A_transpose, is_B_transpose, alpha, beta); } // C = A^T * B else if ((is_A_transpose) && (!is_B_transpose)) { wmma_gemm_a_col_major_b_col_major<T1, T2, WMMA_M, WMMA_N, WMMA_K, nvcuda::wmma::row_major, nvcuda::wmma::col_major> <<<gridDim, blockDim, 0, stream>>>(A, B, C, m, n, k, lda, ldb, ldc, is_A_transpose, is_B_transpose, alpha, beta); } // C = A * B^T else if ((!is_A_transpose) && (is_B_transpose)) { wmma_gemm_a_col_major_b_col_major<T1, T2, WMMA_M, WMMA_N, WMMA_K, nvcuda::wmma::col_major, nvcuda::wmma::row_major> <<<gridDim, blockDim, 0, stream>>>(A, B, C, m, n, k, lda, ldb, ldc, is_A_transpose, is_B_transpose, alpha, beta); } // C = A^T * B^T else { wmma_gemm_a_col_major_b_col_major<T1, T2, WMMA_M, WMMA_N, WMMA_K, nvcuda::wmma::row_major, nvcuda::wmma::row_major> <<<gridDim, blockDim, 0, stream>>>(A, B, C, m, n, k, lda, ldb, ldc, is_A_transpose, is_B_transpose, alpha, beta); } CHECK_LAST_CUDA_ERROR();}// A and B are column-major matrices.template <typename T1, typename T2>void mm_a_col_major_b_col_major(T1 const* A, T1 const* B, T2* C, uint32_t m, uint32_t n, uint32_t k, uint32_t lda, uint32_t ldb, uint32_t ldc, bool is_A_transpose, bool is_B_transpose){ for (uint32_t ni{0}; ni < n; ++ni) { for (uint32_t mi{0}; mi < m; ++mi) { // Compute C[mi, ni] T2 accum{0}; // C = A * B if ((!is_A_transpose) && (!is_B_transpose)) { for (uint32_t ki{0}; ki < k; ++ki) { // A[mi, ki] * B[ki, ni] accum += A[ki * lda + mi] * B[ni * ldb + ki]; } } // C = A^T * B else if ((is_A_transpose) && (!is_B_transpose)) { for (uint32_t ki{0}; ki < k; ++ki) { // A[ki, mi] * B[ki, ni] accum += A[mi * lda + ki] * B[ni * ldb + ki]; } } // C = A * B^T else if ((!is_A_transpose) && (is_B_transpose)) { for (uint32_t ki{0}; ki < k; ++ki) { // A[mi, ki] * B[ni, ki] accum += A[ki * lda + mi] * B[ki * ldb + ni]; } } // C = A^T * B^T else { for (uint32_t ki{0}; ki < k; ++ki) { // A[ki, mi] * B[ni, ki] accum += A[mi * lda + ki] * B[ki * ldb + ni]; } } C[ni * ldc + mi] = accum; } }}template <typename T1, typename T2>void launch_mm(T1 const* A, T1 const* B, T2* C, uint32_t m, uint32_t n, uint32_t k, bool is_A_transpose, bool is_B_transpose){ // Assume there is no padding in our data. uint32_t const lda{is_A_transpose ? k : m}; uint32_t const ldb{is_B_transpose ? 
n : k}; uint32_t const ldc{m}; mm_a_col_major_b_col_major(A, B, C, m, n, k, lda, ldb, ldc, is_A_transpose, is_B_transpose);}void fill_random_float_values(float* arr, size_t n, std::default_random_engine& e){ std::uniform_real_distribution<float> uniform_dist(-256, 256); for (size_t i{0}; i < n; ++i) { arr[i] = uniform_dist(e); }}void fill_random_int8_values(int8_t* arr, size_t n, std::default_random_engine& e){ std::uniform_int_distribution<int8_t> uniform_dist(-128, 127); for (size_t i{0}; i < n; ++i) { arr[i] = uniform_dist(e); }}void fill_random_int32_values(int32_t* arr, size_t n, std::default_random_engine& e){ std::uniform_int_distribution<int32_t> uniform_dist(-128, 127); for (size_t i{0}; i < n; ++i) { arr[i] = uniform_dist(e); }}void float2half(__half* half_arr, float const* float_arr, size_t n){ for (size_t i{0}; i < n; ++i) { half_arr[i] = __float2half(float_arr[i]); }}template <typename T>float get_avg_abs_diff_ratio(T const* arr_1, T const* arr_2, size_t n){ float sum_abs_diff_ratio{0}; for (size_t i{0}; i < n; ++i) { sum_abs_diff_ratio += std::abs(static_cast<float>(arr_1[i]) - static_cast<float>(arr_2[i])) / std::abs(static_cast<float>(arr_1[i]) + static_cast<float>(arr_2[i])); } return sum_abs_diff_ratio / n;}template <typename T>bool array_equal(T const* arr_1, T const* arr_2, size_t n){ for (size_t i{0}; i < n; ++i) { if (arr_1[i] != arr_2[i]) { return false; } } return true;}void print_test_header(bool is_A_transpose, bool is_B_transpose){ // C = A * B if ((!is_A_transpose) && (!is_B_transpose)) { std::cout << "C = A * B" << std::endl; } // C = A^T * B else if ((is_A_transpose) && (!is_B_transpose)) { std::cout << "C = A^T * B" << std::endl; } // C = A * B^T else if ((!is_A_transpose) && (is_B_transpose)) { std::cout << "C = A * B^T" << std::endl; } // C = A^T * B^T else { std::cout << "C = A^T * B^T" << std::endl; }}int main(){ constexpr int num_repeats{10}; constexpr int num_warmups{10}; uint32_t const matrix_size_m{1024}; uint32_t const matrix_size_n{1024}; uint32_t const matrix_size_k{1024}; std::cout << "Matrix Sizes" << std::endl; std::cout << "M: " << matrix_size_m << std::endl; std::cout << "N: " << matrix_size_n << std::endl; std::cout << "K: " << matrix_size_k << std::endl; std::default_random_engine random_engine(0); cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); // HMMA std::cout << "FP16 HMMA" << std::endl; std::vector<float> matrix_a_float(matrix_size_m * matrix_size_k); std::vector<float> matrix_b_float(matrix_size_k * matrix_size_n); std::vector<__half> matrix_a_half(matrix_size_m * matrix_size_k); std::vector<__half> matrix_b_half(matrix_size_k * matrix_size_n); std::vector<float> matrix_c_float(matrix_size_m * matrix_size_n); std::vector<float> matrix_c_float_reference(matrix_size_m * matrix_size_n); float* h_matrix_a_float{matrix_a_float.data()}; float* h_matrix_b_float{matrix_b_float.data()}; __half* h_matrix_a_half{matrix_a_half.data()}; __half* h_matrix_b_half{matrix_b_half.data()}; float* h_matrix_c_float{matrix_c_float.data()}; float* h_matrix_c_float_reference{matrix_c_float_reference.data()}; fill_random_float_values(h_matrix_a_float, matrix_a_float.size(), random_engine); fill_random_float_values(h_matrix_b_float, matrix_b_float.size(), random_engine); fill_random_float_values(h_matrix_c_float, matrix_c_float.size(), random_engine); fill_random_float_values(h_matrix_c_float_reference, matrix_c_float_reference.size(), random_engine); float2half(h_matrix_a_half, h_matrix_a_float, matrix_a_float.size()); 
float2half(h_matrix_b_half, h_matrix_b_float, matrix_b_float.size()); half *d_matrix_a_half, *d_matrix_b_half; float* d_matrix_c_float; CHECK_CUDA_ERROR(cudaMalloc(&d_matrix_a_half, matrix_size_m * matrix_size_k * sizeof(half))); CHECK_CUDA_ERROR(cudaMalloc(&d_matrix_b_half, matrix_size_k * matrix_size_n * sizeof(half))); CHECK_CUDA_ERROR(cudaMalloc(&d_matrix_c_float, matrix_size_m * matrix_size_n * sizeof(float))); // Copy data from host to device. CHECK_CUDA_ERROR(cudaMemcpy(d_matrix_a_half, h_matrix_a_half, matrix_a_float.size() * sizeof(__half), cudaMemcpyHostToDevice)); CHECK_CUDA_ERROR(cudaMemcpy(d_matrix_b_half, h_matrix_b_half, matrix_b_float.size() * sizeof(__half), cudaMemcpyHostToDevice)); for (bool is_A_transpose : {true, false}) { for (bool is_B_transpose : {true, false}) { print_test_header(is_A_transpose, is_B_transpose); // Compute matrix multiplication reference output using CPU. launch_mm(h_matrix_a_float, h_matrix_b_float, h_matrix_c_float_reference, matrix_size_m, matrix_size_n, matrix_size_k, is_A_transpose, is_B_transpose); // Compute matrix multiplication reference output using CUDA WMMA. launch_wmma_mm(d_matrix_a_half, d_matrix_b_half, d_matrix_c_float, matrix_size_m, matrix_size_n, matrix_size_k, is_A_transpose, is_B_transpose, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaMemcpy(h_matrix_c_float, d_matrix_c_float, matrix_c_float.size() * sizeof(float), cudaMemcpyDeviceToHost)); float const avg_abs_diff_ratio{get_avg_abs_diff_ratio( h_matrix_c_float, h_matrix_c_float_reference, matrix_c_float.size())}; if (avg_abs_diff_ratio > 0.01) { std::cout << "Got high average absolute diff ratio: " << avg_abs_diff_ratio << std::endl; } // Performance measurement. std::function<void(cudaStream_t)> const function_hmma{std::bind( launch_wmma_mm<__half, float>, d_matrix_a_half, d_matrix_b_half, d_matrix_c_float, matrix_size_m, matrix_size_n, matrix_size_k, is_A_transpose, is_B_transpose, std::placeholders::_1)}; float const latency_hmma{measure_performance( function_hmma, stream, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "HMMA Latency: " << latency_hmma << " ms" << std::endl; } } CHECK_CUDA_ERROR(cudaFree(d_matrix_a_half)); CHECK_CUDA_ERROR(cudaFree(d_matrix_b_half)); CHECK_CUDA_ERROR(cudaFree(d_matrix_c_float)); // IMMA std::cout << "INT8 IMMA" << std::endl; std::vector<int8_t> matrix_a_int8(matrix_size_m * matrix_size_k); std::vector<int8_t> matrix_b_int8(matrix_size_k * matrix_size_n); std::vector<int32_t> matrix_c_int32(matrix_size_m * matrix_size_n); std::vector<int32_t> matrix_c_int32_reference(matrix_size_m * matrix_size_n); int8_t* h_matrix_a_int8{matrix_a_int8.data()}; int8_t* h_matrix_b_int8{matrix_b_int8.data()}; int32_t* h_matrix_c_int32{matrix_c_int32.data()}; int32_t* h_matrix_c_int32_reference{matrix_c_int32_reference.data()}; fill_random_int8_values(h_matrix_a_int8, matrix_a_int8.size(), random_engine); fill_random_int8_values(h_matrix_b_int8, matrix_b_int8.size(), random_engine); fill_random_int32_values(h_matrix_c_int32, matrix_c_int32.size(), random_engine); fill_random_int32_values(h_matrix_c_int32_reference, matrix_c_int32_reference.size(), random_engine); // Profile INT8 IMMA without verifying the correctness. 
int8_t *d_matrix_a_int8, *d_matrix_b_int8; int32_t* d_matrix_c_int32; CHECK_CUDA_ERROR(cudaMalloc( &d_matrix_a_int8, matrix_size_m * matrix_size_k * sizeof(int8_t))); CHECK_CUDA_ERROR(cudaMalloc( &d_matrix_b_int8, matrix_size_k * matrix_size_n * sizeof(int8_t))); CHECK_CUDA_ERROR(cudaMalloc( &d_matrix_c_int32, matrix_size_m * matrix_size_n * sizeof(int32_t))); CHECK_CUDA_ERROR(cudaMemcpy(d_matrix_a_int8, h_matrix_a_int8, matrix_a_int8.size() * sizeof(int8_t), cudaMemcpyHostToDevice)); CHECK_CUDA_ERROR(cudaMemcpy(d_matrix_b_int8, h_matrix_b_int8, matrix_b_int8.size() * sizeof(int8_t), cudaMemcpyHostToDevice)); for (bool is_A_transpose : {true, false}) { for (bool is_B_transpose : {true, false}) { print_test_header(is_A_transpose, is_B_transpose); // Compute matrix multiplication reference output using CPU. launch_mm(h_matrix_a_int8, h_matrix_b_int8, h_matrix_c_int32_reference, matrix_size_m, matrix_size_n, matrix_size_k, is_A_transpose, is_B_transpose); // Compute matrix multiplication reference output using CUDA WMMA. launch_wmma_mm(d_matrix_a_int8, d_matrix_b_int8, d_matrix_c_int32, matrix_size_m, matrix_size_n, matrix_size_k, is_A_transpose, is_B_transpose, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaMemcpy(h_matrix_c_int32, d_matrix_c_int32, matrix_c_int32.size() * sizeof(int32_t), cudaMemcpyDeviceToHost)); // Integer matrix multiplications from CPU and CUDA should be // bitwise identical. assert(array_equal(h_matrix_c_int32, h_matrix_c_int32_reference, matrix_c_int32.size())); // Performance measurement. std::function<void(cudaStream_t)> const function_imma{ std::bind(launch_wmma_mm<int8_t, int32_t>, d_matrix_a_int8, d_matrix_b_int8, d_matrix_c_int32, matrix_size_m, matrix_size_n, matrix_size_k, is_A_transpose, is_B_transpose, std::placeholders::_1)}; float const latency_imma{measure_performance( function_imma, stream, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "IMMA Latency: " << latency_imma << " ms" << std::endl; } } CHECK_CUDA_ERROR(cudaFree(d_matrix_a_int8)); CHECK_CUDA_ERROR(cudaFree(d_matrix_b_int8)); CHECK_CUDA_ERROR(cudaFree(d_matrix_c_int32)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream));}
None of the transposed matrix multiplication implementations actually transposes the matrices in memory. Instead, we use the row-major versus column-major trick introduced in my previous article “Row-Major VS Column-Major”.
We also observed that, for matrices stored in column-major order, $C = A^{\top}B$ is the fastest and $C = AB^{\top}$ is the slowest among the GEMM implementations using HMMA and IMMA instructions on an NVIDIA RTX 3090 GPU.
$ nvcc mma.cu -o mma --gpu-architecture=compute_86
$ ./mma
Matrix Sizes
M: 1024
N: 1024
K: 1024
FP16 HMMA
C = A^T * B^T
HMMA Latency: 0.177 ms
C = A^T * B
HMMA Latency: 0.169 ms
C = A * B^T
HMMA Latency: 0.189 ms
C = A * B
HMMA Latency: 0.177 ms
INT8 IMMA
C = A^T * B^T
IMMA Latency: 0.129 ms
C = A^T * B
IMMA Latency: 0.090 ms
C = A * B^T
IMMA Latency: 0.170 ms
C = A * B
IMMA Latency: 0.129 ms
Conclusions
NVIDIA Tensor Cores are programmable and can be used for accelerating computations that are dominated by GEMM operations.
Row-Major VS Column-Major
Introduction
In computing, row-major order and column-major order are two ways of storing multidimensional arrays in linear storage such as random access memory. The following figure demonstrates the row-major order and the column-major order for a 2D matrix.
In this blog post, I would like to discuss the difference between the row-major order and the column-major order, and their consequences for matrix multiplication performance.
Row-Major VS Column-Major
Given a matrix $A$ of shape $(M, N)$, if it is stored in row-major order, its leading dimension is $N$, and if it is stored in column-major order, its leading dimension is $M$.
To read $A^{\top}$ from the same piece of the memory in which $A$ was stored in row-major order with a leading dimension of $N$, we could just treat the matrix in the memory as if it were stored in column-major order and the leading dimension is still $N$.
To read $A^{\top}$ from the same piece of the memory in which $A$ was stored in column-major order with a leading dimension of $M$, we could just treat the matrix in the memory as if it were stored in row-major order and the leading dimension is still $M$.
For example, we have a matrix $A$,
$$\begin{align}A &=\begin{bmatrix}1 & 2 & 3 \\4 & 5 & 6 \\\end{bmatrix} \\\end{align}$$
If $A$ is stored in row-major order, the matrix values in linear memory are $[1, 2, 3, 4, 5, 6]$. If $A$ is stored in column-major order, the matrix values in linear memory are $[1, 4, 2, 5, 3, 6]$.
The transpose of $A$, $A^{\top}$, is
$$\begin{align}A^{\top} &=\begin{bmatrix}1 & 4 \\2 & 5 \\3 & 6 \\\end{bmatrix} \\\end{align}$$
If $A^{\top}$ is stored in row-major order, the matrix values in linear memory are $[1, 4, 2, 5, 3, 6]$. If $A^{\top}$ is stored in column-major order, the matrix values in linear memory are $[1, 2, 3, 4, 5, 6]$.
It is easy to see that $A$ stored in row-major order is exactly the same in memory as $A^{\top}$ stored in column-major order, and $A$ stored in column-major order is exactly the same in memory as $A^{\top}$ stored in row-major order.
For a matrix $A$ stored in row-major order, reading rows of $A$ and reading columns of $A^{\top}$ are fast and cache-friendly, whereas reading columns of $A$ and reading rows of $A^{\top}$ are slow and cache-unfriendly.
For a matrix $A$ stored in column-major order, reading columns of $A$ and reading rows of $A^{\top}$ are fast and cache-friendly, whereas reading rows of $A$ and reading columns of $A^{\top}$ are slow and cache-unfriendly.
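The equivalence can be checked with a small C++ snippet (an illustration, not from the original post): laying out $A$ in row-major order and $A^{\top}$ in column-major order, both with leading dimension $N$, produces an identical linear buffer.

#include <cassert>

int main()
{
    // A is 2 x 3: [1 2 3; 4 5 6].
    int const M = 2;
    int const N = 3;
    float const A[2][3] = {{1, 2, 3}, {4, 5, 6}};

    float a_row_major[6];   // A (M x N) in row-major order, leading dimension N.
    float at_col_major[6];  // A^T (N x M) in column-major order, leading dimension N.
    for (int i = 0; i < M; ++i)
    {
        for (int j = 0; j < N; ++j)
        {
            a_row_major[i * N + j] = A[i][j];   // Element (i, j) of A.
            at_col_major[j + i * N] = A[i][j];  // Element (j, i) of A^T.
        }
    }

    // Both buffers contain [1, 2, 3, 4, 5, 6]: the same memory can be read as
    // row-major A or as column-major A^T without any data movement.
    for (int idx = 0; idx < M * N; ++idx)
    {
        assert(a_row_major[idx] == at_col_major[idx]);
    }
    return 0;
}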
Matrix Multiplication
The way matrices are stored in memory affects the performance of matrix multiplication on many processors, such as CPUs and GPUs. Depending on whether the input matrices need to be mathematically transposed, there are four ways of computing a matrix multiplication: $C=AB$, $C=A^{\top}B$, $C=AB^{\top}$, and $C=A^{\top}B^{\top}$. Each of them performs better than the others for certain storage orders of matrices $A$ and $B$, even though the theoretical MACs of the operations remain the same.
$C=AB$
Suppose a matrix $A$ is of shape $(M, K)$ and a matrix $B$ is of shape $(K, N)$. To compute $C = AB$, where $C$ is a matrix of shape $(M, N)$, each element of $C$ is the accumulated sum of products of one row of length $K$ from the matrix $A$ and one column of length $K$ from the matrix $B$.
Depending on the storage ordering of the two matrices, there are four scenarios.
When $A$ is stored in row-major order and $B$ is stored in column-major order, reading rows from $A$ and columns from $B$ is fast because of the caching mechanism of modern processors, and faster reads result in better performance given the same amount of math to compute.
Therefore, the matrix multiplication $C=AB$ is most suitable for the situation where $A$ is stored in row-major order and $B$ is stored in column-major order.
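The following single-threaded sketch (illustrative only, not the benchmark code used later in this post) makes the access pattern explicit: with $A$ row-major and $B$ column-major, the innermost loop over ki reads both matrices with unit stride.

// C = A * B with A (M x K) in row-major order and B (K x N) in column-major order.
void mm_a_row_major_b_col_major(float const* A, float const* B, float* C,
                                int M, int N, int K)
{
    for (int mi = 0; mi < M; ++mi)
    {
        for (int ni = 0; ni < N; ++ni)
        {
            float accum = 0.0f;
            for (int ki = 0; ki < K; ++ki)
            {
                // A[mi, ki] is contiguous in ki (row-major), and
                // B[ki, ni] is contiguous in ki (column-major),
                // so both reads are sequential and cache-friendly.
                accum += A[mi * K + ki] * B[ni * K + ki];
            }
            C[mi * N + ni] = accum;  // C stored in row-major order here.
        }
    }
}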
$C=A^{\top}B$
Suppose a matrix $A$ is of shape $(K, M)$ and a matrix $B$ is of shape $(K, N)$. To compute $C = A^{\top}B$, where $C$ is a matrix of shape $(M, N)$, each element of $C$ is the accumulated sum of products of one column of length $K$ from the matrix $A$ and one column of length $K$ from the matrix $B$.
Depending on the storage ordering of the two matrices, there are four scenarios.
When $A$ is stored in column-major order and $B$ is stored in column-major order, reading columns from both $A$ and $B$ is fast because of the caching mechanism of modern processors, and faster reads result in better performance given the same amount of math to compute.
Therefore, the matrix multiplication $C=A^{\top}B$ is most suitable for the situation where both $A$ and $B$ are stored in column-major order.
$C=AB^{\top}$
Suppose a matrix $A$ is of shape $(M, K)$ and a matrix $B$ is of shape $(N, K)$. To compute $C = AB^{\top}$, where $C$ is a matrix of shape $(M, N)$, each element of $C$ is the accumulated sum of products of one row of length $K$ from the matrix $A$ and one row of length $K$ from the matrix $B$.
Depending on the storage ordering of the two matrices, there are four scenarios.
When $A$ is stored in row-major order and $B$ is stored in row-major order, reading rows from both $A$ and $B$ is fast because of the caching mechanism of modern processors, and faster reads result in better performance given the same amount of math to compute.
Therefore, the matrix multiplication $C=AB^{\top}$ is most suitable for the situation where both $A$ and $B$ are stored in row-major order.
$C=A^{\top}B^{\top}$
Suppose a matrix $A$ is of shape $(K, M)$ and a matrix $B$ is of shape $(N, K)$. To compute $C = A^{\top}B^{\top}$, where $C$ is a matrix of shape $(M, N)$, each element of $C$ is the accumulated sum of products of one column of length $K$ from the matrix $A$ and one row of length $K$ from the matrix $B$.
Depending on the storage ordering of the two matrices, there are four scenarios.
When $A$ is stored in column-major order and $B$ is stored in row-major order, reading columns from $A$ and rows from $B$ is fast because of the caching mechanism of modern processors, and faster reads result in better performance given the same amount of math to compute.
Therefore, the matrix multiplication $C=A^{\top}B^{\top}$ is most suitable for the situation where $A$ is stored in column-major order and $B$ is stored in row-major order.
Matrix Multiplication Preference
The matrix multiplication preference for different combinations of the storage orders of the matrices being multiplied can be summarized as follows: $C=AB$ prefers $A$ row-major and $B$ column-major, $C=A^{\top}B$ prefers both column-major, $C=AB^{\top}$ prefers both row-major, and $C=A^{\top}B^{\top}$ prefers $A$ column-major and $B$ row-major.
Because all matrices in one software framework usually use the same storage order, only $C = A^{\top}B$ (for column-major storage) and $C = AB^{\top}$ (for row-major storage) are preferred in practice.
Optimizations can reduce the performance gap between the optimal matrix multiplication option and the other options, sometimes even to almost zero, depending on the implementation and the processor.
In addition, it is usually not a good idea to physically transpose a matrix in memory just to use the most performant matrix multiplication option among the four, because the overhead of transposing the matrix might be much larger than the performance difference between the four options, especially when they are all well optimized.
Matrix Multiplication Benchmarks
Additionally, we could verify our analysis using a single-threaded naive C++ implementation of matrix multiplication.
#include <cassert>#include <chrono>#include <cstdint>#include <functional>#include <iomanip>#include <iostream>#include <tuple>#include <utility>#include <vector>template <class T>float measure_performance(std::function<T(void)> bound_function, int num_repeats = 100, int num_warmups = 100){ for (int i{0}; i < num_warmups; ++i) { bound_function(); } std::chrono::steady_clock::time_point time_start{ std::chrono::steady_clock::now()}; for (int i{0}; i < num_repeats; ++i) { bound_function(); } std::chrono::steady_clock::time_point time_end{ std::chrono::steady_clock::now()}; auto time_elapsed{std::chrono::duration_cast<std::chrono::milliseconds>( time_end - time_start) .count()}; float latency{time_elapsed / static_cast<float>(num_repeats)}; return latency;}// A and B are column-major matrices.template <typename T>void mm_a_col_major_b_col_major(T const* A, T const* B, T* C, uint32_t m, uint32_t n, uint32_t k, uint32_t lda, uint32_t ldb, uint32_t ldc, bool is_A_transpose, bool is_B_transpose){ for (uint32_t ni{0}; ni < n; ++ni) { for (uint32_t mi{0}; mi < m; ++mi) { // Compute C[mi, ni] T accum{0}; // A * B if ((!is_A_transpose) && (!is_B_transpose)) { for (uint32_t ki{0}; ki < k; ++ki) { // A[mi, ki] * B[ki, ni] accum += A[ki * lda + mi] * B[ni * ldb + ki]; } } // A^T * B else if ((is_A_transpose) && (!is_B_transpose)) { for (uint32_t ki{0}; ki < k; ++ki) { // A[ki, mi] * B[ki, ni] accum += A[mi * lda + ki] * B[ni * ldb + ki]; } } // A * B^T else if ((!is_A_transpose) && (is_B_transpose)) { for (uint32_t ki{0}; ki < k; ++ki) { // A[mi, ki] * B[ni, ki] accum += A[ki * lda + mi] * B[ki * ldb + ni]; } } // A^T * B^T else { for (uint32_t ki{0}; ki < k; ++ki) { // A[ki, mi] * B[ni, ki] accum += A[mi * lda + ki] * B[ki * ldb + ni]; } } C[ni * ldc + mi] = accum; } }}void print_latency(float latency){ std::cout << std::fixed << std::setprecision(3) << "Latency: " << latency << " ms" << std::endl;}int main(){ constexpr uint32_t num_repeats{10}; constexpr uint32_t num_warmups{10}; constexpr uint32_t M{256}; constexpr uint32_t K{256}; constexpr uint32_t N{256}; std::vector<float> matrix_a(M * K); std::vector<float> matrix_b(K * N); std::vector<float> matrix_c(M * N); float const* A{matrix_a.data()}; float const* B{matrix_b.data()}; float* C{matrix_c.data()}; uint32_t const matrix_a_col_major_ld{M}; uint32_t const matrix_a_row_major_ld{K}; uint32_t const matrix_a_transpose_col_major_ld{matrix_a_row_major_ld}; uint32_t const matrix_a_transpose_row_major_ld{matrix_a_col_major_ld}; uint32_t const matrix_b_col_major_ld{K}; uint32_t const matrix_b_row_major_ld{N}; uint32_t const matrix_b_transpose_col_major_ld{matrix_b_row_major_ld}; uint32_t const matrix_b_transpose_row_major_ld{matrix_b_col_major_ld}; uint32_t const matrix_c_col_major_ld{M}; uint32_t const matrix_c_row_major_ld{N}; uint32_t const matrix_c_transpose_col_major_ld{matrix_c_row_major_ld}; uint32_t const matrix_c_transpose_row_major_ld{matrix_c_col_major_ld}; std::function<void(void)> const mm_a_col_major_b_col_major_a_b{ std::bind(mm_a_col_major_b_col_major<float>, A, B, C, M, N, K, matrix_a_col_major_ld, matrix_b_col_major_ld, matrix_c_col_major_ld, false, false)}; std::function<void(void)> const mm_a_col_major_b_col_major_a_transpose_b{ std::bind(mm_a_col_major_b_col_major<float>, A, B, C, M, N, K, matrix_a_transpose_col_major_ld, matrix_b_col_major_ld, matrix_c_col_major_ld, true, false)}; std::function<void(void)> const mm_a_col_major_b_col_major_a_transpose_b_transpose{std::bind( mm_a_col_major_b_col_major<float>, A, B, C, M, N, K, 
matrix_a_transpose_col_major_ld, matrix_b_transpose_col_major_ld, matrix_c_col_major_ld, true, true)}; std::function<void(void)> const mm_a_col_major_b_col_major_a_b_transpose{ std::bind(mm_a_col_major_b_col_major<float>, A, B, C, M, N, K, matrix_a_col_major_ld, matrix_b_transpose_col_major_ld, matrix_c_col_major_ld, false, true)}; std::cout << "C = A * B" << std::endl; float const latency_a_b = measure_performance( mm_a_col_major_b_col_major_a_b, num_repeats, num_warmups); print_latency(latency_a_b); std::cout << "C = A^T * B" << std::endl; float const latency_a_transpose_b = measure_performance( mm_a_col_major_b_col_major_a_transpose_b, num_repeats, num_warmups); print_latency(latency_a_transpose_b); std::cout << "C = A * B^T" << std::endl; float const latency_a_b_transpose = measure_performance( mm_a_col_major_b_col_major_a_b_transpose, num_repeats, num_warmups); print_latency(latency_a_b_transpose); std::cout << "C = A^T * B^T" << std::endl; float const latency_a_transpose_b_transpose = measure_performance(mm_a_col_major_b_col_major_a_transpose_b_transpose, num_repeats, num_warmups); print_latency(latency_a_transpose_b_transpose); assert(latency_a_transpose_b == std::min({latency_a_b, latency_a_transpose_b, latency_a_b_transpose, latency_a_transpose_b_transpose})); assert(latency_a_b_transpose == std::max({latency_a_b, latency_a_transpose_b, latency_a_b_transpose, latency_a_transpose_b_transpose}));}
We can see that, given matrices $A$ and $B$ stored in column-major order, the performance of $C = A^{\top}B$ is the best and the performance of $C = AB^{\top}$ is the worst, as expected.
$ g++ naive_mm.cpp -o naive_mm
$ ./naive_mm
C = A * B
Latency: 45.400 ms
C = A^T * B
Latency: 32.500 ms
C = A * B^T
Latency: 57.800 ms
C = A^T * B^T
Latency: 48.300 ms
Using multi-threaded, highly optimized matrix multiplication implementations, such as the GEMM functions from the cuBLAS library, can almost eliminate the difference between the four options.
#include <cassert>#include <chrono>#include <cstdint>#include <functional>#include <iomanip>#include <iostream>#include <tuple>#include <utility>#include <vector>#include <cublas_v2.h>#include <cuda_runtime.h>#define CHECK_CUBLAS_ERROR(val) checkCuBlas((val), #val, __FILE__, __LINE__)template <typename T>void checkCuBlas(T err, const char* const func, const char* const file, const int line){ if (err != CUBLAS_STATUS_SUCCESS) { std::cerr << "cuBlas Runtime Error at: " << file << ":" << line << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_CUDA_ERROR(val) checkCuda((val), #val, __FILE__, __LINE__)template <typename T>void checkCuda(T err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() checkCudaLast(__FILE__, __LINE__)void checkCudaLast(const char* const file, const int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}float measure_cublas_performance( std::function<cublasStatus_t(void)> bound_cublas_function, cudaStream_t stream, int num_repeats = 100, int num_warmups = 100){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (int i{0}; i < num_warmups; ++i) { CHECK_CUBLAS_ERROR(bound_cublas_function()); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (int i{0}; i < num_repeats; ++i) { CHECK_CUBLAS_ERROR(bound_cublas_function()); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}void print_latency(float latency){ std::cout << std::fixed << std::setprecision(3) << "Latency: " << latency << " ms" << std::endl;}int main(){ constexpr uint32_t num_repeats{100}; constexpr uint32_t num_warmups{100}; constexpr uint32_t M{256}; constexpr uint32_t K{256}; constexpr uint32_t N{256}; float* A{nullptr}; float* B{nullptr}; float* C{nullptr}; CHECK_CUDA_ERROR(cudaMalloc(&A, M * K * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&B, K * N * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&C, M * N * sizeof(float))); uint32_t const matrix_a_col_major_ld{M}; uint32_t const matrix_a_row_major_ld{K}; uint32_t const matrix_a_transpose_col_major_ld{matrix_a_row_major_ld}; uint32_t const matrix_a_transpose_row_major_ld{matrix_a_col_major_ld}; uint32_t const matrix_b_col_major_ld{K}; uint32_t const matrix_b_row_major_ld{N}; uint32_t const matrix_b_transpose_col_major_ld{matrix_b_row_major_ld}; uint32_t const matrix_b_transpose_row_major_ld{matrix_b_col_major_ld}; uint32_t const matrix_c_col_major_ld{M}; uint32_t const matrix_c_row_major_ld{N}; uint32_t const matrix_c_transpose_col_major_ld{matrix_c_row_major_ld}; uint32_t const matrix_c_transpose_row_major_ld{matrix_c_col_major_ld}; cublasHandle_t cublas_handle; cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); CHECK_CUBLAS_ERROR(cublasCreate(&cublas_handle)); CHECK_CUBLAS_ERROR(cublasSetStream(cublas_handle, stream)); float const 
alpha{1.0}; float const beta{0.0}; // cublasSgemm assumes column-major matrices. std::function<cublasStatus_t(void)> const mm_a_col_major_b_col_major_a_b{ std::bind(cublasSgemm, cublas_handle, CUBLAS_OP_N, CUBLAS_OP_N, M, N, K, &alpha, A, matrix_a_col_major_ld, B, matrix_b_col_major_ld, &beta, C, matrix_c_col_major_ld)}; std::function<cublasStatus_t(void)> const mm_a_col_major_b_col_major_a_transpose_b{ std::bind(cublasSgemm, cublas_handle, CUBLAS_OP_T, CUBLAS_OP_N, M, N, K, &alpha, A, matrix_a_transpose_col_major_ld, B, matrix_b_col_major_ld, &beta, C, matrix_c_col_major_ld)}; std::function<cublasStatus_t(void)> const mm_a_col_major_b_col_major_a_transpose_b_transpose{std::bind( cublasSgemm, cublas_handle, CUBLAS_OP_T, CUBLAS_OP_T, M, N, K, &alpha, A, matrix_a_transpose_col_major_ld, B, matrix_b_transpose_col_major_ld, &beta, C, matrix_c_col_major_ld)}; std::function<cublasStatus_t(void)> const mm_a_col_major_b_col_major_a_b_transpose{std::bind( cublasSgemm, cublas_handle, CUBLAS_OP_N, CUBLAS_OP_T, M, N, K, &alpha, A, matrix_a_col_major_ld, B, matrix_b_transpose_col_major_ld, &beta, C, matrix_c_col_major_ld)}; std::cout << "C = A * B" << std::endl; float const latency_a_b = measure_cublas_performance( mm_a_col_major_b_col_major_a_b, stream, num_repeats, num_warmups); print_latency(latency_a_b); std::cout << "C = A^T * B" << std::endl; float const latency_a_transpose_b = measure_cublas_performance(mm_a_col_major_b_col_major_a_transpose_b, stream, num_repeats, num_warmups); print_latency(latency_a_transpose_b); std::cout << "C = A * B^T" << std::endl; float const latency_a_b_transpose = measure_cublas_performance(mm_a_col_major_b_col_major_a_b_transpose, stream, num_repeats, num_warmups); print_latency(latency_a_b_transpose); std::cout << "C = A^T * B^T" << std::endl; float const latency_a_transpose_b_transpose = measure_cublas_performance( mm_a_col_major_b_col_major_a_transpose_b_transpose, stream, num_repeats, num_warmups); print_latency(latency_a_transpose_b_transpose); CHECK_CUDA_ERROR(cudaFree(A)); CHECK_CUDA_ERROR(cudaFree(B)); CHECK_CUDA_ERROR(cudaFree(C)); CHECK_CUBLAS_ERROR(cublasDestroy(cublas_handle)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream));}
With the highly optimized implementations, there is almost no difference between the four options.
$ nvcc cublas_mm.cu -o cublas_mm -lcublas
$ ./cublas_mm
C = A * B
Latency: 0.008 ms
C = A^T * B
Latency: 0.010 ms
C = A * B^T
Latency: 0.009 ms
C = A^T * B^T
Latency: 0.008 ms
CUDA Coalesced Memory Access
Introduction
In CUDA programming, accessing the GPU global memory from a CUDA kernel is usually a factor that affects the CUDA kernel performance. To reduce global memory IO, we would like to reduce the number of global memory accesses by coalescing them and by caching reusable data in the fast shared memory.
In this blog post, I would like to discuss how to coalesce GPU global memory reads and writes, and use an example to show the performance improvement brought by coalescing both.
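As a minimal illustration of what coalescing means (this sketch is not part of the transpose benchmark below), consider two copy kernels that differ only in how a warp's threads map to global memory addresses: in the first, consecutive threads access consecutive addresses, so the hardware can merge a warp's accesses into a few memory transactions; in the second, consecutive threads read addresses that are stride elements apart, so a warp's accesses scatter across many more transactions.

// Coalesced read and write: thread t accesses element t.
__global__ void copy_coalesced(float* dst, float const* src, size_t n)
{
    size_t const idx{blockIdx.x * blockDim.x + threadIdx.x};
    if (idx < n)
    {
        dst[idx] = src[idx];
    }
}

// Uncoalesced read: consecutive threads read elements that are `stride` apart.
__global__ void copy_strided_read(float* dst, float const* src, size_t n,
                                  size_t stride)
{
    size_t const idx{blockIdx.x * blockDim.x + threadIdx.x};
    if (idx < n)
    {
        dst[idx] = src[(idx * stride) % n];
    }
}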
CUDA Matrix Transpose
Implementations
In the following example, I implemented three CUDA kernels for (out-of-place) matrix transpose.
#include <algorithm>#include <cassert>#include <chrono>#include <cstdio>#include <functional>#include <iomanip>#include <iostream>#include <random>#include <vector>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, char const* func, char const* file, int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() check_last(__FILE__, __LINE__)void check_last(char const* file, int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, size_t num_repeats = 100, size_t num_warmups = 100){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (size_t i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (size_t i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}constexpr size_t div_up(size_t a, size_t b) { return (a + b - 1) / b; }template <typename T>__global__ void transpose_read_coalesced(T* output_matrix, T const* input_matrix, size_t M, size_t N){ size_t const j{threadIdx.x + blockIdx.x * blockDim.x}; size_t const i{threadIdx.y + blockIdx.y * blockDim.y}; size_t const from_idx{i * N + j}; if ((i < M) && (j < N)) { size_t const to_idx{j * M + i}; output_matrix[to_idx] = input_matrix[from_idx]; }}template <typename T>__global__ void transpose_write_coalesced(T* output_matrix, T const* input_matrix, size_t M, size_t N){ size_t const j{threadIdx.x + blockIdx.x * blockDim.x}; size_t const i{threadIdx.y + blockIdx.y * blockDim.y}; size_t const to_idx{i * M + j}; if ((i < N) && (j < M)) { size_t const from_idx{j * N + i}; output_matrix[to_idx] = input_matrix[from_idx]; }}template <typename T>void launch_transpose_read_coalesced(T* output_matrix, T const* input_matrix, size_t M, size_t N, cudaStream_t stream){ constexpr size_t const warp_size{32}; dim3 const threads_per_block{warp_size, warp_size}; dim3 const blocks_per_grid{static_cast<unsigned int>(div_up(N, warp_size)), static_cast<unsigned int>(div_up(M, warp_size))}; transpose_read_coalesced<<<blocks_per_grid, threads_per_block, 0, stream>>>( output_matrix, input_matrix, M, N); CHECK_LAST_CUDA_ERROR();}template <typename T>void launch_transpose_write_coalesced(T* output_matrix, T const* input_matrix, size_t M, size_t N, cudaStream_t stream){ constexpr size_t const warp_size{32}; dim3 const threads_per_block{warp_size, warp_size}; dim3 const blocks_per_grid{static_cast<unsigned int>(div_up(M, warp_size)), static_cast<unsigned int>(div_up(N, warp_size))}; transpose_write_coalesced<<<blocks_per_grid, threads_per_block, 0, stream>>>(output_matrix, input_matrix, M, N); CHECK_LAST_CUDA_ERROR();}template <typename T, size_t 
BLOCK_SIZE = 32>__global__ void transpose_read_write_coalesced(T* output_matrix, T const* input_matrix, size_t M, size_t N){ // BLOCK_SIZE + 1 for avoiding the shared memory bank conflicts. // https://leimao.github.io/blog/CUDA-Shared-Memory-Bank/ // Try setting it to BLOCK_SIZE instead of BLOCK_SIZE + 1 to see the // performance drop. __shared__ T buffer[BLOCK_SIZE][BLOCK_SIZE + 1]; size_t const matrix_j{threadIdx.x + blockIdx.x * blockDim.x}; size_t const matrix_i{threadIdx.y + blockIdx.y * blockDim.y}; size_t const matrix_from_idx{matrix_i * N + matrix_j}; // We have two ways to write matrix data to the shared memory. // 1. Write transposed matrix data from the DRAM to the shared memory and // write the non-transposed matrix data from the shared memory to DRAM. // 2. Write non-transposed matrix data from the DRAM to the shared memory // and write the transposed matrix data from the shared memory to DRAM. Both // should result in the same performance, even if there are shared memory // access bank conflicts. if ((matrix_i < M) && (matrix_j < N)) { // The first approach. buffer[threadIdx.x][threadIdx.y] = input_matrix[matrix_from_idx]; // The second approach. // buffer[threadIdx.y][threadIdx.x] = input_matrix[matrix_from_idx]; } // Make sure the buffer in a block is filled. __syncthreads(); size_t const matrix_transposed_j{threadIdx.x + blockIdx.y * blockDim.y}; size_t const matrix_transposed_i{threadIdx.y + blockIdx.x * blockDim.x}; if ((matrix_transposed_i < N) && (matrix_transposed_j < M)) { size_t const to_idx{matrix_transposed_i * M + matrix_transposed_j}; // The first approach. output_matrix[to_idx] = buffer[threadIdx.y][threadIdx.x]; // The second approach. // output_matrix[to_idx] = buffer[threadIdx.x][threadIdx.y]; }}template <typename T>void launch_transpose_read_write_coalesced(T* output_matrix, T const* input_matrix, size_t M, size_t N, cudaStream_t stream){ constexpr size_t const warp_size{32}; dim3 const threads_per_block{warp_size, warp_size}; dim3 const blocks_per_grid{static_cast<unsigned int>(div_up(N, warp_size)), static_cast<unsigned int>(div_up(M, warp_size))}; transpose_read_write_coalesced<T, warp_size> <<<blocks_per_grid, threads_per_block, 0, stream>>>(output_matrix, input_matrix, M, N); CHECK_LAST_CUDA_ERROR();}template <typename T>bool is_equal(T const* data_1, T const* data_2, size_t size){ for (size_t i{0}; i < size; ++i) { if (data_1[i] != data_2[i]) { return false; } } return true;}template <typename T>bool verify_transpose_implementation( std::function<void(T*, T const*, size_t, size_t, cudaStream_t)> transpose_function, size_t M, size_t N){ // Fixed random seed for reproducibility std::mt19937 gen{0}; cudaStream_t stream; size_t const matrix_size{M * N}; std::vector<T> matrix(matrix_size, 0.0f); std::vector<T> matrix_transposed(matrix_size, 1.0f); std::vector<T> matrix_transposed_reference(matrix_size, 2.0f); std::uniform_real_distribution<T> uniform_dist(-256, 256); for (size_t i{0}; i < matrix_size; ++i) { matrix[i] = uniform_dist(gen); } // Create the reference transposed matrix using CPU. 
for (size_t i{0}; i < M; ++i) { for (size_t j{0}; j < N; ++j) { size_t const from_idx{i * N + j}; size_t const to_idx{j * M + i}; matrix_transposed_reference[to_idx] = matrix[from_idx]; } } T* d_matrix; T* d_matrix_transposed; CHECK_CUDA_ERROR(cudaMalloc(&d_matrix, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaMalloc(&d_matrix_transposed, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); CHECK_CUDA_ERROR(cudaMemcpy(d_matrix, matrix.data(), matrix_size * sizeof(T), cudaMemcpyHostToDevice)); transpose_function(d_matrix_transposed, d_matrix, M, N, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaMemcpy(matrix_transposed.data(), d_matrix_transposed, matrix_size * sizeof(T), cudaMemcpyDeviceToHost)); bool const correctness{is_equal(matrix_transposed.data(), matrix_transposed_reference.data(), matrix_size)}; CHECK_CUDA_ERROR(cudaFree(d_matrix)); CHECK_CUDA_ERROR(cudaFree(d_matrix_transposed)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); return correctness;}template <typename T>void profile_transpose_implementation( std::function<void(T*, T const*, size_t, size_t, cudaStream_t)> transpose_function, size_t M, size_t N){ constexpr int num_repeats{100}; constexpr int num_warmups{10}; cudaStream_t stream; size_t const matrix_size{M * N}; T* d_matrix; T* d_matrix_transposed; CHECK_CUDA_ERROR(cudaMalloc(&d_matrix, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaMalloc(&d_matrix_transposed, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); std::function<void(cudaStream_t)> const transpose_function_wrapped{ std::bind(transpose_function, d_matrix_transposed, d_matrix, M, N, std::placeholders::_1)}; float const transpose_function_latency{measure_performance( transpose_function_wrapped, stream, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "Latency: " << transpose_function_latency << " ms" << std::endl; CHECK_CUDA_ERROR(cudaFree(d_matrix)); CHECK_CUDA_ERROR(cudaFree(d_matrix_transposed)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream));}int main(){ // Unit tests. for (size_t m{1}; m <= 64; ++m) { for (size_t n{1}; n <= 64; ++n) { assert(verify_transpose_implementation<float>( &launch_transpose_write_coalesced<float>, m, n)); assert(verify_transpose_implementation<float>( &launch_transpose_read_coalesced<float>, m, n)); assert(verify_transpose_implementation<float>( &launch_transpose_read_write_coalesced<float>, m, n)); } } // M: Number of rows. size_t const M{12800}; // N: Number of columns. size_t const N{12800}; std::cout << M << " x " << N << " Matrix" << std::endl; std::cout << "Transpose Write Coalesced" << std::endl; profile_transpose_implementation<float>( &launch_transpose_write_coalesced<float>, M, N); std::cout << "Transpose Read Coalesced" << std::endl; profile_transpose_implementation<float>( &launch_transpose_read_coalesced<float>, M, N); std::cout << "Transpose Read and Write Coalesced" << std::endl; profile_transpose_implementation<float>( &launch_transpose_read_write_coalesced<float>, M, N);}
Performance
The performance of the three CUDA kernels was measured using a $12800 \times 12800$ matrix. We used a square matrix for the performance measurement so that the comparison between global memory coalesced reads and coalesced writes is as fair as possible.
Using -Xptxas -O0, we can disable the optimizations that the PTX assembler (ptxas) applies to the CUDA kernels. We can see that the kernel with coalesced global memory writes is much faster than the kernel with coalesced global memory reads, at least for this use case. The kernel that coalesces both global memory reads and writes performs best among the three.
$ nvcc transpose.cu -o transpose -Xptxas -O0
$ ./transpose
12800 x 12800 Matrix
Transpose Write Coalesced
Latency: 5.220 ms
Transpose Read Coalesced
Latency: 7.624 ms
Transpose Read and Write Coalesced
Latency: 4.804 ms
Using -Xptxas -O3, which is the default, all the ptxas optimizations for the CUDA kernels are enabled. In this case, the relative performance order of the three kernels remains the same.
$ nvcc transpose.cu -o transpose -Xptxas -O3
$ ./transpose
12800 x 12800 Matrix
Transpose Write Coalesced
Latency: 2.924 ms
Transpose Read Coalesced
Latency: 5.337 ms
Transpose Read and Write Coalesced
Latency: 2.345 ms
All the measurements were performed on a platform with an Intel Core i9-9900K CPU and an NVIDIA RTX 3090 GPU.
Conclusions
In CUDA kernel implementations, we should try to coalesce both the global memory reads and the global memory writes whenever possible.
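As a minimal illustration of this guideline (my sketch, not part of the original transpose example), compare a copy kernel whose consecutive threads touch consecutive addresses with one whose threads stride through memory:

__global__ void copy_coalesced(float* dst, float const* src, size_t n)
{
    // Consecutive threads access consecutive 4-byte words, so a warp's
    // 128 bytes are serviced by the minimum number of 32-byte transactions.
    size_t const idx{blockIdx.x * blockDim.x + threadIdx.x};
    if (idx < n)
    {
        dst[idx] = src[idx];
    }
}

__global__ void copy_strided(float* dst, float const* src, size_t n)
{
    // Each thread accesses an address 32 elements (128 bytes) away from its
    // neighbor, so a warp touches 32 different memory segments and the
    // accesses cannot be coalesced.
    size_t const idx{blockIdx.x * blockDim.x + threadIdx.x};
    size_t const strided_idx{idx * 32U};
    if (strided_idx < n)
    {
        dst[strided_idx] = src[strided_idx];
    }
}

Profiling the two kernels with the same measure_performance utility used above would show the coalesced version achieving a much higher effective bandwidth.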
CUDA Zero Copy Mapped Memory
Introduction
Unified memory is used on NVIDIA embedded platforms, such as the NVIDIA Drive series and the NVIDIA Jetson series. Since the same physical memory is used for both the CPU and the integrated GPU, it is possible to eliminate the CUDA memory copies between host and device that normally happen on a system with a discrete GPU, so that the GPU can directly access the outputs from the CPU and the CPU can directly access the outputs from the GPU. In this way, the system performance could be improved significantly in some use cases.
In this blog post, I would like to discuss CUDA mapped pinned memory versus CUDA non-mapped pinned memory and compare their performance on memory-bound kernels.
CUDA Pinned Mapped Memory
CUDA pinned mapped memory enables GPU threads to directly access host memory. For this purpose, it requires mapped pinned (non-pageable, page-locked) memory. On integrated GPUs (i.e., GPUs with the integrated field of the CUDA device properties structure set to 1), mapped pinned memory is always a performance gain because it avoids superfluous copies as integrated GPU and CPU memory are physically the same. On discrete GPUs, mapped pinned memory is advantageous only in certain cases. Because the data is not cached on the GPU, mapped pinned memory should be read or written only once, and the global loads and stores that read and write the memory should be coalesced. Zero copy can be used in place of streams because kernel-originated data transfers automatically overlap kernel execution without the overhead of setting up and determining the optimal number of streams.
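As a minimal sketch (my addition, mirroring the APIs used in the full listing later in this post), allocating and using mapped pinned memory looks roughly like the following, assuming a buffer of n floats and the CHECK_CUDA_ERROR macro defined in that listing:

// Mapped pinned memory requires device support for host memory mapping.
cudaDeviceProp prop;
CHECK_CUDA_ERROR(cudaGetDeviceProperties(&prop, 0));
if (!prop.canMapHostMemory)
{
    // Fall back to explicit cudaMemcpy with non-mapped pinned memory.
}

float* h_buffer{nullptr}; // Host pointer to the pinned allocation.
float* d_buffer{nullptr}; // Device alias of the same physical memory.
CHECK_CUDA_ERROR(
    cudaHostAlloc(&h_buffer, n * sizeof(float), cudaHostAllocMapped));
CHECK_CUDA_ERROR(cudaHostGetDevicePointer(&d_buffer, h_buffer, 0));

// A kernel can now read and write d_buffer directly; no cudaMemcpy between
// host and device is required.
// some_kernel<<<blocks_per_grid, threads_per_block, 0, stream>>>(d_buffer, n);

CHECK_CUDA_ERROR(cudaFreeHost(h_buffer));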
CUDA Pinned Memory Non-Mapped VS Mapped
The following implementation compares the latency of a memory-bound kernel together with its memory copies between host and device, if any are necessary.
CUDA mapped memory also uses pinned memory. For CUDA non-mapped pinned memory, we still need to allocate device memory and transfer the data between the host memory and the device memory, whereas for CUDA mapped memory, the device memory allocation and the memory transfers, if there are any, are abstracted away.
#include <cassert>#include <chrono>#include <functional>#include <iomanip>#include <iostream>#include <stdexcept>#include <thread>#include <tuple>#include <utility>#include <vector>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)void checkLast(const char* const file, const int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, int num_repeats = 100, int num_warmups = 100){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (int i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (int i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}__global__ void float_addition(float* output, float const* input_1, float const* input_2, uint32_t n){ const uint32_t idx{blockDim.x * blockIdx.x + threadIdx.x}; const uint32_t stride{blockDim.x * gridDim.x}; for (uint32_t i{idx}; i < n; i += stride) { output[i] = input_1[i] + input_2[i]; }}void launch_float_addition_non_mapped_pinned_memory( float* h_output, float const* h_input_1, float const* h_input_2, float* d_output, float* d_input_1, float* d_input_2, uint32_t n, cudaStream_t stream){ CHECK_CUDA_ERROR(cudaMemcpyAsync(d_input_1, h_input_1, n * sizeof(float), cudaMemcpyHostToDevice, stream)); CHECK_CUDA_ERROR(cudaMemcpyAsync(d_input_2, h_input_2, n * sizeof(float), cudaMemcpyHostToDevice, stream)); dim3 const threads_per_block{1024}; dim3 const blocks_per_grid{32}; float_addition<<<blocks_per_grid, threads_per_block, 0, stream>>>( d_output, d_input_1, d_input_2, n); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaMemcpyAsync(h_output, d_output, n * sizeof(float), cudaMemcpyDeviceToHost, stream));}void launch_float_addition_mapped_pinned_memory(float* d_output, float* d_input_1, float* d_input_2, uint32_t n, cudaStream_t stream){ dim3 const threads_per_block{1024}; dim3 const blocks_per_grid{32}; float_addition<<<blocks_per_grid, threads_per_block, 0, stream>>>( d_output, d_input_1, d_input_2, n); CHECK_LAST_CUDA_ERROR();}void initialize_host_memory(float* h_buffer, uint32_t n, float value){ for (int i{0}; i < n; ++i) { h_buffer[i] = value; }}bool verify_host_memory(float* h_buffer, uint32_t n, float value){ for (int i{0}; i < n; ++i) { if (h_buffer[i] != value) { return false; } } return true;}int main(){ constexpr int const num_repeats{10}; constexpr int const num_warmups{10}; constexpr int const n{1000000}; cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); float const 
v_input_1{1.0f}; float const v_input_2{1.0f}; float const v_output{0.0f}; float const v_output_reference{v_input_1 + v_input_2}; cudaDeviceProp prop; CHECK_CUDA_ERROR(cudaGetDeviceProperties(&prop, 0)); if (!prop.canMapHostMemory) { throw std::runtime_error{"Device does not supported mapped memory."}; } float *h_input_1, *h_input_2, *h_output; float *d_input_1, *d_input_2, *d_output; float *a_input_1, *a_input_2, *a_output; float *m_input_1, *m_input_2, *m_output; CHECK_CUDA_ERROR(cudaMallocHost(&h_input_1, n * sizeof(float))); CHECK_CUDA_ERROR(cudaMallocHost(&h_input_2, n * sizeof(float))); CHECK_CUDA_ERROR(cudaMallocHost(&h_output, n * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&d_input_1, n * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&d_input_2, n * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&d_output, n * sizeof(float))); CHECK_CUDA_ERROR( cudaHostAlloc(&a_input_1, n * sizeof(float), cudaHostAllocMapped)); CHECK_CUDA_ERROR( cudaHostAlloc(&a_input_2, n * sizeof(float), cudaHostAllocMapped)); CHECK_CUDA_ERROR( cudaHostAlloc(&a_output, n * sizeof(float), cudaHostAllocMapped)); CHECK_CUDA_ERROR(cudaHostGetDevicePointer(&m_input_1, a_input_1, 0)); CHECK_CUDA_ERROR(cudaHostGetDevicePointer(&m_input_2, a_input_2, 0)); CHECK_CUDA_ERROR(cudaHostGetDevicePointer(&m_output, a_output, 0)); // Verify the implementation correctness. initialize_host_memory(h_input_1, n, v_input_1); initialize_host_memory(h_input_2, n, v_input_2); initialize_host_memory(h_output, n, v_output); launch_float_addition_non_mapped_pinned_memory( h_output, h_input_1, h_input_2, d_output, d_input_1, d_input_2, n, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); assert(verify_host_memory(h_output, n, v_output_reference)); initialize_host_memory(a_input_1, n, v_input_1); initialize_host_memory(a_input_2, n, v_input_2); initialize_host_memory(a_output, n, v_output); launch_float_addition_mapped_pinned_memory(m_output, m_input_1, m_input_2, n, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); assert(verify_host_memory(a_output, n, v_output_reference)); // Measure latencies. std::function<void(cudaStream_t)> function_non_mapped_pinned_memory{ std::bind(launch_float_addition_non_mapped_pinned_memory, h_output, h_input_1, h_input_2, d_output, d_input_1, d_input_2, n, std::placeholders::_1)}; std::function<void(cudaStream_t)> function_mapped_pinned_memory{ std::bind(launch_float_addition_mapped_pinned_memory, m_output, m_input_1, m_input_2, n, std::placeholders::_1)}; float const latency_non_mapped_pinned_memory{measure_performance( function_non_mapped_pinned_memory, stream, num_repeats, num_warmups)}; float const latency_mapped_pinned_memory{measure_performance( function_mapped_pinned_memory, stream, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "CUDA Kernel With Non-Mapped Pinned Memory Latency: " << latency_non_mapped_pinned_memory << " ms" << std::endl; std::cout << std::fixed << std::setprecision(3) << "CUDA Kernel With Mapped Pinned Memory Latency: " << latency_mapped_pinned_memory << " ms" << std::endl; CHECK_CUDA_ERROR(cudaFree(d_input_1)); CHECK_CUDA_ERROR(cudaFree(d_input_2)); CHECK_CUDA_ERROR(cudaFree(d_output)); CHECK_CUDA_ERROR(cudaFreeHost(h_input_1)); CHECK_CUDA_ERROR(cudaFreeHost(h_input_2)); CHECK_CUDA_ERROR(cudaFreeHost(h_output)); CHECK_CUDA_ERROR(cudaFreeHost(a_input_1)); CHECK_CUDA_ERROR(cudaFreeHost(a_input_2)); CHECK_CUDA_ERROR(cudaFreeHost(a_output)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream));}
Discrete GPU
This is the latency profiling on a desktop that has an Intel Core i9-9900K CPU and an NVIDIA RTX 3090 GPU.
$ nvcc mapped_memory.cu -o mapped_memory -std=c++14
$ ./mapped_memory
CUDA Kernel With Non-Mapped Pinned Memory Latency: 0.964 ms
CUDA Kernel With Mapped Pinned Memory Latency: 0.631 ms
We can see that for a memory-bound kernel, on a platform that uses a discrete GPU with separate host and device memory, using mapped pinned memory is almost 30% faster than using non-mapped pinned memory.
Integrated GPU
This is the latency profiling on an NVIDIA Jetson Xavier.
$ nvcc mapped_memory.cu -o mapped_memory -std=c++14
$ ./mapped_memory
CUDA Kernel With Non-Mapped Pinned Memory Latency: 2.343 ms
CUDA Kernel With Mapped Pinned Memory Latency: 0.431 ms
We can see that for a memory-bound kernel, on a platform that uses an integrated GPU and unified memory, using mapped pinned memory is almost 6x faster than using non-mapped pinned memory. This is because using mapped memory truly eliminates the memory copies between host and device on unified memory.
Caveats
Because CUDA zero-copy (mapped) memory is not cached on the GPU, there might be a performance drop for math-bound kernels that read or write the data more than once.
CUDA Data Alignment
Introduction
Similar to the data alignment requirements in C++, CUDA also has data alignment requirements that need to be satisfied in order to get the best performance.
In this blog post, I would like to quickly discuss the data alignment requirement in CUDA.
Coalesced Access to Global Memory
Global memory resides in device memory and device memory is accessed via 32-, 64-, or 128-byte memory transactions. These memory transactions must be naturally aligned: Only the 32-, 64-, or 128-byte segments of device memory that are aligned to their size (i.e., whose first address is a multiple of their size) can be read or written by memory transactions.
When a warp executes an instruction that accesses global memory, it coalesces the memory accesses of the threads within the warp into one or more of these memory transactions depending on the size of the word accessed by each thread and the distribution of the memory addresses across the threads. In general, the more transactions are necessary, the more unused words are transferred in addition to the words accessed by the threads, reducing the instruction throughput accordingly.
For devices of compute capability 6.0 or higher, the requirements can be summarized quite easily: the concurrent accesses of the threads of a warp will coalesce into a number of transactions equal to the number of 32-byte transactions necessary to service all of the threads of the warp.
Any address of a variable residing in global memory or returned by one of the memory allocation routines from the driver or runtime API, such as cudaMalloc or cudaMallocPitch, is always aligned to at least 256 bytes.
Example
For example, if each thread in a warp of 32 threads wants to read a 4-byte word, and the 4-byte words from all the threads in the warp (128 bytes in total) are adjacent to each other and 32-byte aligned, i.e., the address of the first 4-byte word is a multiple of 32, the memory access is coalesced and the GPU will make $\frac{4 \times 32}{32} = 4$ 32-byte memory transactions. The maximum memory transaction throughput is achieved because the GPU makes the fewest transactions possible.
If the 128 bytes of data are not 32-byte aligned in memory, say they are only 4-byte aligned instead, one additional 32-byte memory transaction has to be made, and the memory access throughput becomes $\frac{4}{5} = 80\%$ of the maximum theoretical throughput (speed of light).
Furthermore, if the 4-byte words accessed by the threads are not adjacent to each other but scattered sparsely in memory, up to 32 separate 32-byte memory transactions might have to be made, and the throughput becomes only $\frac{4}{32} = 12.5\%$ of the maximum theoretical throughput.
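To make the alignment effect concrete, here is a hedged sketch (my addition, not from the original post): launching the same copy kernel on a pointer offset by one float shifts the warp's 128-byte window off the 32-byte segment boundaries, which corresponds to the 4-byte-aligned case above.

__global__ void copy_float(float* dst, float const* src, size_t n)
{
    size_t const idx{blockIdx.x * blockDim.x + threadIdx.x};
    if (idx < n)
    {
        dst[idx] = src[idx];
    }
}

// d_input and d_output are assumed to be device buffers of at least n + 1
// floats allocated with cudaMalloc, whose base addresses are at least
// 256-byte aligned.
// copy_float<<<blocks_per_grid, threads_per_block>>>(d_output, d_input, n);     // 32-byte aligned: 4 transactions per warp
// copy_float<<<blocks_per_grid, threads_per_block>>>(d_output, d_input + 1, n); // only 4-byte aligned: 5 transactions per warp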
Size and Alignment Requirement
Global memory instructions support reading or writing words of size equal to 1, 2, 4, 8, or 16 bytes. Any access (via a variable or a pointer) to data residing in global memory compiles to a single global memory instruction if and only if the size of the data type is 1, 2, 4, 8, or 16 bytes and the data is naturally aligned (i.e., its address is a multiple of that size).
If this size and alignment requirement is not fulfilled, the access compiles to multiple instructions with interleaved access patterns that prevent these instructions from fully coalescing. It is therefore recommended to use types that meet this requirement for data that resides in global memory.
Reading non-naturally aligned 8-byte or 16-byte words produces incorrect results (off by a few words), so special care must be taken to maintain alignment of the starting address of any value or array of values of these types.
Therefore, working with words of size equal to 1, 2, 4, 8, or 16 bytes is usually straightforward, because, as mentioned above, the starting memory address returned by the CUDA memory allocation APIs is always aligned to at least 256 bytes, which is already 1, 2, 4, 8, and 16 byte-aligned. So we can safely store a word sequence, such as a numerical array, matrix, or tensor, into the allocated memory without having to worry that reading 8-byte or 16-byte words produces incorrect results. To achieve the best memory access throughput, special attention should still be paid in the kernel implementation so that the coalesced memory accesses are also naturally aligned.
But what if the word size is not 1, 2, 4, 8, or 16 bytes? In that case the individual words are no longer guaranteed to be naturally aligned, and the memory access throughput can be compromised significantly. There are usually two ways to handle this; the example below shows one of them, which forces the alignment (and thus the padded size) of a custom data type using the __align__ qualifier.
struct __align__(4) int8_3_4_t
{
    int8_t x;
    int8_t y;
    int8_t z;
};

struct __align__(16) float3_16_t
{
    float x;
    float y;
    float z;
};
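To sanity-check what the __align__ qualifier does, a few compile-time assertions can be added (my addition, not part of the original post):

// __align__(4) pads the 3-byte struct to a 4-byte word so that every array
// element is naturally aligned and can be accessed with a single instruction.
static_assert(sizeof(int8_3_4_t) == 4, "int8_3_4_t should occupy 4 bytes");
static_assert(alignof(int8_3_4_t) == 4, "int8_3_4_t should be 4-byte aligned");
// __align__(16) pads the 12-byte struct to 16 bytes so that one element can
// be loaded or stored with a single 16-byte access.
static_assert(sizeof(float3_16_t) == 16, "float3_16_t should occupy 16 bytes");
static_assert(alignof(float3_16_t) == 16, "float3_16_t should be 16-byte aligned");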
Conclusions
Always make the word size equal to 1, 2, 4, 8, or 16 bytes and keep the data naturally aligned.
Reading words that produce incorrect results rarely happens if the allocated memory is only used for a sequence of words of the same type, because the starting memory address returned by the CUDA memory allocation APIs is always aligned to at least 256 bytes. However, if one single large piece of memory is allocated for multiple sequences of words of different types, with or without padding, special care must be taken to maintain the alignment of the starting address of each word or word sequence, as non-naturally aligned 8-byte or 16-byte words may produce incorrect results.
CUDA L2 Persistent Cache
Introduction
Starting with CUDA 11.0, devices of compute capability 8.0 and above have the capability to influence persistence of data in the L2 cache. Because L2 cache is on-chip, it potentially provides higher bandwidth and lower latency accesses to global memory.
In this blog post, I created a CUDA example to demonstrate how to use the L2 persistent cache to accelerate data traffic.
CUDA L2 Persistent Cache
In this example, I have a small constant buffer of certain values that is used for resetting a large streaming buffer. For example, if the constant buffer is of size 4 and has the values [5, 2, 1, 4], and the large streaming buffer to be reset is of size 100, then after resetting, the large streaming buffer will have the values [5, 2, 1, 4, 5, 2, 1, 4, ...], namely repeating the values of the constant buffer.
Because the streaming buffer is much larger than the constant buffer, each element of the constant buffer is accessed far more often than each element of the streaming buffer. Accessing a buffer in global memory is very expensive. If we can keep the frequently accessed constant buffer in the L2 cache, access to it can be accelerated.
CUDA Data Resetting
For the data-resetting CUDA kernel, I created a baseline which launches the kernel without using the persistent L2 cache, a variant which launches the kernel using 3 MB of persistent L2 cache but suffers from data thrashing when the constant buffer size exceeds 3 MB, and an optimized variant which launches the kernel using 3 MB of persistent L2 cache with the data thrashing eliminated.
#include <algorithm>#include <cassert>#include <cstdlib>#include <functional>#include <iomanip>#include <iostream>#include <vector>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, char const* const func, char const* const file, int const line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)void checkLast(char const* const file, int const line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, int num_repeats = 100, int num_warmups = 100){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (int i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (int i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}__global__ void reset_data(int* data_streaming, int const* lut_persistent, size_t data_streaming_size, size_t lut_persistent_size){ size_t const idx{blockDim.x * blockIdx.x + threadIdx.x}; size_t const stride{blockDim.x * gridDim.x}; for (size_t i{idx}; i < data_streaming_size; i += stride) { data_streaming[i] = lut_persistent[i % lut_persistent_size]; }}/** * @brief Reset the data_streaming using lut_persistent so that the * data_streaming is lut_persistent repeatedly. * * @param data_streaming The data for reseting. * @param lut_persistent The values for resetting data_streaming. * @param data_streaming_size The size for data_streaming. * @param lut_persistent_size The size for lut_persistent. * @param stream The CUDA stream. 
*/void launch_reset_data(int* data_streaming, int const* lut_persistent, size_t data_streaming_size, size_t lut_persistent_size, cudaStream_t stream){ dim3 const threads_per_block{1024}; dim3 const blocks_per_grid{32}; reset_data<<<blocks_per_grid, threads_per_block, 0, stream>>>( data_streaming, lut_persistent, data_streaming_size, lut_persistent_size); CHECK_LAST_CUDA_ERROR();}bool verify_data(int* data, int n, size_t size){ for (size_t i{0}; i < size; ++i) { if (data[i] != i % n) { return false; } } return true;}int main(int argc, char* argv[]){ size_t num_megabytes_persistent_data{3}; if (argc == 2) { num_megabytes_persistent_data = std::atoi(argv[1]); } constexpr int const num_repeats{100}; constexpr int const num_warmups{10}; cudaDeviceProp device_prop{}; int current_device{0}; CHECK_CUDA_ERROR(cudaGetDevice(¤t_device)); CHECK_CUDA_ERROR(cudaGetDeviceProperties(&device_prop, current_device)); std::cout << "GPU: " << device_prop.name << std::endl; std::cout << "L2 Cache Size: " << device_prop.l2CacheSize / 1024 / 1024 << " MB" << std::endl; std::cout << "Max Persistent L2 Cache Size: " << device_prop.persistingL2CacheMaxSize / 1024 / 1024 << " MB" << std::endl; size_t const num_megabytes_streaming_data{1024}; if (num_megabytes_persistent_data > num_megabytes_streaming_data) { std::runtime_error( "Try setting persistent data size smaller than 1024 MB."); } size_t const size_persistent(num_megabytes_persistent_data * 1024 * 1024 / sizeof(int)); size_t const size_streaming(num_megabytes_streaming_data * 1024 * 1024 / sizeof(int)); std::cout << "Persistent Data Size: " << num_megabytes_persistent_data << " MB" << std::endl; std::cout << "Steaming Data Size: " << num_megabytes_streaming_data << " MB" << std::endl; cudaStream_t stream; std::vector<int> lut_persistent_vec(size_persistent, 0); for (size_t i{0}; i < lut_persistent_vec.size(); ++i) { lut_persistent_vec[i] = i; } std::vector<int> data_streaming_vec(size_streaming, 0); int* d_lut_persistent; int* d_data_streaming; int* h_lut_persistent = lut_persistent_vec.data(); int* h_data_streaming = data_streaming_vec.data(); CHECK_CUDA_ERROR( cudaMalloc(&d_lut_persistent, size_persistent * sizeof(int))); CHECK_CUDA_ERROR( cudaMalloc(&d_data_streaming, size_streaming * sizeof(int))); CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); CHECK_CUDA_ERROR(cudaMemcpy(d_lut_persistent, h_lut_persistent, size_persistent * sizeof(int), cudaMemcpyHostToDevice)); launch_reset_data(d_data_streaming, d_lut_persistent, size_streaming, size_persistent, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaMemcpy(h_data_streaming, d_data_streaming, size_streaming * sizeof(int), cudaMemcpyDeviceToHost)); assert(verify_data(h_data_streaming, size_persistent, size_streaming)); std::function<void(cudaStream_t)> const function{ std::bind(launch_reset_data, d_data_streaming, d_lut_persistent, size_streaming, size_persistent, std::placeholders::_1)}; float const latency{ measure_performance(function, stream, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "Latency Without Using Persistent L2 Cache: " << latency << " ms" << std::endl; // Start to use persistent cache. 
cudaStream_t stream_persistent_cache; size_t const num_megabytes_persistent_cache{3}; CHECK_CUDA_ERROR(cudaStreamCreate(&stream_persistent_cache)); CHECK_CUDA_ERROR( cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, num_megabytes_persistent_cache * 1024 * 1024)); cudaStreamAttrValue stream_attribute_thrashing; stream_attribute_thrashing.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(d_lut_persistent); stream_attribute_thrashing.accessPolicyWindow.num_bytes = num_megabytes_persistent_data * 1024 * 1024; stream_attribute_thrashing.accessPolicyWindow.hitRatio = 1.0; stream_attribute_thrashing.accessPolicyWindow.hitProp = cudaAccessPropertyPersisting; stream_attribute_thrashing.accessPolicyWindow.missProp = cudaAccessPropertyStreaming; CHECK_CUDA_ERROR(cudaStreamSetAttribute( stream_persistent_cache, cudaStreamAttributeAccessPolicyWindow, &stream_attribute_thrashing)); float const latency_persistent_cache_thrashing{measure_performance( function, stream_persistent_cache, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "Latency With Using " << num_megabytes_persistent_cache << " MB Persistent L2 Cache (Potentially Thrashing): " << latency_persistent_cache_thrashing << " ms" << std::endl; cudaStreamAttrValue stream_attribute_non_thrashing{ stream_attribute_thrashing}; stream_attribute_non_thrashing.accessPolicyWindow.hitRatio = std::min(static_cast<double>(num_megabytes_persistent_cache) / num_megabytes_persistent_data, 1.0); CHECK_CUDA_ERROR(cudaStreamSetAttribute( stream_persistent_cache, cudaStreamAttributeAccessPolicyWindow, &stream_attribute_non_thrashing)); float const latency_persistent_cache_non_thrashing{measure_performance( function, stream_persistent_cache, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "Latency With Using " << num_megabytes_persistent_cache << " MB Persistent L2 Cache (Non-Thrashing): " << latency_persistent_cache_non_thrashing << " ms" << std::endl; CHECK_CUDA_ERROR(cudaFree(d_lut_persistent)); CHECK_CUDA_ERROR(cudaFree(d_data_streaming)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream_persistent_cache));}
To avoid data thrashing, the product of accessPolicyWindow.hitRatio and accessPolicyWindow.num_bytes should be less than or equal to the cudaLimitPersistingL2CacheSize. The accessPolicyWindow.hitRatio parameter can be used to specify the fraction of accesses that receive the accessPolicyWindow.hitProp property, which is usually cudaAccessPropertyPersisting. The accessPolicyWindow.num_bytes parameter can be used to specify the number of bytes that the access policy window covers, which is usually the size of the persistent data.
In practice, we could set accessPolicyWindow.hitRatio to the ratio of the persistent L2 cache size to the persistent data size. For example, if the persistent L2 cache size is 3 MB and the persistent data size is 4 MB, we could set accessPolicyWindow.hitRatio to 3 / 4 = 0.75.
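Condensed from the full listing above, the relevant configuration boils down to the following sketch, where persistent_cache_bytes and persistent_data_bytes are assumed to hold the persistent L2 cache size and the persistent data size in bytes, and stream and d_lut_persistent come from the listing:

// Reserve a portion of the L2 cache for persisting accesses (device-wide).
CHECK_CUDA_ERROR(cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize,
                                    persistent_cache_bytes));

// Describe the access policy window covering the frequently reused buffer.
cudaStreamAttrValue attr{};
attr.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(d_lut_persistent);
attr.accessPolicyWindow.num_bytes = persistent_data_bytes;
// Keep hitRatio * num_bytes <= the persistent L2 cache size to avoid
// thrashing.
attr.accessPolicyWindow.hitRatio =
    std::min(static_cast<double>(persistent_cache_bytes) /
                 static_cast<double>(persistent_data_bytes),
             1.0);
attr.accessPolicyWindow.hitProp = cudaAccessPropertyPersisting;
attr.accessPolicyWindow.missProp = cudaAccessPropertyStreaming;
CHECK_CUDA_ERROR(cudaStreamSetAttribute(
    stream, cudaStreamAttributeAccessPolicyWindow, &attr));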
Run CUDA Data Resetting
We could build and run the example on an NVIDIA Ampere GPU. In my case, I used an NVIDIA RTX 3090 GPU.
$ nvcc l2-persistent.cu -o l2-persistent -std=c++14 --gpu-architecture=compute_80
$ ./l2-persistent
GPU: NVIDIA GeForce RTX 3090
L2 Cache Size: 6 MB
Max Persistent L2 Cache Size: 4 MB
Persistent Data Size: 3 MB
Steaming Data Size: 1024 MB
Latency Without Using Persistent L2 Cache: 3.071 ms
Latency With Using 3 MB Persistent L2 Cache (Potentially Thrashing): 2.436 ms
Latency With Using 3 MB Persistent L2 Cache (Non-Thrashing): 2.443 ms
We could see that when the persistent data size is 3 MB and the persistent L2 cache is 3 MB, the performance of the application is improved by roughly 20%.
Benchmarking
We could also run some mini benchmarking by varying the persistent data size.
$ ./l2-persistent 1
GPU: NVIDIA GeForce RTX 3090
L2 Cache Size: 6 MB
Max Persistent L2 Cache Size: 4 MB
Persistent Data Size: 1 MB
Steaming Data Size: 1024 MB
Latency Without Using Persistent L2 Cache: 1.754 ms
Latency With Using 3 MB Persistent L2 Cache (Potentially Thrashing): 1.685 ms
Latency With Using 3 MB Persistent L2 Cache (Non-Thrashing): 1.674 ms
$ ./l2-persistent 2
GPU: NVIDIA GeForce RTX 3090
L2 Cache Size: 6 MB
Max Persistent L2 Cache Size: 4 MB
Persistent Data Size: 2 MB
Steaming Data Size: 1024 MB
Latency Without Using Persistent L2 Cache: 2.158 ms
Latency With Using 3 MB Persistent L2 Cache (Potentially Thrashing): 1.997 ms
Latency With Using 3 MB Persistent L2 Cache (Non-Thrashing): 2.002 ms
$ ./l2-persistent 3
GPU: NVIDIA GeForce RTX 3090
L2 Cache Size: 6 MB
Max Persistent L2 Cache Size: 4 MB
Persistent Data Size: 3 MB
Steaming Data Size: 1024 MB
Latency Without Using Persistent L2 Cache: 3.095 ms
Latency With Using 3 MB Persistent L2 Cache (Potentially Thrashing): 2.510 ms
Latency With Using 3 MB Persistent L2 Cache (Non-Thrashing): 2.533 ms
$ ./l2-persistent 4
GPU: NVIDIA GeForce RTX 3090
L2 Cache Size: 6 MB
Max Persistent L2 Cache Size: 4 MB
Persistent Data Size: 4 MB
Steaming Data Size: 1024 MB
Latency Without Using Persistent L2 Cache: 3.906 ms
Latency With Using 3 MB Persistent L2 Cache (Potentially Thrashing): 3.632 ms
Latency With Using 3 MB Persistent L2 Cache (Non-Thrashing): 3.706 ms
$ ./l2-persistent 5
GPU: NVIDIA GeForce RTX 3090
L2 Cache Size: 6 MB
Max Persistent L2 Cache Size: 4 MB
Persistent Data Size: 5 MB
Steaming Data Size: 1024 MB
Latency Without Using Persistent L2 Cache: 4.120 ms
Latency With Using 3 MB Persistent L2 Cache (Potentially Thrashing): 4.554 ms
Latency With Using 3 MB Persistent L2 Cache (Non-Thrashing): 3.920 ms
$ ./l2-persistent 6
GPU: NVIDIA GeForce RTX 3090
L2 Cache Size: 6 MB
Max Persistent L2 Cache Size: 4 MB
Persistent Data Size: 6 MB
Steaming Data Size: 1024 MB
Latency Without Using Persistent L2 Cache: 4.194 ms
Latency With Using 3 MB Persistent L2 Cache (Potentially Thrashing): 4.583 ms
Latency With Using 3 MB Persistent L2 Cache (Non-Thrashing): 4.255 ms
We can see that even when the persistent data size is larger than the persistent L2 cache, the non-thrashing configuration of the persistent L2 cache usually does not perform worse than the baseline.
FAQ
Persistent Cache VS Shared Memory?
The persistent cache is different from the shared memory. The persistent cache is visible to all the threads in the GPU, while the shared memory is only visible to the threads in the same block.
For small-sized frequently accessed data, we could also use the shared memory to accelerate the data access. However, the shared memory is limited to 48 to 96 KB per block of threads, depending on the GPU, while the persistent cache is limited to a few MB per GPU.
CUDA Shared Memory Bank
Introduction
Memory bank is a key concept for CUDA shared memory. To get the best performance out of a CUDA kernel implementation, the user will have to pay attention to memory bank access and avoid memory bank access conflicts.
In this blog post, I would like to quickly discuss memory bank for CUDA shared memory.
Memory Bank
Memory Bank Properties
To achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (banks) that can be accessed simultaneously. Therefore, any memory load or store of $n$ addresses that spans $n$ distinct memory banks can be serviced simultaneously, yielding an effective bandwidth that is $n$ times as high as the bandwidth of a single bank.
However, if multiple addresses of a memory request map to the same memory bank, the accesses are serialized. The hardware splits a memory request that has bank conflicts into as many separate conflict-free requests as necessary, decreasing the effective bandwidth by a factor equal to the number of separate memory requests. The one exception here is when multiple threads in a warp address the same shared memory location, resulting in a broadcast. In this case, multiple broadcasts from different banks are coalesced into a single multicast from the requested shared memory locations to the threads.
Memory Bank Mapping
The memory bank properties were described above. However, how memory addresses map to memory banks is architecture-specific.
On devices of compute capability 5.x or newer, each bank has a bandwidth of 32 bits every clock cycle, and successive 32-bit words are assigned to successive banks. The warp size is 32 threads and the number of banks is also 32, so bank conflicts can occur between any threads in the warp.
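In other words (my addition, stated for devices of compute capability 5.x or newer), for a 32-bit word stored at byte address $\text{addr}$ in shared memory, its bank index is

$$\text{bank index} = \left\lfloor \frac{\text{addr}}{4} \right\rfloor \bmod 32$$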
To elaborate on this, let’s see how memory addresses map to memory banks using examples. The following program illustrates the idea of 1D and 2D memory address to memory bank mapping for devices of compute capability 5.x or newer.
#include <iostream>#include <memory>#include <vector>template <typename T>void bank_id_1d_mapping(int bank_size, int num_banks, int N){ for (int i{0}; i < N; ++i) { // bank_size: Bank size in bits. // 8: 8 bits per Byte. int bank_idx = (i * sizeof(T) * 8 / bank_size) % num_banks; std::cout << "Array Idx: " << i << " " << "Bank Idx: " << bank_idx << std::endl; }}template <typename T>void bank_id_2d_mapping(int bank_size, int num_banks, int M, int N){ for (int i{0}; i < M; ++i) { for (int j{0}; j < N; ++j) { int bank_idx = ((i * N + j) * sizeof(T) * 8 / bank_size) % num_banks; std::cout << "Matrix Idx: (" << i << ", " << j << ") " << "Bank Idx: " << bank_idx << std::endl; } }}int main(){ constexpr const int bank_size{32}; // bits constexpr const int num_banks{32}; const int M{4}; const int N{32}; std::cout << "Bank ID Mapping 1D: N = " << N << std::endl; bank_id_1d_mapping<float>(bank_size, num_banks, N); std::cout << "Bank 2D Mapping 1D: M = " << M << " N = " << N << std::endl; bank_id_2d_mapping<float>(bank_size, num_banks, M, N);}
$ g++ memory_bank.cpp -o memory_bank -std=c++14$ ./memory_bankBank ID Mapping 1D: N = 32Array Idx: 0 Bank Idx: 0Array Idx: 1 Bank Idx: 1Array Idx: 2 Bank Idx: 2Array Idx: 3 Bank Idx: 3Array Idx: 4 Bank Idx: 4Array Idx: 5 Bank Idx: 5Array Idx: 6 Bank Idx: 6Array Idx: 7 Bank Idx: 7Array Idx: 8 Bank Idx: 8Array Idx: 9 Bank Idx: 9Array Idx: 10 Bank Idx: 10Array Idx: 11 Bank Idx: 11Array Idx: 12 Bank Idx: 12Array Idx: 13 Bank Idx: 13Array Idx: 14 Bank Idx: 14Array Idx: 15 Bank Idx: 15Array Idx: 16 Bank Idx: 16Array Idx: 17 Bank Idx: 17Array Idx: 18 Bank Idx: 18Array Idx: 19 Bank Idx: 19Array Idx: 20 Bank Idx: 20Array Idx: 21 Bank Idx: 21Array Idx: 22 Bank Idx: 22Array Idx: 23 Bank Idx: 23Array Idx: 24 Bank Idx: 24Array Idx: 25 Bank Idx: 25Array Idx: 26 Bank Idx: 26Array Idx: 27 Bank Idx: 27Array Idx: 28 Bank Idx: 28Array Idx: 29 Bank Idx: 29Array Idx: 30 Bank Idx: 30Array Idx: 31 Bank Idx: 31Bank 2D Mapping 1D: M = 4 N = 32Matrix Idx: (0, 0) Bank Idx: 0Matrix Idx: (0, 1) Bank Idx: 1Matrix Idx: (0, 2) Bank Idx: 2Matrix Idx: (0, 3) Bank Idx: 3Matrix Idx: (0, 4) Bank Idx: 4Matrix Idx: (0, 5) Bank Idx: 5Matrix Idx: (0, 6) Bank Idx: 6Matrix Idx: (0, 7) Bank Idx: 7Matrix Idx: (0, 8) Bank Idx: 8Matrix Idx: (0, 9) Bank Idx: 9Matrix Idx: (0, 10) Bank Idx: 10Matrix Idx: (0, 11) Bank Idx: 11Matrix Idx: (0, 12) Bank Idx: 12Matrix Idx: (0, 13) Bank Idx: 13Matrix Idx: (0, 14) Bank Idx: 14Matrix Idx: (0, 15) Bank Idx: 15Matrix Idx: (0, 16) Bank Idx: 16Matrix Idx: (0, 17) Bank Idx: 17Matrix Idx: (0, 18) Bank Idx: 18Matrix Idx: (0, 19) Bank Idx: 19Matrix Idx: (0, 20) Bank Idx: 20Matrix Idx: (0, 21) Bank Idx: 21Matrix Idx: (0, 22) Bank Idx: 22Matrix Idx: (0, 23) Bank Idx: 23Matrix Idx: (0, 24) Bank Idx: 24Matrix Idx: (0, 25) Bank Idx: 25Matrix Idx: (0, 26) Bank Idx: 26Matrix Idx: (0, 27) Bank Idx: 27Matrix Idx: (0, 28) Bank Idx: 28Matrix Idx: (0, 29) Bank Idx: 29Matrix Idx: (0, 30) Bank Idx: 30Matrix Idx: (0, 31) Bank Idx: 31Matrix Idx: (1, 0) Bank Idx: 0Matrix Idx: (1, 1) Bank Idx: 1Matrix Idx: (1, 2) Bank Idx: 2Matrix Idx: (1, 3) Bank Idx: 3Matrix Idx: (1, 4) Bank Idx: 4Matrix Idx: (1, 5) Bank Idx: 5Matrix Idx: (1, 6) Bank Idx: 6Matrix Idx: (1, 7) Bank Idx: 7Matrix Idx: (1, 8) Bank Idx: 8Matrix Idx: (1, 9) Bank Idx: 9Matrix Idx: (1, 10) Bank Idx: 10Matrix Idx: (1, 11) Bank Idx: 11Matrix Idx: (1, 12) Bank Idx: 12Matrix Idx: (1, 13) Bank Idx: 13Matrix Idx: (1, 14) Bank Idx: 14Matrix Idx: (1, 15) Bank Idx: 15Matrix Idx: (1, 16) Bank Idx: 16Matrix Idx: (1, 17) Bank Idx: 17Matrix Idx: (1, 18) Bank Idx: 18Matrix Idx: (1, 19) Bank Idx: 19Matrix Idx: (1, 20) Bank Idx: 20Matrix Idx: (1, 21) Bank Idx: 21Matrix Idx: (1, 22) Bank Idx: 22Matrix Idx: (1, 23) Bank Idx: 23Matrix Idx: (1, 24) Bank Idx: 24Matrix Idx: (1, 25) Bank Idx: 25Matrix Idx: (1, 26) Bank Idx: 26Matrix Idx: (1, 27) Bank Idx: 27Matrix Idx: (1, 28) Bank Idx: 28Matrix Idx: (1, 29) Bank Idx: 29Matrix Idx: (1, 30) Bank Idx: 30Matrix Idx: (1, 31) Bank Idx: 31Matrix Idx: (2, 0) Bank Idx: 0Matrix Idx: (2, 1) Bank Idx: 1Matrix Idx: (2, 2) Bank Idx: 2Matrix Idx: (2, 3) Bank Idx: 3Matrix Idx: (2, 4) Bank Idx: 4Matrix Idx: (2, 5) Bank Idx: 5Matrix Idx: (2, 6) Bank Idx: 6Matrix Idx: (2, 7) Bank Idx: 7Matrix Idx: (2, 8) Bank Idx: 8Matrix Idx: (2, 9) Bank Idx: 9Matrix Idx: (2, 10) Bank Idx: 10Matrix Idx: (2, 11) Bank Idx: 11Matrix Idx: (2, 12) Bank Idx: 12Matrix Idx: (2, 13) Bank Idx: 13Matrix Idx: (2, 14) Bank Idx: 14Matrix Idx: (2, 15) Bank Idx: 15Matrix Idx: (2, 16) Bank Idx: 16Matrix Idx: (2, 17) Bank Idx: 17Matrix Idx: (2, 18) Bank Idx: 18Matrix Idx: (2, 19) Bank 
Idx: 19Matrix Idx: (2, 20) Bank Idx: 20Matrix Idx: (2, 21) Bank Idx: 21Matrix Idx: (2, 22) Bank Idx: 22Matrix Idx: (2, 23) Bank Idx: 23Matrix Idx: (2, 24) Bank Idx: 24Matrix Idx: (2, 25) Bank Idx: 25Matrix Idx: (2, 26) Bank Idx: 26Matrix Idx: (2, 27) Bank Idx: 27Matrix Idx: (2, 28) Bank Idx: 28Matrix Idx: (2, 29) Bank Idx: 29Matrix Idx: (2, 30) Bank Idx: 30Matrix Idx: (2, 31) Bank Idx: 31Matrix Idx: (3, 0) Bank Idx: 0Matrix Idx: (3, 1) Bank Idx: 1Matrix Idx: (3, 2) Bank Idx: 2Matrix Idx: (3, 3) Bank Idx: 3Matrix Idx: (3, 4) Bank Idx: 4Matrix Idx: (3, 5) Bank Idx: 5Matrix Idx: (3, 6) Bank Idx: 6Matrix Idx: (3, 7) Bank Idx: 7Matrix Idx: (3, 8) Bank Idx: 8Matrix Idx: (3, 9) Bank Idx: 9Matrix Idx: (3, 10) Bank Idx: 10Matrix Idx: (3, 11) Bank Idx: 11Matrix Idx: (3, 12) Bank Idx: 12Matrix Idx: (3, 13) Bank Idx: 13Matrix Idx: (3, 14) Bank Idx: 14Matrix Idx: (3, 15) Bank Idx: 15Matrix Idx: (3, 16) Bank Idx: 16Matrix Idx: (3, 17) Bank Idx: 17Matrix Idx: (3, 18) Bank Idx: 18Matrix Idx: (3, 19) Bank Idx: 19Matrix Idx: (3, 20) Bank Idx: 20Matrix Idx: (3, 21) Bank Idx: 21Matrix Idx: (3, 22) Bank Idx: 22Matrix Idx: (3, 23) Bank Idx: 23Matrix Idx: (3, 24) Bank Idx: 24Matrix Idx: (3, 25) Bank Idx: 25Matrix Idx: (3, 26) Bank Idx: 26Matrix Idx: (3, 27) Bank Idx: 27Matrix Idx: (3, 28) Bank Idx: 28Matrix Idx: (3, 29) Bank Idx: 29Matrix Idx: (3, 30) Bank Idx: 30Matrix Idx: (3, 31) Bank Idx: 31
Memory Bank Conflicts
Notice that for a 2D matrix, assuming the data type is 32-bit, if $N$ is a multiple of 32, the elements in the same column of the matrix belong to the same memory bank. This is where memory bank conflicts can easily happen in an implementation: if the threads in a warp try to access values in the same column of the matrix, there will be severe memory bank conflicts. Using some other value for $N$, such as 33, prevents the elements in the same column of the matrix from belonging to the same memory bank. So be careful about the stride of the memory bank accesses.
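In device code, the standard remedy is to pad the leading dimension of the shared memory tile by one word. A minimal sketch of this (my addition; the matrix transpose kernel earlier in this document uses the same trick) is shown below, followed by the host-side mapping program repeated with N = 33:

constexpr unsigned int TILE_DIM{32U};

__global__ void column_access_without_conflicts()
{
    // Without the "+ 1" padding, tile[0][c], tile[1][c], ..., tile[31][c]
    // would all map to bank c, and a warp reading one column would be
    // serialized into 32 conflicting accesses.
    __shared__ float tile[TILE_DIM][TILE_DIM + 1U];

    tile[threadIdx.y][threadIdx.x] = 0.0f;
    __syncthreads();

    // With a padded row stride of 33 words, element (r, c) maps to bank
    // (r * 33 + c) % 32 = (r + c) % 32, so the 32 threads of a warp reading
    // one column hit 32 different banks.
    float const value{tile[threadIdx.x][threadIdx.y]};
    (void)value;
}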
#include <iostream>#include <memory>#include <vector>template <typename T>void bank_id_1d_mapping(int bank_size, int num_banks, int N){ for (int i{0}; i < N; ++i) { int bank_idx = (i * sizeof(T) * 8 / bank_size) % num_banks; std::cout << "Array Idx: " << i << " " << "Bank Idx: " << bank_idx << std::endl; }}template <typename T>void bank_id_2d_mapping(int bank_size, int num_banks, int M, int N){ for (int i{0}; i < M; ++i) { for (int j{0}; j < N; ++j) { int bank_idx = ((i * N + j) * sizeof(T) * 8 / bank_size) % num_banks; std::cout << "Matrix Idx: (" << i << ", " << j << ") " << "Bank Idx: " << bank_idx << std::endl; } }}int main(){ constexpr const int bank_size{32}; // bits constexpr const int num_banks{32}; const int M{4}; const int N{33}; std::cout << "Bank ID Mapping 1D: N = " << N << std::endl; bank_id_1d_mapping<float>(bank_size, num_banks, N); std::cout << "Bank 2D Mapping 1D: M = " << M << " N = " << N << std::endl; bank_id_2d_mapping<float>(bank_size, num_banks, M, N);}
In practice, the additional padding column is unused and can be filled with any value. Just make sure that the implemented algorithm does not produce incorrect results by accidentally using the values in the additional column, which are not supposed to be used.
$ g++ memory_bank.cpp -o memory_bank -std=c++14$ ./memory_bankBank ID Mapping 1D: N = 33Array Idx: 0 Bank Idx: 0Array Idx: 1 Bank Idx: 1Array Idx: 2 Bank Idx: 2Array Idx: 3 Bank Idx: 3Array Idx: 4 Bank Idx: 4Array Idx: 5 Bank Idx: 5Array Idx: 6 Bank Idx: 6Array Idx: 7 Bank Idx: 7Array Idx: 8 Bank Idx: 8Array Idx: 9 Bank Idx: 9Array Idx: 10 Bank Idx: 10Array Idx: 11 Bank Idx: 11Array Idx: 12 Bank Idx: 12Array Idx: 13 Bank Idx: 13Array Idx: 14 Bank Idx: 14Array Idx: 15 Bank Idx: 15Array Idx: 16 Bank Idx: 16Array Idx: 17 Bank Idx: 17Array Idx: 18 Bank Idx: 18Array Idx: 19 Bank Idx: 19Array Idx: 20 Bank Idx: 20Array Idx: 21 Bank Idx: 21Array Idx: 22 Bank Idx: 22Array Idx: 23 Bank Idx: 23Array Idx: 24 Bank Idx: 24Array Idx: 25 Bank Idx: 25Array Idx: 26 Bank Idx: 26Array Idx: 27 Bank Idx: 27Array Idx: 28 Bank Idx: 28Array Idx: 29 Bank Idx: 29Array Idx: 30 Bank Idx: 30Array Idx: 31 Bank Idx: 31Array Idx: 32 Bank Idx: 0Bank 2D Mapping 1D: M = 4 N = 33Matrix Idx: (0, 0) Bank Idx: 0Matrix Idx: (0, 1) Bank Idx: 1Matrix Idx: (0, 2) Bank Idx: 2Matrix Idx: (0, 3) Bank Idx: 3Matrix Idx: (0, 4) Bank Idx: 4Matrix Idx: (0, 5) Bank Idx: 5Matrix Idx: (0, 6) Bank Idx: 6Matrix Idx: (0, 7) Bank Idx: 7Matrix Idx: (0, 8) Bank Idx: 8Matrix Idx: (0, 9) Bank Idx: 9Matrix Idx: (0, 10) Bank Idx: 10Matrix Idx: (0, 11) Bank Idx: 11Matrix Idx: (0, 12) Bank Idx: 12Matrix Idx: (0, 13) Bank Idx: 13Matrix Idx: (0, 14) Bank Idx: 14Matrix Idx: (0, 15) Bank Idx: 15Matrix Idx: (0, 16) Bank Idx: 16Matrix Idx: (0, 17) Bank Idx: 17Matrix Idx: (0, 18) Bank Idx: 18Matrix Idx: (0, 19) Bank Idx: 19Matrix Idx: (0, 20) Bank Idx: 20Matrix Idx: (0, 21) Bank Idx: 21Matrix Idx: (0, 22) Bank Idx: 22Matrix Idx: (0, 23) Bank Idx: 23Matrix Idx: (0, 24) Bank Idx: 24Matrix Idx: (0, 25) Bank Idx: 25Matrix Idx: (0, 26) Bank Idx: 26Matrix Idx: (0, 27) Bank Idx: 27Matrix Idx: (0, 28) Bank Idx: 28Matrix Idx: (0, 29) Bank Idx: 29Matrix Idx: (0, 30) Bank Idx: 30Matrix Idx: (0, 31) Bank Idx: 31Matrix Idx: (0, 32) Bank Idx: 0Matrix Idx: (1, 0) Bank Idx: 1Matrix Idx: (1, 1) Bank Idx: 2Matrix Idx: (1, 2) Bank Idx: 3Matrix Idx: (1, 3) Bank Idx: 4Matrix Idx: (1, 4) Bank Idx: 5Matrix Idx: (1, 5) Bank Idx: 6Matrix Idx: (1, 6) Bank Idx: 7Matrix Idx: (1, 7) Bank Idx: 8Matrix Idx: (1, 8) Bank Idx: 9Matrix Idx: (1, 9) Bank Idx: 10Matrix Idx: (1, 10) Bank Idx: 11Matrix Idx: (1, 11) Bank Idx: 12Matrix Idx: (1, 12) Bank Idx: 13Matrix Idx: (1, 13) Bank Idx: 14Matrix Idx: (1, 14) Bank Idx: 15Matrix Idx: (1, 15) Bank Idx: 16Matrix Idx: (1, 16) Bank Idx: 17Matrix Idx: (1, 17) Bank Idx: 18Matrix Idx: (1, 18) Bank Idx: 19Matrix Idx: (1, 19) Bank Idx: 20Matrix Idx: (1, 20) Bank Idx: 21Matrix Idx: (1, 21) Bank Idx: 22Matrix Idx: (1, 22) Bank Idx: 23Matrix Idx: (1, 23) Bank Idx: 24Matrix Idx: (1, 24) Bank Idx: 25Matrix Idx: (1, 25) Bank Idx: 26Matrix Idx: (1, 26) Bank Idx: 27Matrix Idx: (1, 27) Bank Idx: 28Matrix Idx: (1, 28) Bank Idx: 29Matrix Idx: (1, 29) Bank Idx: 30Matrix Idx: (1, 30) Bank Idx: 31Matrix Idx: (1, 31) Bank Idx: 0Matrix Idx: (1, 32) Bank Idx: 1Matrix Idx: (2, 0) Bank Idx: 2Matrix Idx: (2, 1) Bank Idx: 3Matrix Idx: (2, 2) Bank Idx: 4Matrix Idx: (2, 3) Bank Idx: 5Matrix Idx: (2, 4) Bank Idx: 6Matrix Idx: (2, 5) Bank Idx: 7Matrix Idx: (2, 6) Bank Idx: 8Matrix Idx: (2, 7) Bank Idx: 9Matrix Idx: (2, 8) Bank Idx: 10Matrix Idx: (2, 9) Bank Idx: 11Matrix Idx: (2, 10) Bank Idx: 12Matrix Idx: (2, 11) Bank Idx: 13Matrix Idx: (2, 12) Bank Idx: 14Matrix Idx: (2, 13) Bank Idx: 15Matrix Idx: (2, 14) Bank Idx: 16Matrix Idx: (2, 15) Bank Idx: 17Matrix Idx: (2, 16) Bank Idx: 
18Matrix Idx: (2, 17) Bank Idx: 19Matrix Idx: (2, 18) Bank Idx: 20Matrix Idx: (2, 19) Bank Idx: 21Matrix Idx: (2, 20) Bank Idx: 22Matrix Idx: (2, 21) Bank Idx: 23Matrix Idx: (2, 22) Bank Idx: 24Matrix Idx: (2, 23) Bank Idx: 25Matrix Idx: (2, 24) Bank Idx: 26Matrix Idx: (2, 25) Bank Idx: 27Matrix Idx: (2, 26) Bank Idx: 28Matrix Idx: (2, 27) Bank Idx: 29Matrix Idx: (2, 28) Bank Idx: 30Matrix Idx: (2, 29) Bank Idx: 31Matrix Idx: (2, 30) Bank Idx: 0Matrix Idx: (2, 31) Bank Idx: 1Matrix Idx: (2, 32) Bank Idx: 2Matrix Idx: (3, 0) Bank Idx: 3Matrix Idx: (3, 1) Bank Idx: 4Matrix Idx: (3, 2) Bank Idx: 5Matrix Idx: (3, 3) Bank Idx: 6Matrix Idx: (3, 4) Bank Idx: 7Matrix Idx: (3, 5) Bank Idx: 8Matrix Idx: (3, 6) Bank Idx: 9Matrix Idx: (3, 7) Bank Idx: 10Matrix Idx: (3, 8) Bank Idx: 11Matrix Idx: (3, 9) Bank Idx: 12Matrix Idx: (3, 10) Bank Idx: 13Matrix Idx: (3, 11) Bank Idx: 14Matrix Idx: (3, 12) Bank Idx: 15Matrix Idx: (3, 13) Bank Idx: 16Matrix Idx: (3, 14) Bank Idx: 17Matrix Idx: (3, 15) Bank Idx: 18Matrix Idx: (3, 16) Bank Idx: 19Matrix Idx: (3, 17) Bank Idx: 20Matrix Idx: (3, 18) Bank Idx: 21Matrix Idx: (3, 19) Bank Idx: 22Matrix Idx: (3, 20) Bank Idx: 23Matrix Idx: (3, 21) Bank Idx: 24Matrix Idx: (3, 22) Bank Idx: 25Matrix Idx: (3, 23) Bank Idx: 26Matrix Idx: (3, 24) Bank Idx: 27Matrix Idx: (3, 25) Bank Idx: 28Matrix Idx: (3, 26) Bank Idx: 29Matrix Idx: (3, 27) Bank Idx: 30Matrix Idx: (3, 28) Bank Idx: 31Matrix Idx: (3, 29) Bank Idx: 0Matrix Idx: (3, 30) Bank Idx: 1Matrix Idx: (3, 31) Bank Idx: 2Matrix Idx: (3, 32) Bank Idx: 3
The listings above give an example of how memory bank conflicts arise from inappropriate strides.
CUDA Kernel Execution Overlap
Introduction
In my previous blog post “CUDA Stream”, I discussed how CUDA streams help a CUDA program achieve concurrency. At the end of that article, I also mentioned that, in addition to overlapping memory transfers with kernel execution, execution overlap between different kernels is also allowed. However, many CUDA programmers wonder why they have never encountered kernel execution overlap in practice.
In this blog post, I would like to discuss CUDA kernel execution overlap and why we may or may not see it in practice.
CUDA Kernel Execution Overlap
Computation Resources
CUDA kernel executions can overlap if there are sufficient computation resources to parallelize multiple kernel executions.
In the following example, by changing the value of blocks_per_grid from small to large, we can see that the kernel executions from different CUDA streams change from full parallelization, to partial parallelization, and finally to almost no parallelization. This is because, as the computation resources allocated to one CUDA kernel become larger, fewer computation resources remain for additional CUDA kernels.
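One way to reason about how much of the GPU a single launch occupies is to query the kernel's occupancy. The following sketch (my addition; float_add_one and CHECK_CUDA_ERROR are defined in the full benchmark program below) estimates how many blocks can be resident at once:

// Blocks of float_add_one that can be resident per SM for this block size.
int num_blocks_per_sm{0};
int const threads_per_block{1024};
CHECK_CUDA_ERROR(cudaOccupancyMaxActiveBlocksPerMultiprocessor(
    &num_blocks_per_sm, float_add_one, threads_per_block, 0));

cudaDeviceProp prop{};
CHECK_CUDA_ERROR(cudaGetDeviceProperties(&prop, 0));
int const max_resident_blocks{num_blocks_per_sm * prop.multiProcessorCount};
// If blocks_per_grid is much smaller than max_resident_blocks, kernels
// launched from other streams still have room to execute concurrently.
std::cout << "Max resident blocks: " << max_resident_blocks << std::endl;

The full benchmark program is below.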
#include <cuda_runtime.h>#include <iostream>#include <vector>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)void checkLast(const char* const file, const int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}__global__ void float_add_one(float* buffer, uint32_t n){ uint32_t const idx{blockDim.x * blockIdx.x + threadIdx.x}; uint32_t const stride{blockDim.x * gridDim.x}; for (uint32_t i{idx}; i < n; i += stride) { buffer[i] += 1.0F; }}void launch_float_add_one(float* buffer, uint32_t n, dim3 const& threads_per_block, dim3 const& blocks_per_grid, cudaStream_t stream){ float_add_one<<<blocks_per_grid, threads_per_block, 0, stream>>>(buffer, n); CHECK_LAST_CUDA_ERROR();}int main(int argc, char** argv){ size_t const buffer_size{1024 * 10240}; size_t const num_streams{5}; dim3 const threads_per_block{1024}; // Try different values for blocks_per_grid // 1, 2, 4, 8, 16, 32, 1024, 2048 dim3 const blocks_per_grid{32}; std::vector<float*> d_buffers(num_streams); std::vector<cudaStream_t> streams(num_streams); for (auto& d_buffer : d_buffers) { CHECK_CUDA_ERROR(cudaMalloc(&d_buffer, buffer_size * sizeof(float))); } for (auto& stream : streams) { CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); } for (size_t i = 0; i < num_streams; ++i) { launch_float_add_one(d_buffers[i], buffer_size, threads_per_block, blocks_per_grid, streams[i]); } for (auto& stream : streams) { CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); } for (auto& d_buffer : d_buffers) { CHECK_CUDA_ERROR(cudaFree(d_buffer)); } for (auto& stream : streams) { CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); } return 0;}
$ nvcc overlap.cu -o overlap
$ ./overlap
We observed full parallelization for blocks_per_grid = 1. However, the time spent finishing all the kernels was long because the GPU was not fully utilized.
When we set blocks_per_grid = 32, only some of the kernel executions were parallelized. However, the GPU was fully utilized and the time spent finishing all the kernels was much shorter than with blocks_per_grid = 1.
As with blocks_per_grid = 32, when we set blocks_per_grid = 5120, almost no kernel executions were parallelized. However, the GPU was still fully utilized and the time spent finishing all the kernels was much shorter than with blocks_per_grid = 1.
Implicit Synchronization
It is also possible that there is no kernel execution overlap even when there are sufficient computation resources. This can happen when CUDA commands issued by the host thread to the default stream are interleaved between CUDA commands from other streams, causing implicit synchronization.
In my opinion, this rarely happens in single-threaded CUDA programs due to the way CUDA programmers usually write them. However, it can definitely happen in multi-threaded CUDA programs. To overcome this situation, CUDA 7 introduced a per-thread default stream compile mode. The user only has to specify --default-stream per-thread in the NVCC build flags, without having to change the existing CUDA program, to disable the implicit synchronization. To see more details about how to simplify CUDA concurrency using the per-thread default stream, please read Mark Harris’s blog post.
As of CUDA 11.4, the default build argument is still legacy. The user has to manually change it to per-thread in order to use the per-thread default stream. From the CUDA 11.4 NVCC help:
--default-stream {legacy|null|per-thread}        (-default-stream)
        Specify the stream that CUDA commands from the compiled program will be
        sent to by default.
                legacy
                        The CUDA legacy stream (per context, implicitly synchronizes
                        with other streams).
                per-thread
                        A normal CUDA stream (per thread, does not implicitly
                        synchronize with other streams).
                'null' is a deprecated alias for 'legacy'.
        Allowed values for this option:  'legacy','null','per-thread'.
        Default value:  'legacy'.
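For example, the overlap example above can be rebuilt with the per-thread default stream without any source changes (the flag is taken from the help text above):

$ nvcc overlap.cu -o overlap --default-stream per-thread
$ ./overlap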
Conclusions
If there is no implicit synchronization from the default CUDA stream, partial or no CUDA kernel execution parallelization usually indicates high GPU utilization, whereas full CUDA kernel execution parallelization usually indicates that the GPU might not have been fully utilized.
If the lack of CUDA kernel execution overlap is due to implicit synchronization from the default CUDA stream, we should consider disabling it by enabling the per-thread default stream.
References
CUDA Kernel Execution Overlap
https://leimao.github.io/blog/CUDA-Kernel-Execution-Overlap/
Proper CUDA Error Checking
Introduction
Proper CUDA error checking is critical for making the CUDA program development smooth and successful. Missing or incorrectly identifying CUDA errors could cause problems in production or waste lots of time in debugging.
In this blog post, I would like to quickly discuss proper CUDA error checking.
CUDA Error Types
CUDA errors could be separated into synchronous and asynchronous errors, or sticky and non-sticky errors.
Synchronous Error VS Asynchronous Error
CUDA kernel launches are asynchronous: when the host thread reaches the kernel launch code, say kernel<<<...>>>, it issues a request to execute the kernel on the GPU and then continues without waiting for the kernel to complete. The kernel might not begin to execute right away either.
There could be two types of error for CUDA kernel launch, synchronous error and asynchronous error.
A synchronous error happens when the host thread knows that the kernel launch is illegal or invalid. For example, when the thread block size or grid size is too large, a synchronous error results immediately after the kernel launch call, and this error can be captured by CUDA runtime error capturing API calls, such as cudaGetLastError, placed right after the kernel launch call.
An asynchronous error happens during kernel execution or asynchronous CUDA runtime API execution on the GPU, and it might take a while for the error to be reported back to the host thread. For example, the kernel or an asynchronous CUDA runtime API call such as cudaMemcpyAsync might access an invalid memory address late in its execution; the execution is aborted and the error is then sent back to the host. Even if there are CUDA runtime error capturing API calls, such as cudaGetLastError, right after the kernel launch call, by the time the error reaches the host those calls have already executed and found no error. The asynchronous error can be captured by explicitly synchronizing with the kernel using CUDA runtime API calls such as cudaDeviceSynchronize, cudaStreamSynchronize, or cudaEventSynchronize, and then checking the error returned by those calls or capturing it with cudaGetLastError. However, explicit synchronization usually hurts performance and is therefore not recommended in production unless absolutely necessary.
Sticky VS Non-Sticky Error
CUDA runtime API calls return non-sticky errors, whereas errors produced during CUDA kernel execution are sticky.
A non-sticky error is recoverable, meaning subsequent CUDA runtime API calls can behave normally and the CUDA context is not corrupted. For example, when we allocate memory using cudaMalloc, it returns a non-sticky error if the GPU memory is insufficient.
A sticky error is not recoverable, meaning subsequent CUDA runtime API calls will always return the same error, because the CUDA context is corrupted; the only way to recover is to terminate the application host process. For example, if the kernel accesses an invalid memory address during execution, it results in a sticky error that will be captured and returned by all subsequent CUDA runtime API calls.
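A minimal sketch of the sticky behavior (my own example, not from the original post): the kernel below dereferences a null device pointer, the asynchronous error surfaces at synchronization, and afterwards even a perfectly valid cudaMalloc keeps failing with the same sticky error because the context has been corrupted.

#include <cuda_runtime.h>
#include <iostream>

__global__ void write_to_pointer(int* p)
{
    // p is a null device pointer, so this is an illegal memory access.
    *p = 42;
}

int main()
{
    write_to_pointer<<<1, 1>>>(nullptr);
    // The asynchronous (sticky) error is reported at synchronization.
    std::cout << "Sync:   " << cudaGetErrorString(cudaDeviceSynchronize())
              << std::endl;
    // The context is now corrupted, so even a valid allocation fails with
    // the same sticky error.
    int* d_p{nullptr};
    std::cout << "Malloc: " << cudaGetErrorString(cudaMalloc(&d_p, sizeof(int)))
              << std::endl;
    return 0;
}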
CUDA Error Checking Best Practice
In a CUDA program, for both development and production code: always check the return value of every CUDA runtime API call, synchronous or asynchronous, to catch synchronous errors; always call a CUDA runtime error capturing API, such as cudaGetLastError, after each kernel launch to catch synchronous launch errors; and check for asynchronous errors in development by synchronizing after kernel launches and checking the returned error, but disable that synchronization in production.
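One way to structure the debug-only asynchronous check is sketched below (the macro name is mine): in debug builds it synchronizes the device and checks the returned error, and in release builds (NDEBUG defined) it compiles to nothing, so the synchronization cost disappears in production. It would be used right after a kernel launch, paired with a cudaGetLastError-based check for synchronous errors.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

#ifndef NDEBUG
#define CHECK_LAST_CUDA_ERROR_ASYNC()                                         \
    do                                                                        \
    {                                                                         \
        /* Synchronize so that asynchronous errors from the preceding   */    \
        /* kernel launch are reported here, in debug builds only.       */    \
        cudaError_t const err{cudaDeviceSynchronize()};                       \
        if (err != cudaSuccess)                                               \
        {                                                                     \
            std::fprintf(stderr, "CUDA Async Error at: %s:%d %s\n", __FILE__, \
                         __LINE__, cudaGetErrorString(err));                  \
            std::exit(EXIT_FAILURE);                                          \
        }                                                                     \
    } while (0)
#else
#define CHECK_LAST_CUDA_ERROR_ASYNC() ((void)0)
#endif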
Quiz
There is a question on the NVIDIA developer forum. Let's use it as a quiz. Basically, the user has the following code, where all calculations are done on the default stream and a single thread. cudaDeviceSynchronize returns cudaSuccess, but the cudaGetLastError call returns an invalid device function error. How could this happen?
// do some stuff, launch kernels, etc
res = cudaDeviceSynchronize();
// check res
res = cudaGetLastError();
// check res
cudaGetLastError returns the last error that has been produced by any of the runtime calls in the same host thread and resets it to cudaSuccess. cudaDeviceSynchronize is a CUDA runtime API call and it returned no error, which means the kernel execution produced no asynchronous error. However, a CUDA runtime API call made before the kernel launch, or the kernel launch itself, could have produced a synchronous error that was never checked. That last error is not reset until the cudaGetLastError call, even if other CUDA runtime API calls in between returned cudaSuccess.
For example,
#include <cuda_runtime.h>
#include <iostream>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
void check(cudaError_t err, const char* const func, const char* const file,
           const int line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
        // We don't exit when we encounter CUDA errors in this example.
        // std::exit(EXIT_FAILURE);
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(const char* const file, const int line)
{
    cudaError_t const err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
        // We don't exit when we encounter CUDA errors in this example.
        // std::exit(EXIT_FAILURE);
    }
}

int main()
{
    float* p;
    // This will produce error.
    CHECK_CUDA_ERROR(cudaMalloc(&p, 1000000000000000 * sizeof(float)));
    // This will be successful.
    CHECK_CUDA_ERROR(cudaMalloc(&p, 10 * sizeof(float)));
    // This will be successful.
    CHECK_CUDA_ERROR(cudaFree(p));
    // The last error still has not been reset here.
    // This will produce the same error as
    // cudaMalloc(&p, 1000000000000000 * sizeof(float))
    CHECK_LAST_CUDA_ERROR();
    // The last error has been reset here.
    CHECK_LAST_CUDA_ERROR();
}
$ nvcc last_error.cu -o last_error
$ ./last_error
CUDA Runtime Error at: last_error.cu:37
out of memory cudaMalloc(&p, 1000000000000000 * sizeof(float))
CUDA Runtime Error at: last_error.cu:45
out of memory
Fundamentally, this happened because the CUDA error checking in the program did not follow the best practice described above.
References
Proper CUDA Error Checking
https://leimao.github.io/blog/Proper-CUDA-Error-Checking/
CUDA Compilation Architecture Macro
Introduction
In C++, macros are often used to control which code gets compiled for different use cases. Similarly, in CUDA, it is often necessary to compile the same source file for different GPU architectures.
In this blog post, I would like to quickly discuss how to use NVCC compilation architecture macro to control the compilation for different GPU architectures.
Half Addition Example
According to the CUDA arithmetic instructions documentation, the FP16 add instruction can only be used with compute capability >= 5.3.
In this example, with the architecture macro, different FP16 add implementations can be selected for different virtual GPU architectures.
No Architecture Macro
Without the architecture macro, we cannot vary the device-side implementation across virtual GPU architectures.
123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134
#include <cmath>#include <cstdint>#include <cuda_fp16.h>#include <functional>#include <iomanip>#include <iostream>#include <vector>#define checkCuda(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_cuda_performance(std::function<T(void)> bound_function, const int num_repeats = 100, const int num_warmups = 100){ cudaEvent_t start; cudaEvent_t stop; cudaEventCreate(&start); cudaEventCreate(&stop); for (int i{0}; i < num_warmups; ++i) { bound_function(); } cudaDeviceSynchronize(); cudaEventRecord(start, 0); for (int i{0}; i < num_repeats; ++i) { bound_function(); } cudaEventRecord(stop, 0); cudaEventSynchronize(stop); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA half addition kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } float time_elapsed{0.0f}; cudaEventElapsedTime(&time_elapsed, start, stop); float latency{time_elapsed / static_cast<float>(num_repeats)}; return latency;}__global__ void half_addition(__half* output, __half const* input_1, __half const* input_2, uint32_t const n){ const uint32_t idx{blockDim.x * blockIdx.x + threadIdx.x}; const uint32_t stride{blockDim.x * gridDim.x}; for (uint32_t i{idx}; i < n; i += stride) { output[i] = __hadd(input_1[i], input_2[i]); }}void launch_half_addition(__half* output, __half const* input_1, __half const* input_2, uint32_t const n){ dim3 threads_per_block{1024}; dim3 blocks_per_grid{32}; half_addition<<<blocks_per_grid, threads_per_block>>>(output, input_1, input_2, n);}int main(){ constexpr uint32_t n{100000}; constexpr float a{1.0f}, b{2.0f}, c{3.0f}; std::vector<__half> input_1(n, __float2half(a)); std::vector<__half> input_2(n, __float2half(b)); std::vector<__half> output(n, __float2half(0.0f)); __half *d_input_1, *d_input_2, *d_output; checkCuda(cudaMalloc(&d_input_1, n * sizeof(__half))); checkCuda(cudaMalloc(&d_input_2, n * sizeof(__half))); checkCuda(cudaMalloc(&d_output, n * sizeof(__half))); checkCuda(cudaMemcpy(d_input_1, input_1.data(), n * sizeof(__half), cudaMemcpyHostToDevice)); checkCuda(cudaMemcpy(d_input_2, input_2.data(), n * sizeof(__half), cudaMemcpyHostToDevice)); launch_half_addition(d_output, d_input_1, d_input_2, n); cudaDeviceSynchronize(); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA half addition kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } checkCuda(cudaMemcpy(output.data(), d_output, n * sizeof(__half), cudaMemcpyDeviceToHost)); for (uint32_t i{0}; i < n; ++i) { if (std::abs(__half2float(output.at(i)) - c) > 1e-8) { std::cerr << "CUDA half addition kernel implementation has error!." << std::endl; std::cout << "Expect " << c << " at position " << i << " but got " << __half2float(output.at(i)) << std::endl; break; } } std::function<void(void)> bound_function{ std::bind(launch_half_addition, d_output, d_input_1, d_input_2, n)}; float latency{measure_cuda_performance(bound_function, 100, 100)}; std::cout << std::fixed << std::setprecision(5) << "Latency: " << latency << " ms" << std::endl;}
Although compiling the FP16 addition program against the virtual GPU architecture compute_52 did not produce a compilation error, the runtime sanity check shows that the CUDA kernel has issues. Compiling the same program against the virtual GPU architecture compute_53 is fine. This is expected because the FP16 add instruction __hadd can only be used with virtual GPU architecture >= 5.3.
$ nvcc half_addition_no_macro.cu -o half_addition_no_macro --gpu-architecture=compute_52
$ ./half_addition_no_macro
CUDA half addition kernel implementation has error!.
Expect 3 at position 0 but got 1
Latency: 0.00313 ms
$ nvcc half_addition_no_macro.cu -o half_addition_no_macro --gpu-architecture=compute_53
$ ./half_addition_no_macro
Latency: 0.00286 ms
With Architecture Macro
For virtual GPU architectures < 5.3, if we care less about performance, we can still perform the FP16 addition by converting the FP16 values to FP32, performing the addition in FP32, and converting the FP32 sum back to FP16. __CUDA_ARCH__ is the architecture macro representing the virtual GPU architecture.
123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134135136137
#include <cmath>#include <cstdint>#include <cuda_fp16.h>#include <functional>#include <iomanip>#include <iostream>#include <vector>#define checkCuda(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_cuda_performance(std::function<T(void)> bound_function, const int num_repeats = 100, const int num_warmups = 100){ cudaEvent_t start; cudaEvent_t stop; cudaEventCreate(&start); cudaEventCreate(&stop); for (int i{0}; i < num_warmups; ++i) { bound_function(); } cudaDeviceSynchronize(); cudaEventRecord(start, 0); for (int i{0}; i < num_repeats; ++i) { bound_function(); } cudaEventRecord(stop, 0); cudaEventSynchronize(stop); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA half addition kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } float time_elapsed{0.0f}; cudaEventElapsedTime(&time_elapsed, start, stop); float latency{time_elapsed / static_cast<float>(num_repeats)}; return latency;}__global__ void half_addition(__half* output, __half const* input_1, __half const* input_2, uint32_t const n){ const uint32_t idx{blockDim.x * blockIdx.x + threadIdx.x}; const uint32_t stride{blockDim.x * gridDim.x}; for (uint32_t i{idx}; i < n; i += stride) {#if __CUDA_ARCH__ >= 530 output[i] = __hadd(input_1[i], input_2[i]);#else output[i] = __float2half(__half2float(input_1[i]) + __half2float(input_2[i]));#endif }}void launch_half_addition(__half* output, __half const* input_1, __half const* input_2, uint32_t const n){ dim3 threads_per_block{1024}; dim3 blocks_per_grid{32}; half_addition<<<blocks_per_grid, threads_per_block>>>(output, input_1, input_2, n);}int main(){ constexpr uint32_t n{100000}; constexpr float a{1.0f}, b{2.0f}, c{3.0f}; std::vector<__half> input_1(n, __float2half(a)); std::vector<__half> input_2(n, __float2half(b)); std::vector<__half> output(n, __float2half(0.0f)); __half *d_input_1, *d_input_2, *d_output; checkCuda(cudaMalloc(&d_input_1, n * sizeof(__half))); checkCuda(cudaMalloc(&d_input_2, n * sizeof(__half))); checkCuda(cudaMalloc(&d_output, n * sizeof(__half))); checkCuda(cudaMemcpy(d_input_1, input_1.data(), n * sizeof(__half), cudaMemcpyHostToDevice)); checkCuda(cudaMemcpy(d_input_2, input_2.data(), n * sizeof(__half), cudaMemcpyHostToDevice)); launch_half_addition(d_output, d_input_1, d_input_2, n); cudaDeviceSynchronize(); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA half addition kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } checkCuda(cudaMemcpy(output.data(), d_output, n * sizeof(__half), cudaMemcpyDeviceToHost)); for (uint32_t i{0}; i < n; ++i) { if (std::abs(__half2float(output.at(i)) - c) > 1e-8) { std::cerr << "CUDA half addition kernel implementation has error!." 
<< std::endl; std::cout << "Expect " << c << " at position " << i << " but got " << __half2float(output.at(i)) << std::endl; break; } } std::function<void(void)> bound_function{ std::bind(launch_half_addition, d_output, d_input_1, d_input_2, n)}; float latency{measure_cuda_performance(bound_function, 100, 100)}; std::cout << std::fixed << std::setprecision(5) << "Latency: " << latency << " ms" << std::endl;}
In this implementation, when the virtual GPU architecture is compute_52, __float2half(__half2float(input_1[i]) + __half2float(input_2[i])) will be used for compilation; when the virtual GPU architecture is compute_53, __hadd(input_1[i], input_2[i]) will be used for compilation.
$ nvcc half_addition_with_macro.cu -o half_addition_with_macro --gpu-architecture=compute_52
$ ./half_addition_with_macro
Latency: 0.00305 ms
$ nvcc half_addition_with_macro.cu -o half_addition_with_macro --gpu-architecture=compute_53
$ ./half_addition_with_macro
Latency: 0.00292 ms
Caveats
This macro can be used in the implementation of GPU functions to determine the virtual architecture for which the code is currently being compiled. The host code (the non-GPU code) must not depend on it. This means the __CUDA_ARCH__ macro should only be used inside functions decorated with __device__ or __global__.
In the following example, we could see that the __CUDA_ARCH__ macro is useless inside a host function.
#include <iostream>

void test_host_function()
{
#if __CUDA_ARCH__ >= 530
    std::cout << "__CUDA_ARCH__ >= 530" << std::endl;
#else
    std::cout << "__CUDA_ARCH__ < 530" << std::endl;
#endif
}

int main() { test_host_function(); }
$ nvcc host.cu -o host --gpu-architecture=compute_52
$ ./host
__CUDA_ARCH__ < 530
$ nvcc host.cu -o host --gpu-architecture=compute_53
$ ./host
__CUDA_ARCH__ < 530
References
CUDA Compilation Architecture Macro
https://leimao.github.io/blog/CUDA-Compilation-Architecture-Macro/
Multi-Thread Single-Stream VS Single-Thread Multi-Stream CUDA
Introduction
In CUDA programming, to maximize GPU utilization, we often use multiple CUDA streams. This raises a question: should we implement the CUDA program in a multi-threaded fashion where each thread uses one CUDA stream, or in a single-threaded fashion where the single thread uses multiple CUDA streams?
In this blog post, I implemented a high-performance addition program and compared the performance between multi-thread single-stream CUDA and single-thread multi-stream CUDA.
High-Performance Addition
In this example, I implemented the array addition using CPU and CUDA. We could adjust the array size, number of additions to perform, number of threads, and number of CUDA streams per thread, and measure the performance latency.
All the tests were performed on an x86-64 Ubuntu 20.04 LTS desktop with Intel i9-9900K CPU and NVIDIA RTX 2080 TI GPU.
123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179180181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255
#include <algorithm>#include <cassert>#include <chrono>#include <cstddef>#include <cstdio>#include <cstdlib>#include <iomanip>#include <iostream>#include <memory>#include <thread>#include <vector>#include <cuda_runtime.h>cudaError_t checkCuda(cudaError_t status){#if defined(NDEBUG) || defined(_NDEBUG) if (status != cudaSuccess) { std::cerr << "CUDA Runtime Error: " << cudaGetErrorString(status) << std::endl; std::exit(EXIT_FAILURE); }#endif return status;}__global__ void add_n_kernel(int* ptr, const unsigned int n, const size_t size){ const size_t idx = blockIdx.x * blockDim.x + threadIdx.x; const size_t stride = blockDim.x * gridDim.x; for (size_t i = idx; i < size; i += stride) { for (unsigned int j = 0; j < n; j++) { ptr[i] += 1; } }}void cpu_add_n(int* ptr, const unsigned int n, const size_t size){ for (size_t i = 0; i < size; i++) { for (unsigned int j = 0; j < n; j++) { ptr[i] += 1; } }}void cuda_add_n(int* h_data, int* d_data, const unsigned int n, const size_t size, const unsigned int num_streams, cudaStream_t* streams){ const size_t block_size{256}; const size_t stream_size{size / num_streams}; size_t grid_size = 1; if (stream_size / block_size != 0) { grid_size = stream_size / block_size; } const size_t stream_bytes{stream_size * sizeof(int)}; for (unsigned int i = 0; i < num_streams - 1; i++) { const size_t offset = i * stream_size; checkCuda(cudaMemcpyAsync(d_data + offset, h_data + offset, stream_bytes, cudaMemcpyHostToDevice, streams[i])); add_n_kernel<<<grid_size, block_size, 0, streams[i]>>>(d_data + offset, n, stream_size); checkCuda(cudaMemcpyAsync(h_data + offset, d_data + offset, stream_bytes, cudaMemcpyDeviceToHost, streams[i])); } const size_t stream_size_remain = size - (num_streams - 1) * stream_size; const size_t stream_bytes_remain = stream_size_remain * sizeof(int); const size_t offset = (num_streams - 1) * stream_size; checkCuda(cudaMemcpyAsync(d_data + offset, h_data + offset, stream_bytes_remain, cudaMemcpyHostToDevice, streams[num_streams - 1])); add_n_kernel<<<grid_size, block_size, 0, streams[num_streams - 1]>>>( d_data + offset, n, stream_size_remain); checkCuda(cudaMemcpyAsync(h_data + offset, d_data + offset, stream_bytes_remain, cudaMemcpyDeviceToHost, streams[num_streams - 1])); return;}void thread_add_n(int* h_data, int* d_data, const unsigned int n, const size_t size, const unsigned int num_streams, cudaStream_t* streams){ // CPU add if (num_streams == 0) { cpu_add_n(h_data, n, size); } // CUDA add else { cuda_add_n(h_data, d_data, n, size, num_streams, streams); } return;}// Multithread add_n// Each thread uses n streamvoid multithread_add_n(int* h_data, int* d_data, const unsigned int n, const size_t size, const unsigned int num_threads, const unsigned int num_streams_per_thread, const bool verbose, const unsigned int num_tests){ const unsigned int num_streams{num_threads * num_streams_per_thread}; std::vector<cudaStream_t> streams(num_streams); for (unsigned int i = 0; i < streams.size(); i++) { checkCuda(cudaStreamCreate(&streams.at(i))); } float duration_total = 0; for (int k = 0; k < num_tests; k++) { std::vector<std::thread> threads; const size_t thread_size{size / num_threads}; std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now(); for (unsigned int i = 0; i < num_threads - 1; i++) { const size_t offset = i * thread_size; threads.emplace_back(thread_add_n, h_data + offset, d_data + offset, n, thread_size, num_streams_per_thread, streams.data() + i * num_streams_per_thread); } const size_t thread_size_remain = size - 
(num_threads - 1) * thread_size; const size_t offset = (num_threads - 1) * thread_size; threads.emplace_back(thread_add_n, h_data + offset, d_data + offset, n, thread_size_remain, num_streams_per_thread, streams.data() + (num_threads - 1) * num_streams_per_thread); for (unsigned int i = 0; i < num_streams; i++) { checkCuda(cudaStreamSynchronize(streams.at(i))); } for (unsigned int i = 0; i < num_threads; i++) { threads.at(i).join(); } std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now(); duration_total += std::chrono::duration_cast<std::chrono::microseconds>(end - begin) .count(); } for (unsigned int i = 0; i < streams.size(); i++) { checkCuda(cudaStreamDestroy(streams.at(i))); } if (verbose) { std::cout << "Average Latency: " << std::setprecision(2) << std::fixed << duration_total / 1000 / num_tests << " ms" << std::endl; } return;}bool verify_add_n(const std::vector<int>& vector, const std::vector<int>& vector_original, const unsigned int n){ if (vector.size() != vector_original.size()) { return false; } for (size_t i = 0; i < vector.size(); i++) { if (vector.at(i) - vector_original.at(i) != n) { return false; } } return true;}int main(int argc, char* argv[]){ size_t size{10000000}; // 10 ** 7 unsigned int n{100}; unsigned int num_threads{1}; unsigned int num_streams_per_thread{16}; if (argc == 5) { size = atoi(argv[1]); n = atoi(argv[2]); num_threads = atoi(argv[3]); num_streams_per_thread = atoi(argv[4]); } std::cout << "Array Size: " << size << std::endl; std::cout << "Number of Additions: " << n << std::endl; std::cout << "Number of Threads: " << num_threads << std::endl; std::cout << "Number of Streams Per Thread: " << num_streams_per_thread << std::endl; // Set CUDA device checkCuda(cudaSetDevice(0)); // Create a vector and initialize it with zeros std::vector<int> vector(size, 0); std::vector<int> vector_clone{vector}; int* h_data; int* d_data; const size_t bytes = size * sizeof(int); // Create pinned memory checkCuda(cudaMallocHost((void**)&h_data, bytes)); checkCuda(cudaMalloc((void**)&d_data, bytes)); checkCuda(cudaMemcpy((void*)h_data, (void*)vector.data(), bytes, cudaMemcpyHostToHost)); multithread_add_n(h_data, d_data, n, size, num_threads, num_streams_per_thread, false, 1); assert(verify_add_n(vector, vector_clone, n) && "The add_n implementation is incorrect."); // Warm up multithread_add_n(h_data, d_data, n, size, num_threads, num_streams_per_thread, false, 100); // Measure latency multithread_add_n(h_data, d_data, n, size, num_threads, num_streams_per_thread, true, 1000); checkCuda(cudaFree(d_data)); checkCuda(cudaFreeHost(h_data)); // Reserved for cuda-memcheck cudaDeviceReset();}
To build the application, please run the following command in the terminal.
$ docker run -it --rm --gpus all --privileged -v $(pwd):/mnt -w /mnt nvcr.io/nvidia/cuda:11.4.1-cudnn8-devel-ubuntu20.04
$ cd /mnt/
$ nvcc -o add add.cu -lpthread
The application takes four arguments: the array size, the number of additions to perform, the number of threads, and the number of CUDA streams per thread.
For example, ./add 100 10 8 1 means running the application for an array of size 100, performing addition 10 times, distributed across 8 threads and each thread uses 1 CUDA stream.
$ ./add 100 10 8 1
Array Size: 100
Number of Additions: 10
Number of Threads: 8
Number of Streams Per Thread: 1
Average Latency: 0.14 ms
Similarly, ./add 100 10 8 0 means running the application for an array of size 100, performing addition 10 times, distributed across 8 threads using CPU only.
$ ./add 100 10 8 0
Array Size: 100
Number of Additions: 10
Number of Threads: 8
Number of Streams Per Thread: 0
Average Latency: 0.10 ms
Math-Bound VS Memory-Bound
In my previous blog post “Math-Bound VS Memory-Bound Operations”, we have discussed math-bound and memory-bound operations. In our particular program, we could adjust the operation to be math-bound or memory-bound by adjusting the number of additions.
From the performance measured, we could see that GPU is extremely good at performing math-bound operations, whereas for memory-bound operations GPU did not show significant advantages.
$ ./add 10000000 100 16 0
Array Size: 10000000
Number of Additions: 100
Number of Threads: 16
Number of Streams Per Thread: 0
Average Latency: 176.47 ms
$ ./add 10000000 100 16 1
Array Size: 10000000
Number of Additions: 100
Number of Threads: 16
Number of Streams Per Thread: 1
Average Latency: 10.93 ms
$ ./add 10000000 1 16 0
Array Size: 10000000
Number of Additions: 1
Number of Threads: 16
Number of Streams Per Thread: 0
Average Latency: 2.90 ms
$ ./add 10000000 1 16 1
Array Size: 10000000
Number of Additions: 1
Number of Threads: 16
Number of Streams Per Thread: 1
Average Latency: 12.20 ms
In fact, even performing the addition $100$ times does not make the operation math-bound on the GPU. The time spent executing the kernel is only $1.48\%$, whereas the rest of the time is spent on memory copies.
$ nvprof ./add 10000000 100 16 1
Array Size: 10000000
Number of Additions: 100
Number of Threads: 16
Number of Streams Per Thread: 1
==37708== NVPROF is profiling process 37708, command: ./add 10000000 100 16 1
Average Latency: 18.62 ms
==37708== Profiling application: ./add 10000000 100 16 1
==37708== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   49.61%  488.55ms      1616  302.32us  193.19us  557.79us  [CUDA memcpy DtoH]
                   48.91%  481.69ms      1616  298.08us  197.07us  578.39us  [CUDA memcpy HtoD]
                    1.48%  14.574ms      1616  9.0180us  7.2960us  30.082us  add_n_kernel(int*, unsigned int, unsigned long)
      API calls:   92.45%  15.6360s      3232  4.8379ms  201.53us  24.250ms  cudaMemcpyAsync
                    6.70%  1.13320s      1616  701.24us  6.5280us  23.786ms  cudaLaunchKernel
                    0.57%  97.205ms         1  97.205ms  97.205ms  97.205ms  cudaMalloc
                    0.21%  35.013ms         1  35.013ms  35.013ms  35.013ms  cudaDeviceReset
                    0.06%  9.6712ms      1616  5.9840us     401ns  716.81us  cudaStreamSynchronize
                    0.00%  467.63us         1  467.63us  467.63us  467.63us  cudaFree
                    0.00%  347.04us         1  347.04us  347.04us  347.04us  cuDeviceTotalMem
                    0.00%  224.06us       101  2.2180us     205ns  100.11us  cuDeviceGetAttribute
                    0.00%  166.07us        32  5.1890us  1.0530us  36.000us  cudaStreamDestroy
                    0.00%  121.61us        32  3.8000us     884ns  68.764us  cudaStreamCreate
                    0.00%  32.067us         1  32.067us  32.067us  32.067us  cuDeviceGetName
                    0.00%  4.7420us         1  4.7420us  4.7420us  4.7420us  cuDeviceGetPCIBusId
                    0.00%  3.6380us         1  3.6380us  3.6380us  3.6380us  cudaSetDevice
                    0.00%  1.6830us         3     561ns     259ns  1.0390us  cuDeviceGetCount
                    0.00%  1.0850us         2     542ns     206ns     879ns  cuDeviceGet
                    0.00%     455ns         1     455ns     455ns     455ns  cuDeviceGetUuid
If we increase the number of additions to $1000000$, which the CPU can hardly handle, the GPU still performs extremely well. The operation has also become math-bound, since the time spent executing the kernel is now $97.43\%$.
$ nvprof ./add 10000000 1000000 16 1
Array Size: 10000000
Number of Additions: 1000000
Number of Threads: 16
Number of Streams Per Thread: 1
==49064== NVPROF is profiling process 49064, command: ./add 10000000 1000000 16 1
Average Latency: 263.62 ms
==49064== Profiling application: ./add 10000000 1000000 16 1
==49064== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   97.43%  25.7634s      1616  15.943ms  14.440ms  26.953ms  add_n_kernel(int*, unsigned int, unsigned long)
                    1.29%  341.34ms      1616  211.23us  197.22us  465.84us  [CUDA memcpy HtoD]
                    1.28%  339.55ms      1616  210.12us  195.46us  467.99us  [CUDA memcpy DtoH]
      API calls:   94.19%  219.036s      3232  67.771ms  204.40us  331.30ms  cudaMemcpyAsync
                    5.75%  13.3678s      1616  8.2721ms  7.3610us  323.28ms  cudaLaunchKernel
                    0.04%  93.263ms         1  93.263ms  93.263ms  93.263ms  cudaMalloc
                    0.02%  35.650ms         1  35.650ms  35.650ms  35.650ms  cudaDeviceReset
                    0.00%  6.2746ms      1616  3.8820us     399ns  191.04us  cudaStreamSynchronize
                    0.00%  205.59us         1  205.59us  205.59us  205.59us  cuDeviceTotalMem
                    0.00%  121.86us       101  1.2060us     118ns  51.089us  cuDeviceGetAttribute
                    0.00%  114.43us        32  3.5750us     857ns  68.004us  cudaStreamCreate
                    0.00%  112.33us         1  112.33us  112.33us  112.33us  cudaFree
                    0.00%  64.705us        32  2.0220us  1.2410us  10.551us  cudaStreamDestroy
                    0.00%  20.017us         1  20.017us  20.017us  20.017us  cuDeviceGetName
                    0.00%  4.6830us         1  4.6830us  4.6830us  4.6830us  cuDeviceGetPCIBusId
                    0.00%  2.9030us         1  2.9030us  2.9030us  2.9030us  cudaSetDevice
                    0.00%     990ns         3     330ns     165ns     638ns  cuDeviceGetCount
                    0.00%     668ns         2     334ns     133ns     535ns  cuDeviceGet
                    0.00%     205ns         1     205ns     205ns     205ns  cuDeviceGetUuid
Multi-Thread Single-Stream VS Single-Thread Multi-Stream
Here we compare the performance of multi-thread single-stream CUDA against single-thread multi-stream CUDA. Concretely, we compared the addition performance for the following two cases: 16 threads each using 1 CUDA stream, and 1 thread using 16 CUDA streams.
From the performance latency we measured, we could see that for the math-bound operations there is no significant performance difference between the two cases, whereas for the memory-bound operations the single-thread multi-stream implementation is faster.
$ ./add 100000000 1 16 1
Array Size: 100000000
Number of Additions: 1
Number of Threads: 16
Number of Streams Per Thread: 1
Average Latency: 64.67 ms
$ ./add 100000000 1 1 16
Array Size: 100000000
Number of Additions: 1
Number of Threads: 1
Number of Streams Per Thread: 16
Average Latency: 70.82 ms
$ ./add 10000000 1 16 1
Array Size: 10000000
Number of Additions: 1
Number of Threads: 16
Number of Streams Per Thread: 1
Average Latency: 10.94 ms
$ ./add 10000000 1 1 16
Array Size: 10000000
Number of Additions: 1
Number of Threads: 1
Number of Streams Per Thread: 16
Average Latency: 9.08 ms
$ ./add 10000000 1000000 16 1
Array Size: 10000000
Number of Additions: 1000000
Number of Threads: 16
Number of Streams Per Thread: 1
Average Latency: 242.83 ms
$ ./add 10000000 1000000 1 16
Array Size: 10000000
Number of Additions: 1000000
Number of Threads: 1
Number of Streams Per Thread: 16
Average Latency: 250.37 ms
Summary
The measurements from the runs above are summarized in the following table.

Array Size   Additions  Threads  Streams/Thread  Average Latency
10000000     100        16       0 (CPU)         176.47 ms
10000000     100        16       1               10.93 ms
10000000     1          16       0 (CPU)         2.90 ms
10000000     1          16       1               12.20 ms
100000000    1          16       1               64.67 ms
100000000    1          1        16              70.82 ms
10000000     1          16       1               10.94 ms
10000000     1          1        16              9.08 ms
10000000     1000000    16       1               242.83 ms
10000000     1000000    1        16              250.37 ms
Conclusion
The latency difference between multi-thread single-stream CUDA and single-thread multi-stream CUDA is small.
Multi-Thread Single-Stream VS Single-Thread Multi-Stream CUDA
https://leimao.github.io/blog/Multi-Thread-Single-Stream-VS-Single-Thread-Multi-Stream-CUDA/
Use Shared Memory in Templated Kernels in CUDA Programming
Introduction
Using shared memory in CUDA could potentially increase the performance of your program. However, when I tried to use shared memory in templated CUDA kernels, I got weird errors from the compiler. It turns out that CUDA does not directly allow this kind of shared memory usage in template functions. After searching for a while, I found the motivation behind this restriction and some solutions to get around it.
Problem
Let’s say we want to use shared memory in the following templated kernel function.
template <typename T>
__global__ void kernel(T * d_out, const T * d_in, const unsigned int n)
{
    // Declare shared memory
    extern __shared__ T s_data[];

    // Do things in the kernel
    // ...
}
When you compile the program, you will definitely get the following error.
$ make
/home/leimao/Workspace/GPU_Algorithms_CUDA/reduce/reduce.cu(37): error: declaration is incompatible with previous "s_data"
Problem Causes
The root of the problem is actually simple. In order to use dynamically allocated shared memory, we have to use the keyword extern in our kernel function to declare a variable outside the current scope. This is no problem at all when the kernel function is not templated. However, if the kernel function is templated, it may be instantiated with different types, and the extern variable we declared would then have conflicting types. Therefore, shared memory with a template-dependent type cannot be used directly in the kernel function.
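To make the conflict concrete, here is a sketch (my illustration, not from the original post) of what two instantiations of such a kernel effectively declare in the same translation unit:

// kernel<float> effectively declares:
extern __shared__ float s_data[];

// kernel<double> effectively declares:
extern __shared__ double s_data[];

// Both names refer to the same dynamically allocated shared memory region,
// so the compiler sees one symbol s_data declared with two incompatible
// types, which is exactly the "declaration is incompatible" error above.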
Solutions
Use CUDPP Header
One solution is to use the SharedMemory struct defined in the open source CUDPP library. You could simply copy the sharedmem.h file to your source directory, and use the following code to declare shared memory. Then everything compiles!
#include "sharedmem.h"

template <typename T>
__global__ void kernel(T * d_out, const T * d_in, const unsigned int n)
{
    // Declare shared memory
    // The following dynamically allocated shared memory does not work in
    // templated kernels:
    // extern __shared__ T s_data[];
    // To get around this:
    SharedMemory<T> smem;
    T * s_data = smem.getPointer();

    // Do things in the kernel
    // ...
}
How does it work? Let us check the source code.
template <typename T>
struct SharedMemory
{
    //! @brief Return a pointer to the runtime-sized shared memory array.
    //! @returns Pointer to runtime-sized shared memory array
    __device__ T* getPointer()
    {
        extern __device__ void Error_UnsupportedType();
        // Ensure that we won't compile any un-specialized types
        Error_UnsupportedType();
        return (T*)0;
    }
    // TODO: Use operator overloading to make this class look like a regular array
};

// Following are the specializations for the following types.
// int, uint, char, uchar, short, ushort, long, ulong, bool, float, and double
// One could also specialize it for user-defined types.
template <>
struct SharedMemory <int>
{
    __device__ int* getPointer()
    {
        extern __shared__ int s_int[];
        return s_int;
    }
};

template <>
struct SharedMemory <unsigned int>
{
    __device__ unsigned int* getPointer()
    {
        extern __shared__ unsigned int s_uint[];
        return s_uint;
    }
};

// ...
We can easily see from the source code that the shared memory arrays for different types simply have different variable names, so there are no conflicts anymore. It is implemented using C++ template specialization so that the variable name can differ for each type.
Use Pointer Casting
The other simple solution is to use pointer typecasting.
template <typename T>
__global__ void kernel(T * d_out, const T * d_in, const unsigned int n)
{
    // Declare shared memory
    // The following dynamically allocated shared memory does not work in
    // templated kernels:
    // extern __shared__ T s_data[];
    // To get around this, declare the array with a concrete type and cast:
    extern __shared__ char smem[];
    T * s_data = reinterpret_cast<T *>(smem);

    // Do things in the kernel
    // ...
}
This solution essentially uses the same variable name for the shared memory and casts the pointer to the desired type later on.
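Either way, the kernel still has to be launched with the number of dynamic shared memory bytes passed as the third launch configuration parameter. A minimal usage sketch (the sizes and the device pointers d_out and d_in are my placeholders, assumed to be allocated elsewhere):

// Launch the templated kernel with one float of dynamic shared memory per
// thread; the third launch parameter is the shared memory size in bytes.
unsigned int const n{1U << 20};
dim3 const threads_per_block{256};
dim3 const blocks_per_grid{(n + threads_per_block.x - 1) / threads_per_block.x};
kernel<float><<<blocks_per_grid, threads_per_block,
                threads_per_block.x * sizeof(float)>>>(d_out, d_in, n);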
References
Use Shared Memory in Templated Kernels in CUDA Programming
https://leimao.github.io/blog/CUDA-Shared-Memory-Templated-Kernel/
CUDA Block and Grid
Introduction
I just started to learn CUDA and read the useful blog post "An Even Easier Introduction to CUDA" from NVIDIA. However, I found that the images of "Block" and "Grid" in the original blog post did not quite match the code in the post, so I think I need to express it in a better way.
Basic Code
This is the piece of CUDA code that I copied from the blog post.
#include <iostream>
#include <math.h>

// Kernel function to add the elements of two arrays
__global__
void add(int n, float *x, float *y)
{
  int index = blockIdx.x * blockDim.x + threadIdx.x;
  int stride = blockDim.x * gridDim.x;
  for (int i = index; i < n; i += stride)
    y[i] = x[i] + y[i];
}

int main(void)
{
  int N = 1<<20;
  float *x, *y;

  // Allocate Unified Memory – accessible from CPU or GPU
  cudaMallocManaged(&x, N*sizeof(float));
  cudaMallocManaged(&y, N*sizeof(float));

  // initialize x and y arrays on the host
  for (int i = 0; i < N; i++) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }

  // Run the kernel
  int blockSize = 256;
  int numBlocks = (N + blockSize - 1) / blockSize;
  // add<<<1, blockSize>>>(N, x, y);
  add<<<numBlocks, blockSize>>>(N, x, y);

  // Wait for GPU to finish before accessing on host
  cudaDeviceSynchronize();

  // Check for errors (all values should be 3.0f)
  float maxError = 0.0f;
  for (int i = 0; i < N; i++)
    maxError = fmax(maxError, fabs(y[i]-3.0f));
  std::cout << "Max error: " << maxError << std::endl;

  // Free memory
  cudaFree(x);
  cudaFree(y);

  return 0;
}
Block and Grid
I found that Figure 1 in the NVIDIA blog post did not quite reflect how the add function is executed in parallel, so I have made my own versions.
Block
A block consists of many threads. In our case, block_dim = block_size = num_threads = 256.
In the above figure, each small rectangle is a basic element in the array. When there is only one block, the parallel process can be imagined as block_dim pointers moving asynchronously. That is why the index moves with a stride of block_dim in the following add function when there is only one block.
__global__
void add(int n, float *x, float *y)
{
  int index = threadIdx.x;
  int stride = blockDim.x;
  for (int i = index; i < n; i += stride)
    y[i] = x[i] + y[i];
}
Grid
Similarly, a grid consists of many blocks. In our case, grid_dim = grid_size = 4096.
In the above figure, each small rectangle is a block in the grid. The parallel process can be imagined as block_dim * grid_dim pointers moving asynchronously. That is why the index moves with a stride of block_dim * grid_dim in the following add function.
__global__
void add(int n, float *x, float *y)
{
  int index = blockIdx.x * blockDim.x + threadIdx.x;
  int stride = blockDim.x * gridDim.x;
  for (int i = index; i < n; i += stride)
    y[i] = x[i] + y[i];
}
Final Remarks
I personally feel it is easier to understand the concepts of block and grid from the CUDA code using my figures instead of the one in the original blog post, although that figure is also correct if you think of a grid as wrapping a bunch of blocks, a block as wrapping a bunch of threads, and a thread as handling a bunch of basic array elements.
CUDA Block and Grid
https://leimao.github.io/blog/CUDA-Concept-Block-Grid/
May 01, 2024
Accelerating Llama3 FP8 Inference with Triton Kernels
by Adnan Hoque, Less Wright, Chih Chieh Yang
1.0 Summary
We present an optimized Triton FP8 GEMM (General Matrix-Matrix Multiply) kernel, TK-GEMM, which leverages SplitK parallelization. For small batch size inference, TK-GEMM delivers up to a 1.94x speedup over the base Triton matmul implementation, a 1.87x speedup over cuBLAS FP8, and a 1.71x speedup over cuBLAS FP16 for Llama3-70B inference problem sizes on NVIDIA H100 GPUs.
Figure 1. TK-GEMM Speedup over PyTorch (calling cuBLAS) for Llama3-70B Attention Layer Matrix Shapes (N=K=8192)
In this blog, we will cover how we designed an optimized kernel using Triton for FP8 inference and tuned it for Llama3-70B inference. We will cover FP8 (8-bit floating point), a new datatype supported by Hopper generation GPUs (SM90), the key SM90 features that Triton supports, and how we modified the parallelization to maximize memory throughput for memory-bound (inference) problem sizes.
We also dedicate a section on CUDA graphs, an important technology that will help materialize kernel level speedups and enable developers who want to use Triton kernels in production settings to get additional performance gain.
Repo and code available at: https://github.com/pytorch-labs/applied-ai
2.0 FP8 Datatype
The FP8 datatype was introduced jointly by Nvidia, Arm and Intel and serves as a successor to 16-bit floating point types. With half the bit count, it has the potential to provide significant throughput improvements over its predecessors for Transformer networks. The FP8 datatype consists of 2 formats:
E4M3 (4-bit exponent and 3-bit mantissa). Able to store +/- 448 and nan.
E5M2 (5-bit exponent and 2-bit mantissa). Able to store +/- 57,334, nan and inf.
Above: BF16, FP16, FP8 E4M3 and FP8 E5M2.
To show precision differences, the closest representation to 0.3952 is shown in each format.
Image Credit: Nvidia
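The dynamic ranges quoted above can be reproduced with a small back-of-the-envelope calculation (my own sketch, independent of the kernel work in this post). The largest finite value of a binary format with m mantissa bits and maximum unbiased exponent e_max is (2 - 2^-m) * 2^e_max; E4M3 reserves only the all-ones mantissa at its top exponent for NaN (so e_max = 8 and the largest usable mantissa is one step smaller), while E5M2 follows the usual IEEE-style convention (e_max = 15).

#include <cmath>
#include <iostream>

// Largest finite value of a format with m mantissa bits and maximum unbiased
// exponent e_max. If the top mantissa pattern at the top exponent is reserved
// for NaN (as in E4M3), the largest usable mantissa is one ulp smaller.
double max_finite(int m, int e_max, bool top_mantissa_reserved_for_nan)
{
    double const mantissa = top_mantissa_reserved_for_nan
                                ? 2.0 - 2.0 * std::pow(2.0, -m)  // 1.11...10
                                : 2.0 - std::pow(2.0, -m);       // 1.11...11
    return mantissa * std::pow(2.0, e_max);
}

int main()
{
    // E4M3: 3 mantissa bits, bias 7, no inf, e_max = 8.
    std::cout << "E4M3 max: " << max_finite(3, 8, true) << std::endl;   // 448
    // E5M2: 2 mantissa bits, bias 15, IEEE-like, e_max = 15.
    std::cout << "E5M2 max: " << max_finite(2, 15, false) << std::endl; // 57344
    return 0;
}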
We use E4M3 in inference and the forward pass of training due to its higher precision, and E5M2 in the training backward pass due to its higher dynamic range. Nvidia has designed their H100 FP8 Tensor Core to provide a peak of 3958 TFLOPS, 2x the FLOPS of the FP16 Tensor Core.
We designed our Triton kernel with these hardware innovations in mind and in the rest of the blog we will discuss methods to leverage and verify that these features are indeed being utilized by the Triton compiler.
3.0 Triton Hopper Support and FP8 Tensor Core Instruction
The Hopper GPU architecture has added the following new features that we can expect will accelerate FP8 GEMM.
Triton currently takes advantage of one of these features, the wgmma instruction, whereas PyTorch (calling cuBLAS) leverages all 3 which makes these speedups even more impressive. To fully take advantage of the Hopper FP8 Tensor Core, the wgmma is necessary even though the older mma.sync instruction is still supported.
The key difference between the mma and wgmma instructions is that instead of 1 CUDA warp being responsible for an output shard, an entire warp group, 4 CUDA warps, asynchronously contributes to an output shard.
To see what this instruction looks like in practice, and to verify that our Triton Kernel is indeed utilizing this feature we analyzed the PTX and SASS assembly using nsight compute.
Figure 2. PTX Assembly
This instruction is further lowered into a QGMMA instruction in SASS.
Figure 3. SASS Assembly
Both instructions tell us that we are multiplying two FP8 E4M3 input tensors and accumulating in F32, which confirms that the TK-GEMM Kernel is utilizing the FP8 Tensor Core and the lowering is being done correctly.
4.0 SplitK Work Decomposition
Figure 4. TK-GEMM vs Base Triton GEMM TFLOPS for M = 1-64
The base Triton FP8 GEMM implementation does not perform well for the small-M regime, i.e., for a matrix multiplication A (M x K) x B (K x N) where M is much smaller than N and K. To optimize for this type of matrix profile we applied a SplitK work decomposition instead of the Data Parallel decomposition found in the base Triton kernel. This greatly improved latencies for the small-M regime.
For background, SplitK launches additional thread blocks along the k dimension to calculate partial output sums. The partial results from each thread block are then summed using an atomic reduction. This allows for finer grained work decomposition with resultant performance improvements. More details on SplitK are available in our arxiv paper.
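The TK-GEMM kernel itself is written in Triton, but the decomposition is easy to illustrate in plain CUDA. The simplified FP32 sketch below (my own illustration, not the actual kernel) uses the z dimension of the grid to index the K-split: each block accumulates a partial dot product over its K-slice and the partial results are combined with an atomic reduction. C is assumed to be zero-initialized before the launch.

#include <cuda_runtime.h>

// Simplified SplitK GEMM sketch: C (MxN) += A (MxK) * B (KxN), row-major.
__global__ void splitk_gemm_naive(float const* A, float const* B, float* C,
                                  int M, int N, int K, int split_k)
{
    int const row{static_cast<int>(blockIdx.y * blockDim.y + threadIdx.y)};
    int const col{static_cast<int>(blockIdx.x * blockDim.x + threadIdx.x)};
    if (row >= M || col >= N)
    {
        return;
    }
    // blockIdx.z selects which K-slice this block is responsible for.
    int const k_chunk{(K + split_k - 1) / split_k};
    int const k_begin{static_cast<int>(blockIdx.z) * k_chunk};
    int const k_end{min(k_begin + k_chunk, K)};
    float partial{0.0f};
    for (int k{k_begin}; k < k_end; ++k)
    {
        partial += A[row * K + k] * B[k * N + col];
    }
    // Partial sums from the split_k slices are combined atomically.
    atomicAdd(&C[row * N + col], partial);
}

// Launch sketch: grid.z carries the K-split.
void launch_splitk_gemm(float const* A, float const* B, float* C, int M, int N,
                        int K, int split_k, cudaStream_t stream)
{
    dim3 const block{16, 16};
    dim3 const grid{static_cast<unsigned int>((N + 15) / 16),
                    static_cast<unsigned int>((M + 15) / 16),
                    static_cast<unsigned int>(split_k)};
    splitk_gemm_naive<<<grid, block, 0, stream>>>(A, B, C, M, N, K, split_k);
}

The real kernel operates on FP8 inputs with FP32 accumulation and tiles the work, but the grid.z split plus atomic reduction is the essence of the SplitK decomposition described above.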
After carefully tuning the other relevant hyperparameters for our kernel such as tile sizes, number of warps and the number of pipeline stages to Llama3-70B problem sizes we were able to produce up to 1.94x speedup over the Triton base implementation. For a more comprehensive introduction to hyperparameter tuning, see our blog.
Above: NCU profiler times for TK-GEMM under varying batch sizes, and compared with PyTorch (calling cuBLAS) FP8 and FP16.
Note that starting at M=32, the cuBLAS FP8 kernel starts to outperform TK-GEMM. For M >= 32, we suspect that hyperparameters we found are not optimal, and thus another set of experiments is required to determine the optimal parameters for the mid-sized M regime.
5.0 CUDA Graphs to Enable End-to-End Speedup
To realize these speedups in an end-to-end setting, we must take into account both the kernel execution time (GPU duration) and the wall time (CPU + GPU duration). Triton kernels that are handwritten (as opposed to generated by torch.compile) are known to suffer from high kernel launch latencies. If we use the torch profiler to trace the TK-GEMM kernel, we can see the call stack on the CPU side and pinpoint exactly what is causing the slowdown.
Figure 5. CPU Launch Overhead: 2.413ms
From above, we see that the majority of the wall time of our optimized kernel is dominated by JIT (Just-in-Time) compilation overhead. To combat this we can use CUDA graphs.
Figure 6. CUDA Graphs Visualization
Image Credit: PyTorch
The key idea is that instead of multiple kernel launches, we can create and instantiate a graph (a one-time cost) and then submit that instance of the graph for execution. To illustrate this point, we simulate a Llama3-70B attention layer. As shown in the figure below, generated using Nsight Systems, the time between each GEMM is 165us compared to the 12us spent on the actual matmul, due to the CPU kernel launch overhead. This means that 92% of the time in an attention layer the GPU is idle and not doing any work.
Figure 7. Simulated Llama3-70B Attention Layer with TK-GEMM
To show the impact of CUDA graphs, we then created a graph of the TK-GEMM kernel in the toy Attention layer and replayed the graph. Below, we can see that the gaps between kernel executions are reduced to 6.65us.
Figure 8. Simulated Llama3-70B Attention Layer with TK-GEMM and CUDA Graphs
In practice, this optimization would result in a 6.4x speedup of a single attention layer in Llama3-70B, over naively using TK-GEMM in a model without CUDA graphs.
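At the CUDA runtime level, the capture-then-replay workflow behind this optimization looks roughly like the sketch below (CUDA 11.x-style API; in PyTorch the same idea is exposed through torch.cuda.CUDAGraph). The kernel and its arguments are placeholders, and the stream must be a non-default stream created by the application, since the legacy default stream cannot be captured.

#include <cuda_runtime.h>

__global__ void dummy_kernel(float* x) { x[0] += 1.0f; }

void capture_and_replay(float* d_x, cudaStream_t stream, int num_replays)
{
    cudaGraph_t graph{};
    cudaGraphExec_t graph_exec{};

    // 1. Capture: kernel launches are recorded into a graph instead of being
    //    executed immediately.
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    for (int i{0}; i < 8; ++i)
    {
        dummy_kernel<<<1, 1, 0, stream>>>(d_x);
    }
    cudaStreamEndCapture(stream, &graph);

    // 2. Instantiate once (the one-time cost mentioned above).
    //    Note: this is the CUDA 11.x signature; CUDA 12 simplified it.
    cudaGraphInstantiate(&graph_exec, graph, nullptr, nullptr, 0);

    // 3. Replay: a single launch per replay submits all captured work,
    //    avoiding the per-kernel CPU launch overhead.
    for (int i{0}; i < num_replays; ++i)
    {
        cudaGraphLaunch(graph_exec, stream);
    }
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(graph_exec);
    cudaGraphDestroy(graph);
}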
6.0 Potential Future Optimization Paths
Figure 9. TMA Hardware Unit
Image Credit: Nvidia
The Nvidia H100 features a TMA hardware unit. The dedicated TMA unit frees up registers and threads to do other work, as address generation is completely handled by the TMA. For memory bound problem sizes, this can provide even further gain when Triton enables support for this feature.
Figure 10. Tensor Core Utilization (Arrows Indicate Degrees of Freedom)
To identify how well we are utilizing the Tensor Core, we can analyze the roofline chart. Notice that we are in the memory-bound region, as expected for small M. To improve kernel latency we can either increase the arithmetic intensity, which with a fixed problem size can only be achieved by exploiting data locality and other loop optimizations, or increase the memory throughput. The latter requires a parallel algorithm that is better specialized for the FP8 datatype and for the problem size characteristics we expect to see in FP8 inference.
Figure 11. DRAM Throughput Circled, 1.65TB/s vs Peak 3.35TB/s on H100 (M=16, N=8192, K=8192)
Lastly, we can see that we are only achieving around 50% of peak DRAM throughput on the NVIDIA H100. High performance GEMM kernels typically achieve around 70-80% of peak throughput. This means that there is still a lot of room to improve and the techniques mentioned above (loop unrolling, optimized parallelization) are needed for additional gain.
7.0 Future Work
For future research, we would like to explore CUTLASS 3.x and CuTe to leverage more direct control over Hopper features especially in terms of obtaining direct TMA control and exploring pingpong architectures, which have shown promising results for FP8 GEMM.
September 04, 2024
CUDA-Free Inference for LLMs
by Adnan Hoque, Less Wright, Raghu Ganti and Mudhakar Srivatsa
In this blog, we discuss the methods we used to achieve FP16 inference with popular LLM models such as Meta’s Llama3-8B and IBM’s Granite-8B Code, where 100% of the computation is performed using OpenAI’s Triton Language.
For single token generation times using our Triton kernel based models, we were able to approach 0.76-0.78x performance relative to the CUDA kernel dominant workflows for both Llama and Granite on Nvidia H100 GPUs, and 0.62-0.82x on Nvidia A100 GPUs.
Why explore using 100% Triton? Triton provides a path for enabling LLMs to run on different types of GPUs - NVIDIA, AMD, and in the future Intel and other GPU based accelerators. It also provides a higher layer of abstraction in Python for programming GPUs and has allowed us to write performant kernels faster than authoring them using vendor specific APIs. In the rest of this blog, we will share how we achieve CUDA-free compute, micro-benchmark individual kernels for comparison, and discuss how we can further improve future Triton kernels to close the gaps.
Figure 1. Inference throughput benchmarks with Triton and CUDA variants of Llama3-8B and Granite-8B, on NVIDIA H100 and A100
Settings: batch size = 2, input sequence length = 512, output sequence length = 256
2.0 Composition of a Transformer Block
We start with a breakdown of the computations that happen in Transformer-based models. The figure below shows the “kernels” of a typical Transformer block.
Figure 2. Transformer Block by core kernels
The core operations for a Llama3 architecture are summarized in this list:
Each of these operations is computed on the GPU through the execution of one (or multiple) kernels. While the specifics of each of these kernels can vary across different transformer models, the core operations remain the same. For example, IBM’s Granite 8B Code model uses bias in the MLP layer, different from Llama3. Such changes do require modifications to the kernels. A typical model is a stack of these transformer blocks wired together with embedding layers.
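As a concrete illustration of that kind of variation, below is a sketch of the SwiGLU feed-forward block used in Llama-style models; a Granite-style variant simply flips the bias flag on the linear layers. The class and attribute names follow common open-source implementations and are not taken from the models' actual code.
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    def __init__(self, dim, hidden_dim, bias=False):  # Granite-style variants pass bias=True
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden_dim, bias=bias)
        self.up_proj = nn.Linear(dim, hidden_dim, bias=bias)
        self.down_proj = nn.Linear(hidden_dim, dim, bias=bias)

    def forward(self, x):
        # SiLU(gate(x)) * up(x), projected back down to the model dimension
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))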
3.0 Model Inference
Typical model architecture code is shared as a python model.py file that is launched by PyTorch. In the default PyTorch eager execution mode, these kernels are all executed with CUDA. To achieve 100% Triton for end-to-end Llama3-8B and Granite-8B inference we need to write and integrate handwritten Triton kernels as well as leverage torch.compile (to generate Triton ops). First, we replace smaller ops with compiler-generated Triton kernels, and second, we replace more expensive and complex computations (e.g. matrix multiplication and flash attention) with handwritten Triton kernels.
Torch.compile generates Triton kernels automatically for RMSNorm, RoPE, SiLU and Element Wise Multiplication. Using tools like Nsight Systems we can observe these generated kernels; they appear as tiny dark green kernels in-between the matrix multiplications and attention.
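As a rough illustration (not the blog's actual model code), compiling a small SiLU-and-multiply function is enough to see torch.compile emit a fused Triton kernel; setting TORCH_LOGS="output_code" when running prints the generated kernel source.
import torch
import torch.nn.functional as F

def silu_mul(x, y):
    return F.silu(x) * y

compiled_silu_mul = torch.compile(silu_mul)

x = torch.randn(1024, 4096, device="cuda", dtype=torch.float16)
y = torch.randn_like(x)
out = compiled_silu_mul(x, y)  # first call triggers compilation to a Triton kernel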
Figure 3. Trace of Llama3-8B with torch.compile, showing CUDA kernels being used for matrix multiplications and flash attention
For the above trace, we note that the two major ops that make up 80% of the E2E latency in a Llama3-8B style model are the matrix multiplication and attention kernels, and both remain CUDA kernels. Thus, to close the remaining gap, we replace both the matmul and attention kernels with handwritten Triton kernels.
4.0 Triton SplitK GEMM Kernel
For the matrix multiplications in the linear layers, we wrote a custom FP16 Triton GEMM (General Matrix-Matrix Multiply) kernel that leverages a SplitK work decomposition. We have previously discussed this parallelization in other blogs as a way to accelerate the decoding portion of LLM inference.
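To make the idea concrete, here is a minimal sketch of a SplitK decomposition in Triton: the K dimension is split across a third grid axis, and each program atomically accumulates its partial product into C. This is an illustrative kernel only (no masking, C assumed FP32 and zero-initialized, dimensions assumed divisible by the block sizes), not the production kernel discussed in this post.
import triton
import triton.language as tl

@triton.jit
def splitk_gemm_kernel(a_ptr, b_ptr, c_ptr, M, N, K,
                       stride_am, stride_ak, stride_bk, stride_bn, stride_cm, stride_cn,
                       BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                       BLOCK_K: tl.constexpr, SPLIT_K: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    pid_k = tl.program_id(2)  # SplitK: a third grid axis over slices of K
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = pid_k * BLOCK_K + tl.arange(0, BLOCK_K)
    a_ptrs = a_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
    b_ptrs = b_ptr + offs_k[:, None] * stride_bk + offs_n[None, :] * stride_bn
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    # each program walks its own slice of K, striding over the other SPLIT_K slices
    for _ in range(0, tl.cdiv(K, BLOCK_K * SPLIT_K)):
        a = tl.load(a_ptrs)
        b = tl.load(b_ptrs)
        acc += tl.dot(a, b)
        a_ptrs += BLOCK_K * SPLIT_K * stride_ak
        b_ptrs += BLOCK_K * SPLIT_K * stride_bk
    c_ptrs = c_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    tl.atomic_add(c_ptrs, acc)  # partial results from the SPLIT_K programs are reduced in global memory
A launch would use a three-dimensional grid, e.g. grid = (triton.cdiv(M, BLOCK_M), triton.cdiv(N, BLOCK_N), SPLIT_K), with C zero-initialized in FP32 before the call.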
5.0 GEMM Kernel Tuning
To achieve optimal performance we used the exhaustive search approach to tune our SplitK GEMM kernel. Granite-8B and Llama3-8B have linear layers with the following shapes:
Figure 4. Granite-8B and Llama3-8B Linear Layer Weight Matrix Shapes
Each of these linear layers has a different weight matrix shape. Thus, for optimal performance the Triton kernel must be tuned for each of these shape profiles. After tuning for each linear layer we were able to achieve a 1.20x E2E speedup on Llama3-8B and Granite-8B over the untuned Triton kernel.
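One way to express this kind of per-shape search in Triton is the built-in autotuner, sketched below on a deliberately trivial kernel. This illustrates the mechanism only and is not the tuning harness used for the GEMM results above.
import torch
import triton
import triton.language as tl

@triton.autotune(
    configs=[
        triton.Config({'BLOCK': 256}, num_warps=2),
        triton.Config({'BLOCK': 512}, num_warps=4),
        triton.Config({'BLOCK': 1024}, num_warps=8),
    ],
    key=['n_elements'],  # the best config is cached per problem size
)
@triton.jit
def scale_kernel(x_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x * 2.0, mask=mask)

x = torch.randn(1 << 20, device='cuda')
out = torch.empty_like(x)
grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK']),)
scale_kernel[grid](x, out, x.numel())  # the first call for a given shape benchmarks all configs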
6.0 Flash Attention Kernel
We evaluated a suite of existing Triton flash attention kernels with different configurations, namely:
We evaluated the text generation quality of each of these kernels, first in eager mode and then (if we were able to torch.compile the kernel with standard methods) in compile mode. For kernels 2-5, we noted the following:
Figure 5. Table of combinations we tried with different Flash Attention Kernels
The above table summarizes what we observed out of the box. With some effort we expect that kernels 2-5 can be modified to meet the above criteria. However, this also shows that having a kernel that works for benchmarking is often only the start of having it usable as an end-to-end production kernel.
We chose to use the AMD flash attention kernel in our subsequent tests as it can be compiled via torch.compile and produces legible output in both eager and compiled mode.
To satisfy torch.compile compatibility with the AMD flash attention kernel, we had to define it as a torch custom operator. This process is explained in detail here. The tutorial link discusses how to wrap a simple image crop operation. However, we note that wrapping a more complex flash attention kernel follows a similar process. The two step approach is as follows:
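A minimal sketch of this two-step wrapping is shown below, assuming the torch.library.custom_op API available in recent PyTorch releases; the operator name and the triton_flash_forward launcher are hypothetical stand-ins, not the code used in our runs.
import torch
from torch.library import custom_op

# Step 1: wrap the Triton kernel launcher as a PyTorch custom operator.
# triton_flash_forward is a hypothetical function that launches the Triton flash attention kernel.
@custom_op("mylib::triton_flash_attention", mutates_args=())
def triton_flash_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    return triton_flash_forward(q, k, v)

# Step 2: register a FakeTensor ("meta") implementation so torch.compile can trace output
# shapes without running the kernel.
@triton_flash_attention.register_fake
def _(q, k, v):
    return torch.empty_like(q)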
After defining the Triton flash kernel as a custom op, we were able to successfully compile it for our E2E runs.
Figure 6. Trace of Llama3-8B with torch.compile, after swapping in Triton matmul and Triton flash attention kernels
From Figure 6, we note that after integrating both the SplitK matrix multiplication kernel and the torch op wrapped flash attention kernel, and then running torch.compile, we are able to achieve a forward pass that uses 100% Triton computation kernels.
7.0 End-to-End Benchmarks
We performed end-to-end measurements on NVIDIA H100s and A100s (single GPU) with Granite-8B and Llama3-8B models. We performed our benchmarks with two different configurations.
The Triton kernel configuration uses:
The CUDA Kernel configuration uses:
We found the following throughput and inter-token latencies for both eager and torch compiled modes, with typical inference settings:
Figure 7. Granite-8B and Llama3-8B Single Token Generation Latency on H100 and A100,
(batch size = 2, input sequence length = 512, output sequence length = 256)
To summarize, the Triton models can get up to 78% of the performance of the CUDA models on the H100 and up to 82% on the A100.
The performance gap can be explained by the kernel latencies we observe for matmul and flash attention, which are discussed in the next section.
8.0 Microbenchmarks
Figure 8. Triton and CUDA Kernel Latency Comparison (Llama3-8B on NVIDIA H100)
Input was an arbitrary prompt (batch size = 1, prompt length = 44 tokens); latencies are for the decoding phase
From the above, we note the following:
Triton matmul kernels are 1.2-1.4x slower than CUDA
AMD's Triton Flash Attention kernel is 1.6x slower than CUDA SDPA
These results highlight the need to further improve the performance of kernels that are core primitives like GEMM and Flash Attention. We leave this as future research, as recent works (e.g. FlashAttention-3, FlexAttention) provide ways to leverage the underlying hardware better as well as Triton pathways that we hope to be able to build on to produce greater speedups. To illustrate this, we compared FlexAttention with SDPA and AMD’s Triton Flash kernel.
We are working to verify E2E performance with FlexAttention. For now, initial microbenchmarks with Flex show promise for longer context lengths and decoding problem shapes, where the query vector is small:
Figure 9. FlexAttention Kernel Benchmarks on NVIDIA H100 SXM5 80GB
(batch=1, num_heads=32, seq_len=seq_len, head_dim=128)
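For readers who want to experiment, a minimal FlexAttention call might look like the following; the API shown is torch.nn.attention.flex_attention from recent PyTorch releases, and the shapes loosely mirror the benchmark settings above rather than reproducing them exactly.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

B, H, S, D = 1, 32, 4096, 128
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)

block_mask = create_block_mask(causal, B, H, S, S, device="cuda")
out = flex_attention(q, k, v, block_mask=block_mask)  # typically wrapped in torch.compile for speed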
9.0 Future Work
For future work we plan to explore ways to further optimize our matmuls that leverage the hardware better, such as the TMA-based approach for H100 we published in an earlier blog, as well as different work decompositions (persistent kernel techniques such as StreamK, etc.) to achieve greater speedups for our Triton-based approach. For flash attention, we plan to explore FlexAttention and FlashAttention-3, as the techniques used in these kernels can be leveraged to help further close the gap between Triton and CUDA.
We also note that our prior work has shown promising results for FP8 Triton GEMM kernel performance versus cuBLAS FP8 GEMM, thus in a future post we will explore E2E FP8 LLM inference.
April 19, 2023
Accelerating Large Language Models with Accelerated Transformers
by
Lucas Pasqualin, Driss Guessous, Christian Puhrsch, Bertrand Maher, Michael Gschwind
TL;DR. We show how to use Accelerated PyTorch 2.0 Transformers and the newly introduced torch.compile() method to accelerate Large Language Models on the example of nanoGPT, a compact open-source implementation of the GPT model from Andrej Karpathy. Using the new scaled dot product attention operator introduced with Accelerated PT2 Transformers, we select the flash_attention custom kernel and achieve faster training time per batch (measured with Nvidia A100 GPUs), going from a ~143ms/batch baseline to ~113 ms/batch. In addition, the enhanced implementation using the SDPA operator offers better numerical stability. Finally, further optimizations are achieved using padded inputs, which when combined with flash attention lead to ~87ms/batch.
Recent times have seen exponential adoption of large language models (LLMs) and Generative AI in everyday life. Tightly coupled with these ever-growing models is the ever-growing training cost - in terms of both time and hardware utilization. The PyTorch team has tackled these challenges head on with Accelerated PyTorch 2 Transformers (previously known as “Better Transformer”) and JIT Compilation in PyTorch 2.0.
In this blog post, we explore training optimizations gained by utilizing custom kernel implementations of SDPA - also known as scaled dot product attention - a critical layer in transformer models. The custom kernel for SDPA replaces several discrete sequential operations with one globally optimized kernel which avoids allocating a large amount of intermediate CUDA memory. This approach offers a number of advantages, including but not limited to: higher performance computation of SDPA by reducing memory bandwidth bottleneck, reduced memory footprint to support larger batch sizes, and finally added numerical stability by prescaling input tensors. These optimizations are demonstrated on nanoGPT, an open-source implementation of GPT from Andrej Karpathy.
Background
Scaled dot product attention is the fundamental building block of multihead attention, as introduced in “Attention is All You Need”, and has a wide range of applications in LLM and Generative AI models.
Figure 1: The Transformer model architecture based on “Attention is All You Need”. With the new PyTorch SDPA operator, Multi-Head Attention is efficiently implemented by a linear layer for the in-projection, the SDPA operator, and a linear layer for the out-projection.
With the new scaled_dot_product_attention operator, multihead attention can be implemented in just 3 steps: in projection with a linear layer, SDPA, and out projection with a linear layer.
# In Projection
# variable descriptions:
# q,k,v = Query, Key, Value tensors
# bsz = batch size
# num_heads = Number of heads for Multihead Attention
# tgt_len = Target length
# src_len = Source Length
# head_dim: Head Dimension
q, k, v = _in_projection(query, key, value, q_proj_weight, k_proj_weight, v_proj_weight, b_q, b_k, b_v)
q = q.view(bsz, num_heads, tgt_len, head_dim)
k = k.view(bsz, num_heads, src_len, head_dim)
v = v.view(bsz, num_heads, src_len, head_dim)
# Scaled Dot Product Attention
attn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)
# Out Projection
attn_output = attn_output.permute(2, 0, 1, 3).contiguous().view(bsz * tgt_len, embed_dim)
attn_output = linear(attn_output, out_proj_weight, out_proj_bias)
attn_output = attn_output.view(tgt_len, bsz, attn_output.size(1))
PyTorch 2.0 supports multiple different kernels optimized for specific use cases, with specific requirements. A kernel picker picks the best kernel for a particular combination of input parameters. If no optimized “custom kernel” for a particular combination of input parameters can be identified, the kernel picker selects a general kernel that can handle all input combinations.
While future releases may extend this set of operators, PyTorch 2.0 launches with 3 implementations for the SDPA operator:
Note that both optimized kernels (the second and third listed above) support a key padding mask and limit the supported attention mask to causal attention. Accelerated PyTorch 2.0 Transformers today only support the causal mask when it is specified using the is_causal boolean. When a mask is specified, the general-purpose kernel will be selected because it is too expensive to analyze the contents of a provided mask to determine if it is the causal mask. Additional explanations on the constraints for each kernel can be found in the Accelerated PT2 Transformer blog.
Enabling Accelerated Transformers with nanoGPT
Because the SDPA operator is a critical component of the GPT model, we identified the open source nanoGPT model as an excellent candidate for both demonstrating the ease of implementation and the benefits of PyTorch 2.0’s Accelerated Transformers. The following demonstrates the exact process by which Accelerated Transformers was enabled on nanoGPT.
This process largely revolves around replacing the existing SDPA implementation with the newly added F.scaled_dot_product_attention operator from functional.py. This process can be easily adapted to enable the operator in many other LLMs. Alternatively, users can instead choose to call F.multi_head_attention_forward() or utilize the nn.MultiHeadAttention module directly where applicable. The following code snippets are adapted from Karpathy’s nanoGPT repository.
Step 1: Identify the existing SDPA implementation
In the case of nanoGPT, SDPA is implemented in the model’s CausalSelfAttention class. The original implementation at time of writing is adapted below for this post.
Step 2: Replace with Torch’s scaled_dot_product_attention
At this point we can note the following:
Swapping out the SDPA implementation for torch’s scaled_dot_product_attention and removing the now redundant code yields the following implementation.
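The core of that replacement is a single call to the fused operator; the snippet below is a sketch adapted to nanoGPT's attention module, where self.dropout is assumed to hold the configured dropout probability (an assumption about the surrounding class, not a verbatim copy of the final code).
import torch.nn.functional as F

# inside CausalSelfAttention.forward, after q, k, v have been projected and reshaped:
y = F.scaled_dot_product_attention(
    q, k, v,
    attn_mask=None,
    dropout_p=self.dropout if self.training else 0.0,
    is_causal=True,  # replaces the explicit causal mask, masked_fill and softmax
)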
Alternatively, the original mask can be passed into the attn_mask field; however, due to the mentioned kernel constraints, that would limit the implementation to supporting only the generic sdpa_math kernel.
Step 3 (Bonus): Faster matmuls with padding
On top of the performance improvements from SDPA, our analysis yielded a nice ancillary win. In Andrej’s words “The most dramatic optimization to nanoGPT so far (~25% speedup) is to simply increase the vocab size from 50257 to 50304 (nearest multiple of 64).”
The vocab size determines the dimensions of matmuls in the output layer of GPT, and these are so large that they were taking a majority of the time for the entire training loop! We discovered that they were achieving performance significantly below the peak throughput achievable on the A100 GPU, and guessed from NVIDIA’s matmul documentation that 64-element alignment would yield better results. Indeed, padding these matmuls achieves nearly a 3x speedup! The underlying cause is that unaligned memory accesses significantly reduce efficiency. A deeper analysis can be found in this Twitter thread.
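The padding itself is a one-liner; the snippet below illustrates rounding the vocabulary size up to the next multiple of 64 (the numbers match the nanoGPT example quoted above).
vocab_size = 50257
padded_vocab_size = ((vocab_size + 63) // 64) * 64  # 50304, the nearest multiple of 64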
With this optimization we were able to further reduce training time from ~113 ms (using flash attention) to ~87 ms per batch.
Results
The figure below demonstrates the performance gained using PyTorch custom kernels. Here are the exact figures:
All code was run on an 8 x NVIDIA Corporation A100 server with 80 GB HBM [A100 SXM4 80GB], and for the purpose of this experiment dropout was set to 0.
Figure 2: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for nanoGPT shown here.
Enhancing Numerical Model Stability
In addition to being faster, PyTorch’s implementation offers increased numerical stability by avoiding loss of precision in many execution scenarios. There is a great explanation here, but essentially the PyTorch implementation scales the Query and Key matrices before multiplication, which is said to be more stable and avoid loss of precision. Because of the merged custom kernel architecture of SDPA, this scaling does not introduce additional overhead in the computation of the attention result. In comparison, an implementation from the individual computational components would require separate pre-scaling at additional cost. For an additional explanation, see Appendix A.
Improved Memory Consumption
Yet another large advantage of using the torch SDPA kernels is the reduced memory footprint, which allows for the utilization of larger batch sizes. The following chart compares the best validation loss after one hour of training for both flash attention and the baseline implementations of causal attention. As can be seen, the maximum batch size achieved with the baseline causal attention implementation (on an 8 x NVIDIA Corporation A100 server with 80 GB HBM) was 24, significantly less than the maximum achieved with flash attention, which was 39.
Figure 3: Using Flash Attention enables the usage of larger batch sizes, allowing users to achieve lower validation loss after one hour of training (smaller is better).
Conclusion
Accelerated PyTorch 2 Transformers were designed to make the training and production deployment of state-of-the-art transformer models affordable and integrated with PyTorch 2.0 model JIT compilation. The newly introduced PyTorch SDPA operator provides improved performance for training Transformer models and is particularly valuable for the expensive Large Language Model training. In this post we demonstrate a number of optimizations on the exemplary nanoGPT model including:
Appendix A: Analyzing Attention Numeric Stability
In this section we provide a more in depth explanation of the previously mentioned enhanced numerical stability which is gained by prescaling SDPA’s input vectors. The following is a simplified version of nanoGPT’s mathematical implementation of SDPA. The important thing to note here is that the query undergoes matrix multiplication without being scaled.
# nanoGPT implementation of SDPA
# notice q (our query vector) is not scaled !
att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
att = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
# Dropout is set to 0, so we can safely ignore this line in the implementation
# att = self.attn_dropout(att)
y_nanogpt = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
The following is the equivalent mathematical implementation in torch’s scaled_dot_product_attention.
# PyTorch implementation of SDPA
embed_size = q.size(-1)
scaling_factor = math.sqrt(math.sqrt(embed_size))
q = q / scaling_factor # notice q _is_ scaled here !
# same as above, but with scaling factor
att = q @ (k.transpose(-2, -1) / scaling_factor)
att = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))
att = F.softmax(att, dim=-1)
# Dropout is set to 0, so we can safely ignore this line in the implementation
# att = self.attn_dropout(att)
y_scale_before = att @ v
Mathematically both approaches should be equivalent, however our experimentation shows that in practice we receive different results from each approach.
Using the approach above, we verified y_scale_before matches the expected output from using the scaled_dot_product_attention method while y_nanogpt does not.
The torch.allclose method was used to test equivalence. Specifically, we showed that:
y_sdpa = torch.nn.functional._scaled_dot_product_attention(
q,
k,
v,
attn_mask=self.bias[:,:,:T,:T] != 0,
dropout_p=0.0,
need_attn_weights=False,
is_causal=False,
)
torch.allclose(y_sdpa, y_nanogpt) # False, indicating fp issues
torch.allclose(y_sdpa, y_scale_before) # True, as expected
Appendix B: Reproducing Experiment Results
Researchers seeking to reproduce these results should start with the following commit from Andrej’s nanoGPT repository - b3c17c6c6a363357623f223aaa4a8b1e89d0a465. This commit was used as the baseline when measuring the per batch speed improvements. For results which include padded vocabulary optimizations (which yielded the most significant improvements to batch speed), use the following commit - 77e7e04c2657846ddf30c1ca2dd9f7cbb93ddeab. From either checkout, selecting kernels for experimentation is made trivial with the use of the torch.backends API.
The desired kernel can be selected via a context manager:
with torch.backends.cuda.sdp_kernel(
    enable_math = False,
    enable_flash = False,
    enable_mem_efficient = True
):
    train(model)
July 22, 2024
Deep Dive on the Hopper TMA Unit for FP8 GEMMs
by
Adnan Hoque, Less Wright, Chih-Chieh Yang
Abstract
The Hopper (H100) GPU architecture, billed as the “first truly asynchronous GPU”, includes a new, fully asynchronous hardware copy engine for bulk data movement between global and shared memory called Tensor Memory Accelerator (TMA). While CUTLASS has built-in support for TMA via its asynchronous pipeline paradigm, Triton exposes TMA support via an experimental API.
In this post, we provide a deeper dive into the details of how TMA works, for developers to understand the new async copy engine. We also show the importance of leveraging TMA for H100 kernels by building a TMA enabled FP8 GEMM kernel in Triton, which delivers from 1.4-2.2x performance gains over cuBLAS FP16 for small-to-medium problem sizes. Finally, we showcase key implementation differences between Triton and CUTLASS that may account for reports of performance regressions with TMA in Triton. We open source our implementation for reproducibility and review at https://github.com/pytorch-labs/applied-ai/tree/main/kernels
Figure 1. The throughput in TFLOPs of various Triton and cuBLAS FP8 and FP16 kernels, for varying M with N=4096, K=4096. The red line is the Triton TMA, which showcases the advantages of leveraging TMA.
TMA Background
TMA is an H100 hardware addition that allows applications to asynchronously and bi-directionally transfer 1D-5D tensors between GPU global and shared memory. In addition, TMA can also transfer the same data to not just the calling SM’s shared memory, but to other SM’s shared memory if they are part of the same Thread Block Cluster. This is termed ‘multicast’.
TMA is very lightweight, as only a single thread is needed to kick off a TMA transfer. By moving data directly from GMEM (global) to SMEM (shared), TMA avoids the earlier GPU requirement of using registers to move data between different memory spaces.
Figure 2. A100-style data movement vs H100 with TMA. TMA hardware eliminates the need for a large amount of threads and registers participating in bulk data transfers. (Image credit Nvidia)
A single thread can issue large data movement instructions, allowing the majority of a given thread block to continue working on other instructions while data is in-flight. Combined with asynchronous pipelining, this allows memory transfers to be easily hidden and ensures that the majority of any given thread block cluster can focus on computational tasks.
This lightweight invocation for data movement enables the creation of warp-group specialized kernels, where warp-groups take on different roles, namely producers and consumers. Producers elect a leader thread that fires off TMA requests, which are then asynchronously coordinated with the consumer (MMA) warp-groups via an arrival barrier. Consumers then process the data using warp-group MMA, and signal back to the producers when they have finished reading from the SMEM buffer and the cycle repeats.
Further, within threadblock clusters, producers can lower their max register requirements since they are only issuing TMA calls, and effectively transfer additional registers to MMA consumers, which helps to alleviate register pressure for consumers.
In addition, TMA handles the address computation for the shared memory destination where the data requested should be placed. This is why calling threads (producers) can be so lightweight.
To ensure maximum read access speed, TMA can lay out the arriving data based on swizzling instructions, to ensure the arriving data can be read as fast as possible by consumers, as the swizzling pattern helps avoid shared memory bank conflicts.
Finally for TMA instructions that are outgoing, or moving data from SMEM to GMEM, TMA can also include reduction operations (add/min/max) and bitwise (and/or) operations.
TMA usage in Triton
Pre-Hopper Load:
offs_am = pid_m*block_m + tl.arange(0, block_m)
offs_bn = pid_n*block_n + tl.arange(0, block_n)
offs_k = tl.arange(0, block_k)
a_ptrs = a_ptr + (offs_am[:, None]*stride_am + offs_k[None, :]*stride_ak)
b_ptrs = b_ptr + (offs_k[:, None]*stride_bk + offs_bn[None, :]*stride_bn)
a = tl.load(a_ptrs)
b = tl.load(b_ptrs)
Figure 3. Traditional style bulk load from global to shared memory in Triton
In the above Triton example showing a pre-Hopper load, we see how the data for tensors a and b are loaded by each thread block computing global offsets (a_ptrs, b_ptrs) from their relevant program_id (pid_m, pid_n, k) and then making a request to move blocks of memory into shared memory for a and b.
Now let’s examine how to perform a load using TMA in Triton.
The TMA instruction requires a special data structure called a tensor map, in contrast to the above where we directly pass pointers to global memory. To build the tensor map, we first create a TMA descriptor on the CPU. The descriptor handles the creation of the tensor map by using the cuTensorMapEncode API. The tensor map holds metadata such as the global and shared memory layout of the tensor and serves as a compressed representation of the structure of the multi-dimensional tensor stored in global memory.
Figure 4. TMA address generation via a copy descriptor (Image credit: Nvidia)
The TMA descriptor holds the tensor’s key properties:
The TMA descriptor is created on the host before the kernel, and then moved to device by passing the descriptor to a torch tensor. Thus, in Triton, the GEMM kernel receives a global pointer to the tensor map.
Triton Host Code
desc_a = np.empty(TMA_SIZE, dtype=np.int8)
desc_b = np.empty(TMA_SIZE, dtype=np.int8)
desc_c = np.empty(TMA_SIZE, dtype=np.int8)
triton.runtime.driver.active.utils.fill_2d_tma_descriptor(a.data_ptr(), m, k, block_m, block_k, a.element_size(), desc_a)
triton.runtime.driver.active.utils.fill_2d_tma_descriptor(b.data_ptr(), n, k, block_n, block_k, b.element_size(), desc_b)
triton.runtime.driver.active.utils.fill_2d_tma_descriptor(c.data_ptr(), m, n, block_m, block_n, c.element_size(), desc_c)
desc_a = torch.tensor(desc_a, device='cuda')
desc_b = torch.tensor(desc_b, device='cuda')
desc_c = torch.tensor(desc_c, device='cuda')
This is the code that is used to set up the descriptors in the kernel invoke function.
Triton Device Code
Offsets/Pointer Arithmetic:
offs_am = pid_m * block_m
offs_bn = pid_n * block_n
offs_k = 0
Load:
a = tl._experimental_descriptor_load(a_desc_ptr, [offs_am, offs_k], [block_m, block_k], tl.float8e4nv)
b = tl._experimental_descriptor_load(b_desc_ptr, [offs_bn, offs_k], [block_n, block_k], tl.float8e4nv)
Store:
tl._experimental_descriptor_store(c_desc_ptr, accumulator, [offs_am, offs_bn])
We no longer need to calculate a pointer array for both load and store functions in the kernel. Instead, we pass a single descriptor pointer, the offsets, block size and the input datatype. This simplifies address calculation and reduces register pressure, as we no longer have to do complex pointer arithmetic in software and dedicate CUDA cores for address computation.
TMA Performance Analysis
Below, we discuss the PTX instructions for different load mechanisms on Hopper.
PTX for Loading Tile (cp.async) - H100 no TMA
add.s32 %r27, %r100, %r8;
add.s32 %r29, %r100, %r9;
selp.b32 %r30, %r102, 0, %p18;
@%p1 cp.async.cg.shared.global [ %r27 + 0 ], [ %rd20 + 0 ], 0x10, %r30;
@%p1 cp.async.cg.shared.global [ %r29 + 0 ], [ %rd21 + 0 ], 0x10, %r30;
cp.async.commit_group ;
Here, we observe the older cp.async instruction responsible for global memory copies. From the traces below we can see that both loads bypass the L1 cache. A major difference with the newer TMA load is that previously, before tiles from A and B were ready to be consumed by the Tensor Core, we would need to execute an ldmatrix instruction that operated on data contained in register files. On Hopper, the data can now be directly reused from shared memory.
Figure 5. H100 Memory Chart showing GMEM Throughput = 910.22 GB/s (Triton GEMM without TMA) for M=128, N=4096, K=4096
By leveraging TMA through the Triton API changes we mentioned above, we can investigate the PTX that Triton generates for a single 2D tile load with TMA.
PTX for Loading Tile (cp.async.bulk.tensor) - H100 using TMA
bar.sync 0;
shr.u32 %r5, %r4, 5;
shfl.sync.idx.b32 %r66, %r5, 0, 31, -1;
elect.sync _|%p7, 0xffffffff;
add.s32 %r24, %r65, %r67;
shl.b32 %r25, %r66, 7;
@%p8
cp.async.bulk.tensor.2d.shared::cluster.global.mbarrier::complete_tx::bytes [%r24], [%rd26, {%r25,%r152}], [%r19];
The cp.async.bulk.tensor.2d.shared TMA instruction is passed the destination address in shared memory, a pointer to the tensor map, the tensor map coordinates and a pointer to the mbarrier object, respectively.
Figure 6. H100 Memory Chart GMEM Throughput =1.45 TB/s (Triton GEMM with TMA) for M=128, N=4096, K=4096
For optimal performance we tuned the TMA GEMM kernel extensively. Amongst other parameters such as tile sizes, number of warps and number of pipeline stages, the biggest increase in memory throughput was observed when we increased the TMA_SIZE (descriptor size) from 128 to 512. From the above NCU profiles, we can see that the final tuned kernel has increased global memory transfer throughput from 910 GB/s to 1.45 TB/s, a 59% increase in GMEM throughput, over the non-TMA Triton GEMM kernel.
Comparison of CUTLASS and Triton FP8 GEMM and TMA Implementation - Kernel Architecture
Figure 7. Triton vs CUTLASS Ping-Pong FP8 GEMM TFLOPs, varying M with N=4096, K=4096
The above chart shows the performance of a CUTLASS Ping-Pong GEMM kernel against Triton. The Ping-Pong kernel leverages TMA differently than Triton: it makes full use of TMA’s hardware and software capabilities, while Triton currently does not. Specifically, CUTLASS supports the below TMA features that help explain the performance gaps in pure GEMM performance:
TMA Multicast
Warp Specialization
Tensor Map (TMA Descriptor) Prefetch
To put the performance numbers in perspective, below we show a ‘speed-up’ chart highlighting the latency differences on a percentage basis:
Figure 8: % Speedup of CUTLASS Ping-Pong vs Triton FP8 with TMA.
This speedup is purely kernel throughput, not including E2E launch overhead which we will discuss below.
TMA Descriptor movement - a key difference between Triton and CUTLASS with E2E performance implications
As noted previously, creation of a 2D+ dimensional TMA descriptor takes place on the host and is then transferred to the device. However, this transfer process takes place very differently depending on the implementation.
Here we showcase the differences between how Triton transfers TMA descriptors compared with CUTLASS.
Recall that TMA transfers require a special data structure, a tensor map, to be created on the CPU through the cuTensorMap API; for an FP8 GEMM kernel this means creating three descriptors, one each for A, B and C. We see below that for both the Triton and CUTLASS kernels the same CPU procedures are invoked.
Figure 9. Calls to cuTensorMapEncodeTiled (Both Triton and CUTLASS use this path)
However, for Triton, each descriptor is transferred in its own distinct copy kernel, which adds a significant amount of overhead and serves as a barrier to use this kernel in an end-to-end use inference scenario.
Figure 10. Three H2D copy kernels are launched before the kernel execution, for A, B and C
These copies are not observed in the CUTLASS implementation, due to the way that TMA descriptors are passed to the kernel. We can see from the PTX below that with CUTLASS, tensor maps are passed by value to the kernel.
.entry _ZN7cutlass13device_kernelIN49_GLOBAL__N__8bf0e19b_16_scaled_mm_c3x_cu_2bec3df915cutlass_3x_gemmIaNS_6half_tENS1_14ScaledEpilogueEN4cute5tupleIJNS5_1CILi64EEENS7_ILi128EEES9_EEENS6_IJNS7_ILi2EEENS7_ILi1EEESC_EEENS_4gemm32KernelTmaWarpSpecializedPingpongENS_8epilogue18TmaWarpSpecializedEE10GemmKernelEEEvNT_6ParamsE(
.param .align 64 .b8 _ZN7cutlass13device_kernelIN49_GLOBAL__N__8bf0e19b_16_scaled_mm_c3x_cu_2bec3df915cutlass_3x_gemmIaNS_6half_tENS1_14ScaledEpilogueEN4cute5tupleIJNS5_1CILi64EEENS7_ILi128EEES9_EEENS6_IJNS7_ILi2EEENS7_ILi1EEESC_EEENS_4gemm32KernelTmaWarpSpecializedPingpongENS_8epilogue18TmaWarpSpecializedEE10GemmKernelEEEvNT_6ParamsE_param_0[1024]
mov.b64 %rd110, _ZN7cutlass13device_kernelIN49_GLOBAL__N__8bf0e19b_16_scaled_mm_c3x_cu_2bec3df915cutlass_3x_gemmIaNS_10bfloat16_tENS1_14ScaledEpilogueEN4cute5tupleIJNS5_1CILi64EEES8_NS7_ILi256EEEEEENS6_IJNS7_ILi1EEESB_SB_EEENS_4gemm24KernelTmaWarpSpecializedENS_8epilogue18TmaWarpSpecializedEE10GemmKernelEEEvNT_6ParamsE_param_0;
add.s64 %rd70, %rd110, 704;
cvta.param.u64 %rd69, %rd70;
cp.async.bulk.tensor.2d.global.shared::cta.bulk_group [%rd69, {%r284, %r283}], [%r1880];
Figure 11. CUTLASS kernel PTX showing pass-by-value
By directly passing the TMA Descriptor as opposed to passing a global memory pointer, the CUTLASS kernel avoids the three extra H2D copy kernels and instead these copies are included in the single device kernel launch for the GEMM.
Because of the difference in how descriptors are moved to the device, the kernel latencies, including the time to prepare the tensors to be consumed by the TMA, are drastically different. For M=1-128, N=4096, K=4096 the CUTLASS pingpong kernel has an average latency of 10us, while the Triton TMA kernels complete in an average of 4ms. This is a factor of ~3330x slower and appears to be directly linked to the 3 independent kernel launches for TMA descriptor transfer by Triton.
CUDA Graphs may be one way to reduce this overhead, but given the cost created by the H2D copies, the current Triton implementation is not competitive when measured end to end. A rework of how the Triton compiler manages TMA descriptors would likely resolve this gap. We thus focused on comparing the actual compute kernel throughput, and not E2E, in our data above.
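For reference, a generic CUDA Graph capture in PyTorch looks like the sketch below; it illustrates the mechanism that could amortize repeated launch overhead, not an integration with the Triton TMA kernels discussed here.
import torch

static_a = torch.randn(128, 4096, device="cuda", dtype=torch.float16)
static_b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
static_c = torch.empty(128, 4096, device="cuda", dtype=torch.float16)

# warm up on a side stream before capture, as recommended in the PyTorch CUDA Graphs docs
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    torch.matmul(static_a, static_b, out=static_c)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    torch.matmul(static_a, static_b, out=static_c)

# later calls refill static_a/static_b in place and replay the captured work
g.replay()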
Results Summary
Figure 12. Triton FP8 TMA GEMM TFLOPs Comparison
Figure 13. Triton FP8 TMA GEMM TFLOPs Comparison Table
The above chart and table summarize the gains we’ve been able to achieve on a single NVIDIA H100 for FP8 GEMM, by leveraging the TMA hardware unit, over non-TMA Triton kernels and high performance CUDA (cuBLAS) kernels. The key point to note is this kernel’s superior scaling (with the batch size) properties over the competition. The problem sizes we benchmarked on are representative of the matrix shapes found in small-to-medium batch size LLM inference. Thus, TMA GEMM kernel performance in the mid-M regime (M=32 to M=128) will be critical for those interested in leveraging this kernel for FP8 LLM deployment use cases, as the FP8 compressed data type allows larger matrices to fit in GPU memory.
To summarize our analysis, the TMA implementation in Triton and CUTLASS differ in terms of full featureset support (multicast, prefetch etc.) and how the TMA Descriptor is passed to the GPU kernel. If this descriptor is passed in a manner that more closely matches the CUTLASS kernel (pass-by-value), the extraneous H2D copies could be avoided and thus the E2E performance would be greatly improved.
Future Work
For future research, we plan to improve upon these results by working with the community to incorporate the CUTLASS architecture of TMA loads into Triton, as well as investigating the Cooperative kernel for FP8 GEMM, a modified strategy relative to the Ping-Pong kernel.
In addition, once features like thread block clusters and TMA atomic operations are enabled in Triton, we may be able to get further speedups by leveraging the SplitK strategy in the TMA GEMM Kernel, as atomic operations on Hopper can be performed in Distributed Shared Memory (DSMEM) as opposed to L2 Cache. We also note the similarities of NVIDIA Hopper GPUs with other AI hardware accelerators like Google’s TPU and IBM’s AIU which are dataflow architectures. On Hopper, data can now “flow” from GMEM to a network of connected SMs due to the additions of TMA, which we discussed extensively in this blog, and DSMEM, which we plan to cover in a future post.
October 30, 2024
Triton Kernel Compilation Stages
by
Sara Kokkila-Schumacher*, Brian Vaughan*, Raghu Ganti*, and Less Wright+ (*IBM Research, +Meta)
The Triton open-source programming language and compiler offers a high-level, python-based approach to create efficient GPU code. In this blog, we highlight the underlying details of how a triton program is compiled and the intermediate representations. For an introduction to Triton, we refer readers to this blog.
Triton Language and Compilation
The Triton programming language supports different types of modern GPUs and follows a blocked programming approach. As an example, we will follow the Triton vector add tutorial with minor modifications. The vector addition kernel and helper function are defined as:
import torch
import triton
import triton.language as tl
@triton.jit
def add_kernel(x_ptr,  # *Pointer* to first input vector.
               y_ptr,  # *Pointer* to second input vector.
               output_ptr,  # *Pointer* to output vector.
               n_elements,
               BLOCK_SIZE: tl.constexpr,
               ):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    output = x + y
    tl.store(output_ptr + offsets, output, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor):
    output = torch.empty_like(x)
    assert x.is_cuda and y.is_cuda and output.is_cuda
    n_elements = output.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']), )
    triton_kernel = add_kernel[grid](x, y, output, n_elements, BLOCK_SIZE=1024)
    torch.cuda.synchronize()
    # Save compilation stages - some of the stages identified here are specific to NVIDIA devices:
    with open('triton_IR.txt', 'w') as f:
        print(triton_kernel.asm['ttir'], file=f)
    with open('triton_TTGIR.txt', 'w') as f:
        print(triton_kernel.asm['ttgir'], file=f)
    with open('triton_LLVMIR.txt', 'w') as f:
        print(triton_kernel.asm['llir'], file=f)
    with open('triton_PTX.ptx', 'w') as f:
        print(triton_kernel.asm['ptx'], file=f)
    with open('triton_cubin.txt', 'w') as f:
        print(triton_kernel.asm['cubin'], file=f)
    return output
torch.manual_seed(0)
size = 98432
x = torch.rand(size, device='cuda')
y = torch.rand(size, device='cuda')
output_torch = x + y
output_triton = add(x, y)
print(output_torch)
print(output_triton)
print(f'The maximum difference between torch and triton is '
f'{torch.max(torch.abs(output_torch - output_triton))}')
The Triton vector add kernel includes the @triton.jit decorator. The Triton compiler will compile functions marked by @triton.jit, which lowers the function through multiple compilation stages. The helper function add allocates the output tensor, computes the appropriate GPU grid size, and additionally saves the intermediate compilation stages.
Focusing on the compilation process, the Triton kernel is lowered to device specific assembly through a series of stages outlined in the following figure.
The kernel is compiled by first walking the abstract syntax tree (AST) of the decorated python function to create the Triton Intermediate Representation (Triton-IR). The Triton-IR is an unoptimized, machine independent intermediate representation. It introduces tile-level programming requirements and is based on the open-source LLVM compiler project. Next the Triton compiler optimizes and converts the Triton-IR into the stages Triton-GPU IR (Triton-TTGIR) and then LLVM-IR. Both the Triton-IR and Triton-GPUIR representations are written as MLIR dialects, where MLIR is a subproject of LLVM that aims to improve compilation for heterogeneous hardware.
For the Triton vector add tutorial kernel, the example Triton IR snippet is:
module {
tt.func public @add_kernel(%arg0: !tt.ptr<f32> {tt.divisibility = 16 : i32} loc("/u/saraks/triton_blog/01-vector-add.py":28:0), %arg1: !tt.ptr<f32> {tt.divisibility = 16 : i32} loc("/u/saraks/triton_blog/01-vector-add.py":28:0), %arg2: !tt.ptr<f32> {tt.divisibility = 16 : i32} loc("/u/saraks/triton_blog/01-vector-add.py":28:0), %arg3: i32 {tt.divisibility = 16 : i32} loc("/u/saraks/triton_blog/01-vector-add.py":28:0)) attributes {noinline = false} {
%c1024_i32 = arith.constant 1024 : i32 loc(#loc1)
%0 = tt.get_program_id x : i32 loc(#loc2)
%1 = arith.muli %0, %c1024_i32 : i32 loc(#loc3)
%2 = tt.make_range {end = 1024 : i32, start = 0 : i32} : tensor<1024xi32> loc(#loc4)
%3 = tt.splat %1 : i32 -> tensor<1024xi32> loc(#loc5)
%4 = arith.addi %3, %2 : tensor<1024xi32> loc(#loc5)
%5 = tt.splat %arg3 : i32 -> tensor<1024xi32> loc(#loc6)
%6 = arith.cmpi slt, %4, %5 : tensor<1024xi32> loc(#loc6)
%7 = tt.splat %arg0 : !tt.ptr<f32> -> tensor<1024x!tt.ptr<f32>> loc(#loc7)
%8 = tt.addptr %7, %4 : tensor<1024x!tt.ptr<f32>>, tensor<1024xi32> loc(#loc7)
%9 = tt.load %8, %6 : tensor<1024x!tt.ptr<f32>> loc(#loc8)
%10 = tt.splat %arg1 : !tt.ptr<f32> -> tensor<1024x!tt.ptr<f32>> loc(#loc9)
%11 = tt.addptr %10, %4 : tensor<1024x!tt.ptr<f32>>, tensor<1024xi32> loc(#loc9)
%12 = tt.load %11, %6 : tensor<1024x!tt.ptr<f32>> loc(#loc10)
%13 = arith.addf %9, %12 : tensor<1024xf32> loc(#loc11)
%14 = tt.splat %arg2 : !tt.ptr<f32> -> tensor<1024x!tt.ptr<f32>> loc(#loc12)
%15 = tt.addptr %14, %4 : tensor<1024x!tt.ptr<f32>>, tensor<1024xi32> loc(#loc12)
tt.store %15, %13, %6 : tensor<1024x!tt.ptr<f32>> loc(#loc13)
tt.return loc(#loc14)
} loc(#loc)
} loc(#loc)
Notice that the main functions in the Triton kernel are now represented as:
At the Triton IR stage, the %arg0: !tt.ptr<f32> and the following tensor references show that the intermediate representation is already specialized by the data type.
We ran this example on a Tesla V100-SXM2-32GB GPU with CUDA Version 12.2, Python version 3.11.9, and PyTorch 2.4.1 with the default version of Triton that is installed with PyTorch. On this device, the simple vector addition has the following Triton GPU IR snippet with lines omitted for clarity:
#blocked = #triton_gpu.blocked<{sizePerThread = [4], threadsPerWarp = [32], warpsPerCTA = [4], order = [0]}>
module attributes {"triton_gpu.num-ctas" = 1 : i32, "triton_gpu.num-warps" = 4 : i32, triton_gpu.target = "cuda:70", "triton_gpu.threads-per-warp" = 32 : i32} {
tt.func public @add_kernel(%arg0: !tt.ptr<f32> {tt.divisibility = 16 : i32}
⋮
%9 = tt.load %8, %6 : tensor<1024x!tt.ptr<f32>, #blocked> loc(#loc8)
⋮
%12 = tt.load %11, %6 : tensor<1024x!tt.ptr<f32>, #blocked> loc(#loc10)
%13 = arith.addf %9, %12 : tensor<1024xf32, #blocked> loc(#loc11)
⋮
tt.store %15, %13, %6 : tensor<1024x!tt.ptr<f32>, #blocked> loc(#loc13)
⋮
} loc(#loc)
} loc(#loc)
At this stage, some of the hardware specific information is included. For example, the compute capability is included, along with details on how the tensors are distributed to cores and warps (or, for AMD GPUs, wavefronts). In this example, the tensors are represented with a #blocked layout. In this encoding, each warp owns a contiguous portion of the tensor. Other possible memory optimizations currently include layouts such as slice (restructures and distributes a tensor along a dimension), dot_op (optimized layout for block matrix product), shared (indicates GPU shared memory), nvidia_mma (produced by NVIDIA tensor cores), amd_mfma (produced by AMD MFMA matrix core), and amd_wmma (produced by AMD WMMA matrix core). As announced at the recent Triton conference, this layout representation will transition to a new linear layout to unify layouts within and across backends. The stage from Triton-GPUIR to LLVM-IR converts the Triton-GPUIR to LLVM’s representation. At this time, Triton has third-party backend support for NVIDIA and AMD devices, but other device support is under active development by the open-source community.
A small subset of the LLVM-IR for the vector add arguments is shown below for illustration:
%19 = extractvalue { i32, i32, i32, i32 } %18, 0, !dbg !16
%39 = extractvalue { i32, i32, i32, i32 } %38, 0, !dbg !18
%23 = bitcast i32 %19 to float, !dbg !16
%43 = bitcast i32 %39 to float, !dbg !18
%56 = fadd float %23, %43, !dbg !19
After some pointer arithmetic and an inline assembly call to retrieve the data from global memory, the vector elements are extracted and cast to the correct type. Finally they are added together and later written to global memory through an inline assembly expression.
The final stages of the Triton compilation process lower the LLVM-IR to a device specific binary. For the example vector add, on an NVIDIA GPU, the next intermediate is PTX (Parallel Thread Execution). The low-level PTX syntax specifies the execution at the thread level of NVIDIA devices, starting with the CUDA 1.0 release. For an in-depth guide on PTX, see NVIDIA’s documentation. In the vector add, the kernel parameters are passed from the host to the kernel, addresses are assigned and mov instructions facilitate the thread-level data access, ultimately representing the element addition calls with add.f32 such as the example below:
add.f32 %f17, %f1, %f9  // add type float32, output register, input register for x, input register for y
The Triton compiler orchestrates the final stage with different hardware backends managing how the assembly code is compiled into binary. The Triton kernel is now ready for use.
Summary
Triton provides a high-level abstraction to program and compile kernels for different types of hardware. In this post, we highlight the different stages of the Triton code representations and Triton compiler. For details on including custom Triton kernels or accelerating different workloads with Triton kernels, check out the PyTorch Triton tutorial, the blog posts on Triton GPTQ kernels, Llama3 FP8 Inference with Triton, and CUDA-Free Inference for LLMs, or the PyTorch 2.2 Section on Triton code generation.
May 22, 2023
Out of the box acceleration and memory savings of 🤗 decoder models with PyTorch 2.0
by
Felix Marty, Younes Belkada, Hamid Shojanazeri, Driss Guessous
As part of PyTorch 2.0 release, an accelerated implementation of the attention mechanism as part of the “Better Transformer” project (and known in PyTorch as Accelerated Transformers) has been added natively into PyTorch as torch.nn.functional.scaled_dot_product_attention. This implementation leverages fused kernels from FlashAttention and Memory-efficient attention, and supports both training and inference.
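For orientation, a direct call looks like the sketch below (shapes are illustrative); the operator dispatches to a fused kernel such as FlashAttention or memory-efficient attention when the inputs qualify.
import torch
import torch.nn.functional as F

# (batch, num_heads, seq_len, head_dim)
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)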
We also release a notebook showcasing an example of this integration here
After seeing 20-30% speedups at inference for diffusion models, we went ahead and implemented an integration with 🤗 Transformers models through the 🤗 Optimum library. Similar to the previous integration for encoder models, the integration replaces modules from Transformers with efficient implementations that use torch.nn.functional.scaled_dot_product_attention. The usage is as follows:
import torch
from optimum.bettertransformer import BetterTransformer
from transformers import AutoModelForCausalLM

with torch.device("cuda"):
    model = AutoModelForCausalLM.from_pretrained("gpt2-large", torch_dtype=torch.float16)

model = BetterTransformer.transform(model)

# do your inference or training here

# if training and want to save the model
model = BetterTransformer.reverse(model)
model.save_pretrained("fine_tuned_model")
model.push_to_hub("fine_tuned_model")
Summarizing our findings below about torch.nn.functional.scaled_dot_product_attention:
You may be surprised by the wide range of memory savings and speedups. In this blog post, we discuss our benchmarks, where this feature shines and upcoming improvements in future PyTorch releases.
In the next release of transformers you will just need to install the proper version of optimum and run:
model = model.to_bettertransformer()
This converts your model using the BetterTransformer API. You can already try this feature out by installing transformers from source.
Benchmark and usage with 🤗 Transformers
torch.nn.functional.scaled_dot_product_attention is usable with any architecture that uses standard attention; namely, it replaces the following boilerplate code:
# native scaled_dot_product_attention is equivalent to the following:
import math
import torch

def eager_sdpa(query, key, value, attn_mask, dropout_p, is_causal, scale):
    L, S = query.size(-2), key.size(-2)
    scale_factor = 1 / math.sqrt(query.size(-1)) if scale is None else scale
    attn_mask = torch.ones(L, S, dtype=torch.bool, device=query.device).tril(diagonal=0) if is_causal else attn_mask
    attn_mask = torch.zeros(L, S, dtype=query.dtype, device=query.device).masked_fill(~attn_mask, -float("inf")) if attn_mask.dtype == torch.bool else attn_mask
    attn_weight = torch.softmax((query @ key.transpose(-2, -1) * scale_factor) + attn_mask, dim=-1)
    attn_weight = torch.dropout(attn_weight, dropout_p, train=True)
    return attn_weight @ value
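As a quick, hedged sanity check (shapes and tolerances are illustrative assumptions, not from the original post), the eager reference above should match the fused implementation closely:
import torch
import torch.nn.functional as F

q, k, v = [torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16) for _ in range(3)]
mask = torch.ones(128, 128, dtype=torch.bool, device="cuda")  # all-True mask, i.e. no masking
ref = eager_sdpa(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False, scale=None)
out = F.scaled_dot_product_attention(q, k, v)
torch.testing.assert_close(ref, out, atol=1e-2, rtol=1e-2)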
In the 🤗 Optimum integration with Transformers models, the following architectures are supported for now: gpt2, gpt-neo, gpt-neox, gptj, t5, bart, codegen, pegasus, opt, LLaMA, blenderbot, m2m100. You can expect this list to be extended in the near future!
To validate the benefits from the native scaled dot-product attention, we ran inference and training benchmarks, whose results are presented below.
Inference benchmark on a single A10G GPU, AWS g5.4xlarge instance
Training benchmark on a single A10G GPU, AWS g5.4xlarge instance
Training benchmark on a single A100-SXM4-80GB, Nvidia DGX
Out of this benchmark, the most interesting finding is that native SDPA allows for the usage of longer sequence lengths and batch sizes without running into out of memory issues. Moreover, up to 20% speedups can be seen during inference, and even larger during training.
As seen on the training benchmarks, it appears that smaller head dimension brings higher speedups and memory savings, which we will discuss in the following section.
The implementation supports multi-GPU settings as well, thanks to the 🤗 Accelerate library, by passing device_map="auto" to the from_pretrained method. Here are some results for training on two A100-SXM4-80GB.
Training benchmark on two A100-SXM4-80GB, Nvidia DGX, using 🤗 Accelerate library for distributed training
Note that some kernels support only the sm_80 compute capability (which is the one from A100 GPUs), which limits usability on a wide range of hardware, notably if the head dimension is not a power of two. For example, as of PyTorch 2.0.0 during training, opt-2.7b (headdim=80) and gpt-neox-20b (headdim=96) cannot dispatch to a kernel using flash attention, unless run on an A100 GPU. Better kernels may be developed in the future: https://github.com/pytorch/pytorch/issues/98140#issuecomment-1518101895
Flash Attention, Memory-efficient attention & math differences
The native scaled_dot_product_attention relies on three possible backend implementations: flash attention, memory-efficient attention, and the so-called math implementation which provides a hardware-neutral fallback for all PyTorch platforms.
When fused kernels are available for a given problem size, flash attention or memory-efficient attention will be used, effectively allowing for a lower memory footprint: in the memory-efficient attention case, O(N) memory allocations are done in GPU global memory instead of the classic O(N^2) of the traditional eager attention implementation. With flash attention, a reduced number of memory accesses (reads and writes) is expected, giving both speedups and memory savings.
The “math” implementation is simply an implementation using PyTorch's C++ API. An interesting detail of this implementation is that the query and key tensors are scaled individually for numerical stability, thus launching two aten::div operations instead of possibly only one in an eager implementation that does not contain this optimization for numerical stability.
Head dimension influence on speedups, memory savings
Benchmarking torch.nn.functional.scaled_dot_product_attention, we notice a decrease in the speedup / memory gains as the head dimension increases. This is an issue for some architectures like EleutherAI/gpt-neo-2.7B, which has a relatively large head dimension of 128, or EleutherAI/gpt-j-6B (and derived models such as PygmalionAI/pygmalion-6b), which has a head dimension of 256 (these currently do not dispatch to fused kernels at all, as the head dimension is too large).
This trend can be seen in the figures below, where torch.nn.functional.scaled_dot_product_attention is benchmarked standalone versus the eager implementation above. Moreover, we use the torch.backends.cuda.sdp_kernel context manager to force the usage of the math, flash attention, and memory-efficient attention implementations, respectively.
Using memory-efficient attention SDP kernel (forward-only), A100
Using math (without dropout), A100
Using flash attention SDP kernel (without dropout), A100
Using memory-efficient attention SDP kernel (without dropout), A100
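For reference, here is a minimal sketch of forcing a specific backend with the torch.backends.cuda.sdp_kernel context manager; shapes are illustrative assumptions:
import torch
import torch.nn.functional as F
from torch.backends.cuda import sdp_kernel

q, k, v = [torch.randn(16, 16, 512, 64, device="cuda", dtype=torch.float16) for _ in range(3)]

# Force the flash attention backend only
with sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    out_flash = F.scaled_dot_product_attention(q, k, v)

# Force the math (fallback) backend only
with sdp_kernel(enable_flash=False, enable_math=True, enable_mem_efficient=False):
    out_math = F.scaled_dot_product_attention(q, k, v)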
We see that for the same problem size, be it for inference-only or training, the speedup decreases with higher head dimension, e.g. from 3.4x for headdim=8 to 1.01x for headdim=128 using flash attention kernel.
The reduced memory saving is expected with larger head dimensions. Recall the standard attention computation: O = softmax(QK^T * scale) V, where Q, K, V are of shape N x d and the intermediate scores and attention weights are of shape N x N.
Due to the intermediate computations, the global memory footprint is 2 * N * N + N * d in this standard step by step computation. Memory-efficient attention proposes to iteratively update the softmax renormalization constant and moving its computation at the very end, allowing for only a constant output memory allocation N * d.
Thus, the memory saving ratio is (2 * N * N + N * d) / (N * d) = 2 * N / d + 1, which decreases with larger head dimension.
In flash attention, the tradeoff is between the head dimension d and the shared memory size M of a GPU streaming multiprocessor, with a total number of memory accesses of O(N² * d²/M). Thus, the memory accesses scale quadratically in the head dimension, contrary to the standard attention that scales linearly. The reason is that in flash attention, for larger head dimension d, the key and value K, V need to be split into more blocks to fit into shared memory, and in turn each block needs to load the full query Q and output O.
Thus, the highest speedups for flash attention are in a regime where the ratio d² / M is small enough.
Current limitations as of PyTorch 2.0.0
Absence of a scale argument
As of PyTorch 2.0.0, torch.nn.functional.scaled_dot_product_attention has no scale argument and uses the default scaling factor 1 / sqrt(d_k), where d_k is the head dimension.
However, some architectures, such as OPT or T5, do not use scaling in the attention, which as of PyTorch 2.0.0 forces them to artificially rescale the query before the scaled_dot_product_attention call. This introduces an unnecessary overhead, as an additional multiplication is necessary, on top of unneeded divisions in the attention.
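As a hedged illustration of this workaround (not code from the post, shapes are assumptions): pre-multiplying the query cancels SDPA's internal 1/sqrt(d_k) scaling for architectures that do not want any scaling.
import math
import torch
import torch.nn.functional as F

# Illustrative shapes: [batch, heads, seq, head_dim]
query, key, value = [torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16) for _ in range(3)]

# Pre-multiplying the query cancels the internal 1/sqrt(d_k) scaling
unscaled_query = query * math.sqrt(query.size(-1))
out = F.scaled_dot_product_attention(unscaled_query, key, value)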
A fix for this issue has been merged in PyTorch repository.
Support of flash attention / memory-efficient attention with custom mask
As of PyTorch 2.0.0, when passing a custom attention mask, flash attention and memory-efficient attention can not be used. In this case, scaled_dot_product_attention automatically dispatches to the C++ implementation.
However, as we have seen, some architectures require a custom attention mask, such as T5, which uses positional bias. Moreover, with a batch size larger than one where some inputs may be padded, a custom attention mask also needs to be passed. For this latter case, an alternative would be to use NestedTensor, which SDPA supports.
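A minimal sketch of the NestedTensor path, assuming illustrative shapes (two sequences of lengths 5 and 3, 8 heads, head dimension 64); this is not code from the post:
import torch
import torch.nn.functional as F

def rand_nested(lengths, heads=8, head_dim=64):
    # one [heads, seq_len, head_dim] tensor per (un-padded) sequence
    return torch.nested.nested_tensor(
        [torch.randn(heads, l, head_dim, device="cuda", dtype=torch.float16) for l in lengths]
    )

query, key, value = rand_nested([5, 3]), rand_nested([5, 3]), rand_nested([5, 3])
# No padding and no explicit mask: each sequence only attends within itself.
out = F.scaled_dot_product_attention(query, key, value)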
This limited support for custom masks thus limits the benefits from SDPA in these specific cases, although we can hope for an extended support in the future.
Note that xformers, from which PyTorch’s SDPA partially takes inspiration, currently supports arbitrary attention masks: https://github.com/facebookresearch/xformers/blob/658ebab39545f180a6075385b3897921623d6c3b/xformers/ops/fmha/cutlass.py#L147-L156 . HazyResearch implementation of flash attention also supports an equivalent implementation of padding, as a cumulative sequence length array is used along with packed query/key/values - similar in essence to NestedTensor.
In conclusion
Using torch.nn.functional.scaled_dot_product_attention is a free-lunch optimization: it makes your code more readable, uses less memory, and is faster in most common cases.
Although the implementation in PyTorch 2.0.0 still has minor limitations, inference and training already massively benefit from SDPA in most cases. We encourage you to use this native implementation, whether to train or deploy your PyTorch models, and for 🤗 Transformers models as a one-line transformation!
In the future, we would like to adapt the API to enable users to use SDPA in encoder-based models as well.
We thank Benjamin Lefaudeux, Daniel Haziza and Francisco Massa for their advice on the head dimension influence, as well as Michael Gschwind, Christian Puhrsch and Driss Guessous for their feedback on the blog post!
Benchmark reproduction
The benchmark presented in this post was done using torch==2.0.0, transformers==4.27.4, accelerate==0.18.0 and optimum==1.8.0.
The benchmarks can be easily reproduced using the scripts for inference, training for 🤗 Transformers models, and standalone SDPA.
June 20, 2024
Accelerating Neural Network Training with Semi-Structured (2:4) Sparsity
by Jesse Cai, Daniel Haziza, Supriya Rao
Over the past year, we’ve added support for semi-structured (2:4) sparsity into PyTorch. With just a few lines of code, we were able to show a 10% end-to-end inference speedup on segment-anything by replacing dense matrix multiplications with sparse matrix multiplications.
However, matrix multiplications are not unique to neural network inference - they happen during training as well. By expanding on the core primitives we used earlier to accelerate inference, we were also able to accelerate model training. We wrote a replacement nn.Linear layer, SemiSparseLinear, that is able to achieve a 1.3x speedup across the forwards + backwards pass of the linear layers in the MLP block of ViT-L on a NVIDIA A100.
End-to-end, we see a wall time reduction of 6% for a DINOv2 ViT-L training, with virtually no accuracy degradation out of the box (82.8 vs 82.7 on ImageNet top-1 accuracy).
We compare 2 strategies for training a ViT model for 125k iterations on 4x NVIDIA A100s: either fully dense (blue), or sparse for 70% of the training, then dense (orange). Both achieve similar results on the benchmarks, but the sparse variant trains 6% faster. For both experiments, we evaluate the intermediate checkpoints with and without sparsity.
As far as we are aware, this is the first OSS implementation of accelerated sparse training and we’re excited to provide a user API in torchao. You can try accelerating your own training runs with just a few lines of code:
# Requires torchao and pytorch nightlies and CUDA compute capability 8.0+
import torch
from torchao.sparsity.training import (
    SemiSparseLinear,
    swap_linear_with_semi_sparse_linear,
)

model = torch.nn.Sequential(torch.nn.Linear(1024, 4096)).cuda().half()

# Specify the fully-qualified-name of the nn.Linear modules you want to swap
# (for the Sequential defined above, the single Linear layer is named "0")
sparse_config = {
    "0": SemiSparseLinear
}

# Swap nn.Linear with SemiSparseLinear, you can run your normal training loop after this step
swap_linear_with_semi_sparse_linear(model, sparse_config)
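After the swap, a regular training step works unchanged; continuing the snippet above, a small illustrative sketch (the input shape is an assumption):
x = torch.randn(64, 1024, device="cuda", dtype=torch.half)
out = model(x)  # dispatches to the runtime-sparsified matmul
out.sum().backward()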
How does this work?
The general idea behind sparsity is simple: skip calculations involving zero-valued tensor elements to speed up matrix multiplication. However, simply setting weights to zero isn’t enough, as the dense tensor still contains these pruned elements and dense matrix multiplication kernels will continue to process them, incurring the same latency and memory overhead. To achieve actual performance gains, we need to replace dense kernels with sparse kernels that intelligently bypass calculations involving pruned elements.
These kernels work on sparse matrices, which remove the pruned elements and store the specified elements in a compressed format. There are many different sparse formats, but we’re particularly interested in semi-structured sparsity, also known as 2:4 structured sparsity or fine-grained structured sparsity or more generally N:M structured sparsity.
2:4 sparse compressed representation. Original Source
A 2:4-sparse matrix is a matrix where at most 2 elements are non-zero for every 4 elements, as illustrated in the image above. Semi-structured sparsity is attractive because it exists in a goldilocks spot of performance and accuracy:
Illustration of 2:4 (sparse) matrix multiplication on NVIDIA GPUs. Original source
Accelerating inference with semi-structured sparsity is straightforward. Since our weights are fixed during inference, we can prune and compress the weight ahead of time (offline) and store the compressed sparse representation instead of our dense tensor.
Then, instead of dispatching to dense matrix multiplication we dispatch to sparse matrix multiplication, passing in the compressed sparse weight instead of the normal dense one. For more information about accelerating models for inference using 2:4 sparsity, please refer to our tutorial.
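For context, here is a hedged sketch of that inference flow, following the pattern of the PyTorch 2:4 sparsity tutorial; the mask below is a toy 2:4 pattern standing in for a real pruning strategy, and the shapes are illustrative:
import torch
from torch.sparse import to_sparse_semi_structured

linear = torch.nn.Linear(10240, 3072).half().cuda().eval()

# Toy 2:4 mask: keep 2 out of every 4 weights along each row
mask = torch.tensor([0, 0, 1, 1], dtype=torch.bool, device="cuda").tile((3072, 2560))
linear.weight = torch.nn.Parameter(
    to_sparse_semi_structured(linear.weight.masked_fill(~mask, 0))
)

x = torch.rand(3072, 10240).half().cuda()
with torch.inference_mode():
    out = linear(x)  # dispatches to sparse matrix multiplication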
Extending sparse inference acceleration to training
In order to use sparsity to reduce the training time of our models, we need to consider when the mask is calculated, as once we store the compressed representation the mask is fixed.
Training with a fixed mask applied to an existing trained dense model (also known as pruning) does not degrade accuracy, but this requires two training runs - one to obtain the dense model and another to make it sparse, offering no speedups.
Instead we’d like to train a sparse model from scratch (dynamic sparse training), but training from scratch with a fixed mask will lead to a significant drop in evaluations, as the sparsity mask would be selected at initialization, when the model weights are essentially random.
To maintain the accuracy of the model when training from scratch, we prune and compress the weights at runtime, so that we can calculate the optimal mask at each step of the training process.
Conceptually you can think of our approach as an approximate matrix multiplication technique, where we `prune_and_compress` and dispatch to `sparse_GEMM` in less time than a `dense_GEMM` call would take. This is difficult because the native pruning and compression functions are too slow to show speedups.
Given the shapes of our ViT-L training matrix multiplications (13008x4096x1024), we measured the runtime of a dense and sparse GEMM respectively at 538us and 387us. In other words, the pruning and compression step of the weight matrix must run in less than 538-387=151us to have any efficiency gain. Unfortunately, the compression kernel provided in cuSPARSELt already takes 380us (without even considering the pruning step!).
Given the max NVIDIA A100 memory IO (2TB/s), and considering that a prune and compress kernel would be memory bound, we could theoretically prune and compress our weight (4096x1024x2 bytes=8MB) in 4us (8MB / 2TB/s)! And in fact, we were able to write a kernel that prunes and compresses a matrix into 2:4-sparse format, and runs in 36 us (10x faster than the compression kernel in cuSPARSELt), making the entire GEMM (including the sparsification) faster. Our kernel is available for use in PyTorch.
Our custom sparsification kernel, which includes pruning + compression, is ~30% faster across a linear layer forward+backward. Benchmarks run on a NVIDIA A100-80GB GPU.
Writing a performant runtime sparsification kernel
There were multiple challenges we faced in order to implement a performant runtime sparsification kernel, which we will explore below.
1) Handling the backwards pass
For the backwards pass, we need to calculate dL/dX and dL/dW for the gradient update and the subsequent layer, which means we need to calculate xWT and xTW respectively.
Overview of runtime sparsification for training acceleration (FW + BW pass)
However this is problematic, because the compressed representation cannot be transposed, since there’s no guarantee that the tensor is 2:4 sparse in both directions.
Both matrices are valid 2:4 matrices. However, the right one is no longer a valid 2:4 matrix once transposed because one column contains more than 2 elements
Therefore, we prune a 4x4 tile, instead of a 1x4 strip. We greedily preserve the largest values, ensuring that we take at most 2 values for each row / column. While this approach is not guaranteed to be optimal, as we sometimes only preserve 7 values instead of 8, it efficiently calculates a tensor that is 2:4 sparse both row-wise and column-wise.
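To make the tile-pruning rule concrete, here is a hedged, pure-PyTorch reference sketch of the greedy selection (not the actual CUDA kernel): visit values from largest to smallest magnitude and keep each one as long as its row and column still have fewer than 2 kept values.
import torch

def prune_4x4_tile(tile: torch.Tensor) -> torch.Tensor:
    assert tile.shape == (4, 4)
    kept = torch.zeros_like(tile, dtype=torch.bool)
    row_counts, col_counts = [0] * 4, [0] * 4
    # Visit elements from largest to smallest magnitude
    for flat_idx in torch.argsort(tile.abs().flatten(), descending=True).tolist():
        r, c = divmod(flat_idx, 4)
        if row_counts[r] < 2 and col_counts[c] < 2:
            kept[r, c] = True
            row_counts[r] += 1
            col_counts[c] += 1
    # Sometimes only 7 values are kept instead of 8, as noted above
    return tile * kept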
We then compress both the packed tensor and the packed transpose tensor, storing the transpose tensor for the backwards pass. By calculating both the packed and packed transpose tensor at the same time, we avoid a secondary kernel call in the backwards pass.
Our kernel prunes the weight matrix in registers and writes the compressed values to global memory. It also prunes W.t at the same time, which is needed for the backward pass, minimizing memory IO.
There’s some additional transpose trickery needed to handle the backwards pass - the underlying hardware only supports operations where the first matrix is sparse. For weight sparsification during inference, when we need to calculate xWT we rely on transpose properties to swap the order of the operands.
During inference, we use torch.compile to fuse the outer transpose into subsequent pointwise ops in order to avoid paying a performance penalty.
However in the case of the backwards pass of training, we have no subsequent pointwise op to fuse with. Instead, we fuse the transposition into our matrix multiplication by taking advantage of cuSPARSELt’s ability to specify the row / column layout of the result matrix.
2) Kernel tiling for efficient memory-IO
In order for our kernel to be as efficient as possible, we want to coalesce our reads / writes, as we found that memory IO to be the main bottleneck. This means that within a CUDA thread, we want to read/write chunks of 128 bytes at a time, so that multiple parallel reads/writes can be coalesced into a single request by the GPU memory controller.
Therefore, instead of having a thread handle a single 4x4 tile, which is only 4x4x2 = 32 bytes, we decided that each thread will handle four 4x4 tiles (i.e., an 8x8 tile), which allows us to operate on 8x8x2 = 128-byte chunks.
3) Sorting elements in a 4x4 tile without warp-divergence
For each individual 4x4 tile within our thread we calculate a bitmask that specifies which elements to prune and which elements to keep. To do this we sort all 16 elements and greedily preserve elements, so long as they do not break our 2:4 row / col constraint. This preserves only the weights with the largest values.
Crucially we observe that we are only ever sorting a fixed number of elements, so by using a branchless sorting network, we can avoid warp divergence.
For clarity, the transposed packed tensor and metadata are omitted. Sorting network diagram taken from Wikipedia.
Warp divergence occurs when we have conditional execution within a warp. In CUDA, work-items in the same work group (thread block) are dispatched at the hardware level in batches (warps). If we have conditional execution, such that some work-items in the same batch run different instructions, then they are masked when the warp is dispatched, or dispatched sequentially.
For example, if we have some code like if (condition) do(A) else do(B), where condition is satisfied by all the odd-numbered work items, then the total runtime of this conditional statement is do(A) + do(B), since we would dispatch do(A) for all odd-numbered work-items, masking out even-numbered work-items, and do(B) for all even numbered work-items, masking out odd-numbered work-items. This answer provides more information about warp divergence.
4) Writing the compressed matrices and metadata
Once the bitmask has been computed, the weight data has to be written back in a compressed format in global memory. This is not trivial, because the data needs to stay in registers, and it's not possible to index registers (e.g., C[i++] = a prevents us from storing C in registers). Furthermore, we found that nvcc was using many more registers than we expected, which caused register spilling and impacted global performance. We write this compressed matrix to global memory in column-major format to make the writes more efficient.
We also need to write the cuSPARSELt metadata as well. This metadata layout is quite similar to the one from the open-source CUTLASS library and is optimized for being loaded efficiently through shared-memory in the GEMM kernel with the PTX ldmatrix instruction.
However, this layout is not optimized to be written efficiently: the first 128 bits of the metadata tensor contains metadata about the first 32 columns of the rows 0, 8, 16 and 24. Recall that each thread handles an 8x8 tile, which means that this information is scattered across 16 threads.
We rely on a series of warp-shuffle operations, once for the original and transposed representation respectively to write the metadata. Fortunately, this data represents less than 10% of the total IO, so we can afford to not fully coalesce the writes.
DINOv2 Sparse Training: Experimental Setup and Results
For our experiments, the ViT-L model is trained on ImageNet for 125k steps using the DINOv2 method. All our experiments were run on 4x AMD EPYC 7742 64-core CPUs and 4x NVIDIA A100-80GB GPUs. During sparse training, the model is trained with 2:4 sparsity enabled for the first part of the training, where only half of the weights are enabled. This sparsity mask on the weights is dynamically recomputed at every step, as weights are continuously updated during the optimization. For the remaining steps, the model is trained densely, producing a final model without 2:4 sparsity (except the 100% sparse training setup), which is then evaluated.
During the sparse training steps, in the backward pass we obtain a dense gradient for the sparse weights. For the gradient descent to be sound, we should also sparsify this gradient before using it in the optimizer to update the weights. Instead of doing that, we use the full dense gradient to update the weights - we found this to work better in practice: this is the STE (Straight Through Estimator) strategy. In other words, we update all the parameters at every step, even the ones we don’t use.
Conclusion and Future Work
In this blog post, we've shown how to accelerate neural network training with semi-structured sparsity and explained some of the challenges we faced. We were able to achieve a 6% end-to-end speedup on DINOv2 training with a small 0.1 pp accuracy drop.
There are several areas of expansion for this work:
If you are interested in these problems, please feel free to open an issue / PR in torchao, a community we’re building for architecture optimization techniques like quantization and sparsity. Additionally, if you have general interest in sparsity please reach out in CUDA-MODE (#sparsity)
August 07, 2024
FlexAttention: The Flexibility of PyTorch with the Performance of FlashAttention
by Team PyTorch: Driss Guessous, Yanbo Liang, Joy Dong, Horace He
In theory, Attention is All You Need. In practice, however, we also need optimized attention implementations like FlashAttention.
Although these fused attention implementations have substantially improved performance and enabled long contexts, this efficiency has come with a loss of flexibility. You can no longer try out a new attention variant by writing a few PyTorch operators - you often need to write a new custom kernel! This operates as a sort of “software lottery” for ML researchers - if your attention variant doesn’t fit into one of the existing optimized kernels, you’re doomed to slow runtime and CUDA OOMs.
For some examples of attention variants, we have Causal, Relative Positional Embeddings, Alibi, Sliding Window Attention, PrefixLM, Document Masking/Sample Packing/Jagged Tensors, Tanh Soft-Capping, PagedAttention, etc. Even worse, folks often want combinations of these! Sliding Window Attention + Document Masking + Causal + Context Parallelism? Or what about PagedAttention + Sliding Window + Tanh Soft-Capping?
The left picture below represents the state of the world today - some combinations of masking + biases + setting have existing kernels implemented. But the various options lead to an exponential number of settings, and so overall we end up with fairly spotty support. Even worse, new attention variants researchers come up with will have zero support.
To solve this hypercube problem once and for all, we introduce FlexAttention, a new PyTorch API.
With FlexAttention, we hope that trying new attention variants will only be limited by your imagination.
You can find many FlexAttention examples at the Attention Gym: https://github.com/pytorch-labs/attention-gym. If you have any cool applications, feel free to submit an example!
PS: We also find this API very exciting since it leverages a lot of existing PyTorch infra in a fun way - more on that in the end.
FlexAttention
Here is the classic attention equation:
In code form:
Q, K, V: Tensor[batch_size, num_heads, sequence_length, head_dim]
score: Tensor[batch_size, num_heads, sequence_length, sequence_length] = (Q @ K.transpose(-2, -1)) / sqrt(head_dim)
probabilities = softmax(score, dim=-1)
output: Tensor[batch_size, num_heads, sequence_length, head_dim] = probabilities @ V
FlexAttention allows for a user-defined function score_mod:
In code form:
Q, K, V: Tensor[batch_size, num_heads, sequence_length, head_dim]
score: Tensor[batch_size, num_heads, sequence_length, sequence_length] = (Q @ K.transpose(-2, -1)) / sqrt(head_dim)
modified_scores: Tensor[batch_size, num_heads, sequence_length, sequence_length] = score_mod(score)
probabilities = softmax(modified_scores, dim=-1)
output: Tensor[batch_size, num_heads, sequence_length, head_dim] = probabilities @ V
This function allows you to modify the attention scores prior to softmax. Surprisingly, this ends up being sufficient for the vast majority of attention variants (examples below)!
Concretely, the expected signature for score_mod is somewhat unique.
def score_mod(score: f32[], b: i32[], h: i32[], q_idx: i32[], kv_idx: i32[]):
    return score  # noop - standard attention
In other words, score is a scalar pytorch tensor that represents the dot product of a query token and a key token. The rest of the arguments tell you which dot product you’re currently computing - b (current element in batch), h (current head), q_idx (position in query), kv_idx (position in key/value tensors).
To apply this function, we could implement it as
for b in range(batch_size):
    for h in range(num_heads):
        for q_idx in range(sequence_length):
            for kv_idx in range(sequence_length):
                modified_scores[b, h, q_idx, kv_idx] = score_mod(scores[b, h, q_idx, kv_idx], b, h, q_idx, kv_idx)
Of course, this is not how FlexAttention is implemented under the hood. Leveraging torch.compile, we automatically lower your function into a single fused FlexAttention kernel - guaranteed or your money back!
This API ends up being surprisingly expressive. Let’s look at some examples.
Score Mod Examples
Full Attention
Let's first do "full attention", or standard bidirectional attention. In this case, score_mod is a no-op - it takes the scores as input and returns them as is.
def noop(score, b, h, q_idx, kv_idx):
    return score
And to use it end to end (including both forwards and backwards):
from torch.nn.attention.flex_attention import flex_attention
flex_attention(query, key, value, score_mod=noop).sum().backward()
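For completeness, here is a minimal setup sketch with illustrative shapes for the query, key, and value tensors used above (the sizes are assumptions, not from the post):
import torch
from torch.nn.attention.flex_attention import flex_attention

flex_attention = torch.compile(flex_attention)  # compile once, reuse for all calls

B, H, S, D = 4, 16, 1024, 64
query, key, value = [
    torch.randn(B, H, S, D, device="cuda", dtype=torch.bfloat16, requires_grad=True)
    for _ in range(3)
]
flex_attention(query, key, value, score_mod=noop).sum().backward()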
Relative Position Encodings
One common attention variant is the “relative position encoding”. Instead of encoding the absolute distance in the queries and keys, relative position encoding adjusts scores based on the “distance” between the queries and keys.
def relative_positional(score, b, h, q_idx, kv_idx):
    return score + (q_idx - kv_idx)
Note that unlike typical implementations, this does not need to materialize a SxS tensor. Instead, FlexAttention computes the bias values “on the fly” within the kernel, leading to significant memory and performance improvements.
ALiBi Bias
Source: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
ALiBi was introduced in Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, and claims to have beneficial properties for length extrapolation at inference. Notably, MosaicML has pointed to “lack of kernel support” as the main reason why they eventually switched from ALiBi to rotary embeddings.
Alibi is similar to relative positional encodings with one exception - it has a per-head factor that is typically precomputed.
alibi_bias = generate_alibi_bias()  # [num_heads]

def alibi(score, b, h, q_idx, kv_idx):
    bias = alibi_bias[h] * (kv_idx - q_idx)
    return score + bias
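The post does not define generate_alibi_bias; one possible hypothetical implementation, using the standard ALiBi slopes 2^(-8i/H) for H heads, could look like this:
import torch

def generate_alibi_bias(num_heads: int = 16) -> torch.Tensor:
    # Slopes 2^(-8 * i / num_heads) for heads i = 1..num_heads
    head_ids = torch.arange(1, num_heads + 1, device="cuda", dtype=torch.float32)
    return torch.exp2(-8.0 * head_ids / num_heads)

alibi_bias = generate_alibi_bias()  # [num_heads]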
This demonstrates one interesting piece of flexibility torch.compile provides - we can load from alibi_bias even though it wasn’t explicitly passed in as an input! The generated Triton kernel will calculate the correct loads from the alibi_bias tensor and fuse it. Note that you could regenerate alibi_bias and we still wouldn’t need to recompile.
Soft-capping
Soft-capping is a technique used in Gemma2 and Grok-1 that prevents logits from growing excessively large. In FlexAttention, it looks like:
softcap = 20

def soft_cap(score, b, h, q_idx, kv_idx):
    score = score / softcap
    score = torch.tanh(score)
    score = score * softcap
    return score
Note that we also automatically generate the backwards pass from the forwards pass here. Also, although this implementation is semantically correct, we likely want to use a tanh approximation in this case for performance reasons. See attention-gym for more details.
Causal Mask
Although bidirectional attention is the simplest, the original Attention is All You Need paper and the vast majority of LLMs use attention in a decoder-only setting where each token can only attend to the tokens prior to it. Folks often think of this as a lower-triangular mask, but with the score_mod API it can be expressed as:
def causal_mask(score, b, h, q_idx, kv_idx):
    return torch.where(q_idx >= kv_idx, score, -float("inf"))
Basically, if the query token is “after” the key token, we keep the score. Otherwise, we mask it out by setting it to -inf, thus ensuring it won’t participate in the softmax calculation.
However, masking is special compared to other modifications - if something is masked out, we can completely skip its computation! In this case, a causal mask has about 50% sparsity, so not taking advantage of the sparsity would result in a 2x slowdown. Although this score_mod is sufficient to implement causal masking correctly, getting the performance benefits of sparsity requires another concept - mask_mod.
Mask Mods
To take advantage of sparsity from masking, we need to do some more work. Specifically, by passing a mask_mod to create_block_mask, we can create a BlockMask. FlexAttention can then use BlockMask to take advantage of the sparsity!
The signature of mask_mod is very similar to score_mod - just without the score. In particular
# returns True if this position should participate in the computation
mask_mod(b, h, q_idx, kv_idx) => bool
Note that score_mod is strictly more expressive than mask_mod. However, for masking, it’s recommended to use mask_mod and create_block_mask, as it’s more performant. See the FAQ on why score_mod and mask_mod are separate.
Now, let’s take a look at how we might implement causal mask with mask_mod.
Causal Mask
from torch.nn.attention.flex_attention import create_block_mask
def causal_mask(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

# Because the sparsity pattern is independent of batch and heads, we'll set them to None (which broadcasts them)
block_mask = create_block_mask(causal_mask, B=None, H=None, Q_LEN=1024, KV_LEN=1024)
# In this case, we don't need a score_mod, so we won't pass any in.
# However, score_mod can still be combined with block_mask if you need the additional flexibility.
flex_attention(query, key, value, block_mask=block_mask)
Note that create_block_mask is a relatively expensive operation! Although FlexAttention will not need to recompile when it changes, if you aren’t careful about caching it, it can lead to significant slowdowns (check out the FAQ for suggestions on best practices).
While the TFlops are roughly the same, the execution time is 2x faster for the mask_mod version! This demonstrates that we can leverage the sparsity that BlockMask provides us without losing hardware efficiency.
Sliding Window + Causal
Source: Mistral 7B
Popularized by Mistral, sliding window attention (also known as local attention) takes advantage of the intuition that the most recent tokens are the most useful. In particular, it allows the query token to only attend to, say, the 1024 most recent tokens. This is often used together with causal attention.
SLIDING_WINDOW = 1024

def sliding_window_causal(b, h, q_idx, kv_idx):
    causal_mask = q_idx >= kv_idx
    window_mask = q_idx - kv_idx <= SLIDING_WINDOW
    return causal_mask & window_mask

# If you want to be cute...
from torch.nn.attention import and_masks

def sliding_window(b, h, q_idx, kv_idx):
    return q_idx - kv_idx <= SLIDING_WINDOW

sliding_window_causal = and_masks(causal_mask, sliding_window)
We benchmark it against F.scaled_dot_product_attention with a sliding window mask as well as FA2 with a causal mask (as a reference point for performance). Not only are we significantly faster than F.scaled_dot_product_attention, we’re also significantly faster than FA2 with a causal mask as this mask has significantly more sparsity.
PrefixLM
Source: PaliGemma: A versatile 3B VLM for transfer
The T5 architecture, proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, describes an attention variant that performs full bidirectional attention on a “prefix”, and causal attention on the rest. We again compose two mask functions to accomplish this, one for causal masking and one that is based off of the prefix length.
prefix_length: [B]

def prefix_mask(b, h, q_idx, kv_idx):
    return kv_idx <= prefix_length[b]

prefix_lm_causal = or_masks(prefix_mask, causal_mask)
# In this case, our mask is different per sequence so we set B equal to our batch size
block_mask = create_block_mask(prefix_lm_causal, B=B, H=None, Q_LEN=S, KV_LEN=S)
Just like with score_mod, mask_mod allows us to refer to additional tensors that aren’t explicitly an input to the function! However, with prefixLM, the sparsity pattern changes per input. This means that for each new input batch, we’ll need to recompute the BlockMask. One common pattern is to call create_block_mask at the beginning of your model and reuse that block_mask for all attention calls in your model. See Recomputing Block Masks vs. Recompilation.
However, in exchange for that, we’re not only able to have an efficient attention kernel for prefixLM, we’re also able to take advantage of however much sparsity exists in the input! FlexAttention will dynamically adjust its performance based off of the BlockMask data, without needing to recompile the kernel.
Document Masking/Jagged Sequences
Another common attention variant is document masking/jagged sequences. Imagine that you have a number of sequences of varying length. You want to train on all of them together, but unfortunately, most operators only accept rectangular tensors.
Through BlockMask, we can support this efficiently in FlexAttention as well!
# The document that each token belongs to.
# e.g. [0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2] corresponds to sequence lengths 3, 2, and 6.
document_id: [SEQ_LEN]

def document_masking(b, h, q_idx, kv_idx):
    return document_id[q_idx] == document_id[kv_idx]
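As an aside, here is a small hedged sketch of how document_id could be built from per-document lengths (the lengths below are illustrative):
import torch

lengths = torch.tensor([3, 2, 6], device="cuda")
document_id = torch.repeat_interleave(torch.arange(len(lengths), device="cuda"), lengths)
# -> tensor([0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2], device='cuda:0')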
And that’s it! In this case, we see that we end up with a blockdiagonal mask.
One interesting aspect of document masking is that it's easy to see how it might combine with an arbitrary combination of other masks. For example, we already defined prefix_lm_causal in the previous section. Do we now need to define a prefix_lm_document_mask function as well?
In these cases, one pattern we’ve found quite useful is what we call a “higher level modification”. In this case, we can take an existing mask_mod and automatically transform it into one that works with jagged sequences!
def generate_doc_mask_mod(mask_mod, document_id):
    # Get unique document IDs and their counts
    _, counts = torch.unique_consecutive(document_id, return_counts=True)
    # Create cumulative counts (offsets)
    offsets = torch.cat([torch.tensor([0], device=document_id.device), counts.cumsum(0)[:-1]])

    def doc_mask_wrapper(b, h, q_idx, kv_idx):
        same_doc = document_id[q_idx] == document_id[kv_idx]
        q_logical = q_idx - offsets[document_id[q_idx]]
        kv_logical = kv_idx - offsets[document_id[kv_idx]]
        inner_mask = mask_mod(b, h, q_logical, kv_logical)
        return same_doc & inner_mask

    return doc_mask_wrapper
For example, given the prefix_lm_causal mask from above, we can transform it into one that works on packed documents like so:
prefix_length = torch.tensor(2, dtype=torch.int32, device="cuda")

def prefix_mask(b, h, q_idx, kv_idx):
    return kv_idx < prefix_length

prefix_lm_causal = or_masks(prefix_mask, causal_mask)
doc_prefix_lm_causal_mask = generate_doc_mask_mod(prefix_lm_causal, document_id)
Now, this mask is “block-prefixLM-diagonal” shaped. :)
That’s all of our examples! There are far more attention variants than we have space to list, so check out Attention Gym for more examples. We hope that the community will contribute some of their favorite applications of FlexAttention as well.
FAQ
Q: When does FlexAttention need to recompile?
As FlexAttention leverages torch.compile for graph capture, it can actually avoid recompilation in a broad spectrum of cases. Notably, it does not need to recompile even if captured tensors change values!
flex_attention = torch.compile(flex_attention)
def create_bias_mod(bias):
    def bias_mod(score, b, h, q_idx, kv_idx):
        return score + bias
    return bias_mod
bias_mod1 = create_bias_mod(torch.tensor(0))
flex_attention(..., score_mod=bias_mod1) # Compiles the kernel here
bias_mod2 = create_bias_mod(torch.tensor(2))
flex_attention(..., score_mod=bias_mod2) # Doesn't need to recompile!
Even changing the block-sparsity doesn’t require a recompile. However, if the block-sparsity changes, we do need to recompute the BlockMask.
Q: When should we recompute the BlockMask?
We need to recompute the BlockMask whenever the block-sparsity changes. Although computing the BlockMask is much cheaper than recompilation (on the order of hundreds of microseconds as opposed to seconds), you should still take care to not excessively recompute the BlockMask.
Here are some common patterns and some recommendations on how you might approach them.
Mask never changes (e.g. causal mask)
In this case, you can simply precompute the block mask and cache it globally, reusing it for all attention calls.
block_mask = create_block_mask(causal_mask, 1, 1, S, S)
causal_attention = functools.partial(flex_attention, block_mask=block_mask)
Mask changes every batch (e.g. document masking)
In this case, we would suggest computing the BlockMask at the beginning of the model and threading it through the model - reusing the BlockMask for all layers.
def forward(self, x, doc_mask):
    # Compute block mask at beginning of forwards
    block_mask = create_block_mask(doc_mask, None, None, S, S)
    x = self.layer1(x, block_mask)
    x = self.layer2(x, block_mask)
    ...
    # amortize block mask construction cost across all layers
    x = self.layer3(x, block_mask)
    return x
Mask changes every layer (e.g. data-dependent sparsity)
This is the hardest setting, since we’re unable to amortize the block mask computation across multiple FlexAttention invocations. Although FlexAttention can certainly still benefit this case, the actual benefits from BlockMask depend on how sparse your attention mask is and how fast we can construct the BlockMask. That leads us to…
Q: How can we compute BlockMask quicker?
create_block_mask is unfortunately fairly expensive, both from a memory and compute perspective, as determining whether a block is completely sparse requires evaluating mask_mod at every single point in the block. There are a couple ways to address this:
Write a custom constructor for BlockMask. The metadata for BlockMask is quite simple (see the documentation). It’s essentially two tensors.
a. num_blocks: The number of KV blocks computed for each query block.
b. indices: The positions of the KV blocks computed for each query block.
For example, here’s a custom BlockMask constructor for causal_mask.
def create_causal_mask(S):
    BLOCK_SIZE = 128
    # The first query block computes one block, the second query block computes 2 blocks, etc.
    num_blocks = torch.arange(S // BLOCK_SIZE, device="cuda") + 1
    # Since we're always computing from the left to the right,
    # we can use the indices [0, 1, 2, ...] for every query block.
    indices = torch.arange(S // BLOCK_SIZE, device="cuda").expand(
        S // BLOCK_SIZE, S // BLOCK_SIZE
    )
    num_blocks = num_blocks[None, None, :]
    indices = indices[None, None, :]
    return BlockMask(num_blocks, indices, BLOCK_SIZE=BLOCK_SIZE, mask_mod=causal_mask)
Q: Why are score_mod and mask_mod different? Isn’t mask_mod just a special case of score_mod?
Very astute question, hypothetical audience member! In fact, any mask_mod can be easily converted to a score_mod (we do not recommend using this function in practice!)
def mask_mod_as_score_mod(score, b, h, q_idx, kv_idx):
    return torch.where(mask_mod(b, h, q_idx, kv_idx), score, -float("inf"))
So, if score_mod can implement everything mask_mod can, what’s the point of having mask_mod?
One immediate challenge: a score_mod requires the actual score value as an input, but when we’re precomputing the BlockMask, we don’t have the actual score value. We can perhaps fake the values by passing in all zeros, and if the score_mod returns -inf, then we consider it to be masked (in fact, we originally did this!).
However, there are two issues. The first is that this is hacky - what if the user's score_mod returned -inf when the input is 0? Or what if the user's score_mod masked out with a large negative value instead of -inf? It seems we're trying to cram a square peg into a round hole. However, there's a more important reason to separate out mask_mod from score_mod - it's fundamentally more efficient!
As it turns out, applying masking to every single computed element is actually quite expensive - our benchmarks see about a 15-20% degradation in performance! So, although we can get significant speedups by skipping half the computation, we lose a meaningful part of that speedup from needing to mask out every element!
Luckily, if we visualize the causal mask, we notice that the vast majority of blocks do not require a “causal mask” at all - they’re fully computed! It is only the blocks on the diagonal, partially computed and partially masked, that require masking to be applied.
The BlockMask previously told us which blocks we need to compute and which blocks we can skip. Now, we further augment this data structure to also tell us which blocks are “fully computed” (i.e. masking can be skipped) vs. “partially computed” (i.e. a mask needs to be applied). Note, however, that although masks can be skipped on “fully computed” blocks, other score_mods like relative positional embeddings still need to be applied.
Given just a score_mod, there’s no sound way for us to tell which parts of it are “masking”. Hence, the user must separate these out themselves into mask_mod.
Q: How much additional memory does the BlockMask need?
The BlockMask metadata is of size [BATCH_SIZE, NUM_HEADS, QUERY_LEN//BLOCK_SIZE, KV_LEN//BLOCK_SIZE]. If the mask is the same across the batch or heads dimension it can be broadcasted over that dimension to save memory.
At the default BLOCK_SIZE of 128, we expect that the memory usage will be fairly negligible for most use cases. For example, for a sequence length of 1 million, the BlockMask would only use 60MB of additional memory. If this is a problem, you can increase the block size: create_block_mask(..., BLOCK_SIZE=1024). For example, increasing BLOCK_SIZE to 1024 would result in this metadata dropping to under a megabyte.
Q: How do the numerics compare?
Although the results are not bitwise identical, we are confident that FlexAttention is as numerically accurate as FlashAttention. We generate the following distribution of differences comparing FlashAttention versus FlexAttention over a large range of inputs on both causal and non causal attention variants. The errors are nearly identical.
Performance
Generally speaking, FlexAttention is nearly as performant as a handwritten Triton kernel, which is unsurprising, as we heavily leverage a handwritten Triton kernel. However, due to its generality, we do incur a small performance penalty. For example, we must incur some additional latency to determine which block to compute next. In some cases, we provide some kernel options that can affect the performance of the kernel while changing its behavior. They can be found here: performance knobs
As a case study, let’s explore how the knobs affect the performance of causal attention. We will compare performance of the triton kernel versus FlashAttentionv2 on A100. The script can be found here.
FlexAttention achieves 90% of FlashAttention2’s performance in the forward pass and 85% in the backward pass. FlexAttention is currently utilizing a deterministic algorithm that recomputes more intermediates than FAv2, but we have plans to improve FlexAttention’s backward algorithm and hope to close this gap!
Conclusion
We hope you have as much fun using FlexAttention as we did developing it! While working on this, we ended up finding way more applications of this API than we could have expected. We’ve already seen it accelerate torchtune’s sample packing throughput by 71%, replace the need for a researcher to spend over a week writing their own custom Triton kernel, and deliver competitive performance with custom handwritten attention variants.
One final thing that made implementing FlexAttention quite fun is that we were able to leverage a lot of existing PyTorch infra in an interesting way. For example, one of the unique aspects about TorchDynamo (torch.compile’s frontend) is that it does not require tensors used in the compiled function to be explicitly passed in as inputs. This allows us to compile mods like document masking, which require accessing global variables where the global variables need to change!
bias = torch.randn(1024, 1024)

def score_mod(score, b, h, q_idx, kv_idx):
    return score + bias[q_idx][kv_idx]  # The bias tensor can change!
Furthermore, the fact that torch.compile is a generic graph-capture mechanism also allows it to support more “advanced” transformations, such as the higher order transform that transforms any mask_mod into one that works with jagged tensors.
We also leverage TorchInductor (torch.compile’s backend) infrastructure for Triton templates. Not only did this make it easy to support codegening FlexAttention - it also automatically gave us support for dynamic shapes as well as epilogue fusion (i.e. fusing an operator onto the end of attention)! In the future, we plan on extending this support to allow for quantized versions of attention or things like RadixAttention as well.
In addition, we also leveraged higher order ops, PyTorch’s autograd to automatically generate the backwards pass, as well as vmap to automatically apply score_mod for creating the BlockMask.
And, of course, this project wouldn’t have been possible without Triton and TorchInductor’s ability to generate Triton code.
We look forward to leveraging the approach we used here to more applications in the future!
Limitations and Future Work
Acknowledgements
We want to highlight some prior work (and people) that have inspired FlexAttention.
July 11, 2024
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
by Jay Shah and Ganesh Bikshandi, Colfax Research, Ying Zhang, Meta, Vijay Thakkar and Pradeep Ramani, NVIDIA, Tri Dao, TogetherAI and Princeton University
Attention, as a core layer of the ubiquitous Transformer architecture, is a bottleneck for large language models and long-context applications. FlashAttention (and FlashAttention-2) pioneered an approach to speed up attention on GPUs by minimizing memory reads/writes, and is now used by most libraries to accelerate Transformer training and inference. This has contributed to a massive increase in LLM context length in the last two years, from 2-4K (GPT-3, OPT) to 128K (GPT-4), or even 1M (Llama 3). However, despite its success, FlashAttention has yet to take advantage of new capabilities in modern hardware, with FlashAttention-2 achieving only 35% utilization of theoretical max FLOPs on the H100 GPU. In this blogpost, we describe three main techniques to speed up attention on Hopper GPUs: exploiting asynchrony of the Tensor Cores and TMA to (1) overlap overall computation and data movement via warp-specialization and (2) interleave block-wise matmul and softmax operations, and (3) incoherent processing that leverages hardware support for FP8 low-precision.
We’re excited to release FlashAttention-3 that incorporates these techniques. It’s 1.5-2.0x faster than FlashAttention-2 with FP16, up to 740 TFLOPS, i.e., 75% utilization of H100 theoretical max FLOPS. With FP8, FlashAttention-3 reaches close to 1.2 PFLOPS, with 2.6x smaller error than baseline FP8 attention.
FlashAttention-3 is available at: https://github.com/Dao-AILab/flash-attention
Paper
FlashAttention Recap
FlashAttention is an algorithm that reorders the attention computation and leverages tiling and recomputation to significantly speed it up and reduce memory usage from quadratic to linear in sequence length. We use tiling to load blocks of inputs from HBM (GPU memory) to SRAM (fast cache), perform attention with respect to that block, and update the output in HBM. By not writing the large intermediate attention matrices to HBM, we reduce the amount of memory reads/writes, which brings 2-4x wallclock time speedup.
Here we show a diagram of FlashAttention forward pass: with tiling and softmax rescaling, we operate by blocks and avoid having to read/write from HBM, while obtaining the correct output with no approximation.
New hardware features on Hopper GPUs - WGMMA, TMA, FP8
While FlashAttention-2 can achieve up to 70% theoretical max FLOPS on Ampere (A100) GPUs, it does not yet take advantage of new features on Hopper GPUs to maximize performance. We describe some of the new Hopper-specific features here, and why they are important.
1. WGMMA (Warpgroup Matrix Multiply-Accumulate). This new feature makes use of the new Tensor Cores on Hopper, with much higher throughput¹ than the older mma.sync instruction in Ampere (image from the H100 white paper).
2. TMA (Tensor Memory Accelerator). This is a special hardware unit that accelerates the transfer of data between global memory and shared memory, taking care of all index calculation and out-of-bound predication. This frees up registers, which is a valuable resource to increase tile size and efficiency.
3. Low-precision with FP8. This doubles the Tensor Core throughput (e.g. 989 TFLOPS with FP16 and 1978 TFLOPS with FP8), but trades off accuracy by using fewer bits to represent floating point numbers.
FlashAttention-3 makes use of all of these new features of Hopper, using powerful abstractions from NVIDIA’s CUTLASS library.
By rewriting FlashAttention to use these new features, we can already significantly speed it up (e.g., from 350 TFLOPS in FlashAttention-2 FP16 forward pass to around 540-570 TFLOPS). However, the asynchronous nature of the new instructions on Hopper (WGMMA and TMA) opens up additional algorithmic opportunities to overlap operations and thereby extract even greater performance. For this blogpost, we’ll explain two such techniques specific to attention. The generic technique of warp specialization, with separate producer and consumer warps doing TMA and WGMMA, is well-covered elsewhere in the context of GEMM and works the same here.
Asynchrony: Overlapping GEMM and Softmax
Why overlap?
Attention has two main operations: GEMMs (the matmuls between Q and K and between the attention probabilities P and V) and softmax. Why do we need to overlap them? Aren’t most of the FLOPS in the GEMMs anyway? As long as the GEMMs are fast (e.g., computed using WGMMA instructions), shouldn’t the GPU be going brrrr?
The problem is that non-matmul operations are much slower than matmul operations on modern accelerators. Special functions such as exponential (for the softmax) have even lower throughput than floating-point multiply-add; they are evaluated by the multi-function unit, a unit separate from the floating-point and matrix multiply-add units. As an example, the H100 GPU SXM5 has 989 TFLOPS of FP16 matrix multiply, but only 3.9 TFLOPS (256x less throughput) for special functions²! For head dimension 128, there are 512x more matmul FLOPS than exponentials, which means that the exponentials can take 50% as much time as the matmuls. The situation is even worse for FP8, where the matmul throughput doubles while the special-function throughput stays the same. Ideally we want matmul and softmax to operate in parallel: while the Tensor Cores are busy with matmul, the multi-function units should be calculating exponentials!
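As a rough sanity check of these figures (a back-of-the-envelope estimate using the quoted throughputs, not a measurement):

$$\frac{989 \ \text{TFLOPS (FP16 matmul)}}{3.9 \ \text{TFLOPS (special functions)}} \approx 256$$

For head dimension $d = 128$, each attention score costs about $2d = 256$ matmul FLOPS in $QK^\top$ plus, amortized over the scores, another $2d = 256$ in $PV$, i.e. roughly $512$ matmul FLOPS per exponential evaluated in the softmax. The time ratio is therefore about

$$\frac{t_{\text{exp}}}{t_{\text{matmul}}} \approx \frac{1 / 3.9}{512 / 989} \approx 0.5$$

which is where the 50% figure above comes from.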
Inter-warpgroup overlapping with pingpong scheduling
The first and easiest way to overlap GEMM and softmax is to do nothing at all! The warp schedulers already try to schedule warps so that if some warps are blocked (e.g., waiting for GEMM results), other warps can run. That is, the warp schedulers do some of this overlapping for us, for free.
However, we can improve on this by doing some of the scheduling manually. As an example, if we have 2 warpgroups (labeled 1 and 2 – each warpgroup is a group of 4 warps), we can use synchronization barriers (bar.sync) so that warpgroup 1 first does its GEMMs (e.g., GEMM1 of one iteration and GEMM0 of the next iteration), and then warpgroup 2 does its GEMMs while warpgroup 1 does its softmax, and so on. This “pingpong” schedule is illustrated in the figure below, where the same color denotes the same iteration.
This would allow us to perform the softmax in the shadow of the GEMMs of the other warpgroup. Of course, this figure is just a caricature; in practice the scheduling is not really this clean. Nevertheless, pingpong scheduling can improve FP16 attention forward pass from around 570 TFLOPS to 620 TFLOPS (head dim 128, seqlen 8K).
Intra-warpgroup overlapping of GEMM and Softmax
Even within one warpgroup, we can have some part of the softmax running while the GEMMs of that warpgroup are running. This is illustrated in this figure, where the same color denotes the same iteration.
This pipelining increases throughput from around 620 TFLOPS to around 640-660 TFLOPS for FP16 attention forward, at the cost of higher register pressure. We need more registers to hold both accumulators of the GEMMs, and the input/output of softmax. Overall, we find this technique to offer a favorable tradeoff.
Low-precision: reduce quantization error with incoherent processing
LLM activations can have outliers with much larger magnitude than the rest of the features. These outliers make quantization difficult, producing much larger quantization errors. We leverage incoherent processing, a technique from the quantization literature (e.g., QuIP) that multiplies the query and key with a random orthogonal matrix to “spread out” the outliers and reduce quantization error. In particular, we use the Hadamard transform (with random signs), which can be applied per attention head in O(d log d) instead of O(d^2) time, where d is the head dimension. Since the Hadamard transform is memory-bandwidth bound, it can be fused with previous operations such as rotary embedding (also memory-bandwidth bound) “for free”.
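For intuition, here is a minimal PyTorch-level sketch of a randomized fast Walsh-Hadamard transform (shapes and names are illustrative; the actual FlashAttention-3 implementation fuses this into the attention kernel rather than running it as a separate op):

import math
import torch

def hadamard_transform(x: torch.Tensor) -> torch.Tensor:
    # Fast Walsh-Hadamard transform along the last dimension in O(d log d).
    d = x.shape[-1]
    assert d & (d - 1) == 0, "head dimension must be a power of two"
    out, h = x.clone(), 1
    while h < d:
        y = out.view(*x.shape[:-1], d // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        out = torch.stack((a + b, a - b), dim=-2).reshape(*x.shape)
        h *= 2
    return out / math.sqrt(d)  # orthonormal scaling

# Randomized Hadamard rotation (hypothetical shapes): flip random signs, then transform.
head_dim = 128
signs = torch.randint(0, 2, (head_dim,)).float() * 2 - 1
q = torch.randn(2, 8, 1024, head_dim)      # (batch, heads, seqlen, head_dim)
q_rot = hadamard_transform(q * signs)      # K is rotated with the same signs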
In our experiment where Q, K, V are generated from a standard normal distribution but 0.1% of the entries have large magnitudes (to simulate outliers), we found that incoherent processing can reduce the quantization error by 2.6x. We show numerical error comparison in the table below. Please see the paper for details.
Attention benchmark
We show some results with FlashAttention-3, and compare it to FlashAttention-2, as well as the implementation in Triton and cuDNN (both of which already use new hardware features of Hopper GPUs).
For FP16, we see about 1.6x-1.8x speedup over FlashAttention-2.
For FP8, we can reach close to 1.2 PFLOPS!
Discussion
This blogpost highlights some of the optimizations for FlashAttention available on Hopper GPUs. Other optimizations (e.g., variable length sequences, persistent kernel, and in-kernel transpose for FP8) are covered in the paper.
We have seen that designing algorithms that take advantage of the hardware they run on can bring significant efficiency gains and unlock new model capabilities such as long context. We look forward to future work on optimization for LLM inference, as well as generalizing our techniques to other hardware architectures.
We also look forward to FlashAttention-3 being integrated in a future release of PyTorch.
Notes
1. Without the wgmma instruction, the older mma.sync instruction can only reach about ⅔ of the peak throughput of Hopper Tensor Cores: https://arxiv.org/abs/2402.13499v1
2. The CUDA programming guide specifies that the throughput for special functions is 16 operations per streaming multiprocessor (SM) per clock cycle. We multiply 16 by 132 SMs and 1830 MHz (the clock speed used to calculate 989 TFLOPS of FP16 matmul) to get 3.9 TFLOPS.
January 03, 2024
Accelerating Generative AI Part III: Diffusion, Fast
by Sayak Paul and Patrick von Platen (Hugging Face 🤗)
This post is the third part of a multi-series blog focused on how to accelerate generative AI models with pure, native PyTorch. We are excited to share a breadth of newly released PyTorch performance features alongside practical examples to see how far we can push PyTorch native performance. In part one, we showed how to accelerate Segment Anything over 8x using only pure, native PyTorch. In part two, we showed how to accelerate Llama-7B by almost 10x using only native PyTorch optimizations. In this blog, we’ll focus on speeding up text-to-image diffusion models by up to 3x.
We will leverage an array of optimizations including:
We will primarily focus on Stable Diffusion XL (SDXL), demonstrating a latency improvement of 3x. These techniques are PyTorch-native, which means you don’t have to rely on any third-party libraries or any C++ code to take advantage of them.
Enabling these optimizations with the 🤗Diffusers library takes just a few lines of code. If you’re already feeling excited and cannot wait to jump to the code, check out the accompanying repository here: https://github.com/huggingface/diffusion-fast.
(The discussed techniques are not SDXL-specific and can be used to speed up other text-to-image diffusion systems, as shown later.)
Below, you can find some blog posts on similar topics:
Setup
We will demonstrate the optimizations and their respective speed-up gains using the 🤗Diffusers library. Apart from that, we will make use of the following PyTorch-native libraries and environments:
For an easier reproduction environment, you can also refer to this Dockerfile. The benchmarking numbers presented in this post come from a 400W 80GB A100 GPU (with its clock rate set to its maximum capacity).
Since we use an A100 GPU (Ampere architecture) here, we can specify torch.set_float32_matmul_precision("high") to benefit from the TF32 precision format.
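Concretely, that is a single call (a minimal snippet):

import torch

# Allow TF32 to be used for float32 matmuls on Ampere and newer GPUs.
torch.set_float32_matmul_precision("high")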
Run inference using a reduced precision
Running SDXL in Diffusers just takes a few lines of code:
from diffusers import StableDiffusionXLPipeline
## Load the pipeline in full-precision and place its model components on CUDA.
pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to("cuda")
## Run the attention ops without the efficient SDPA kernels (baseline).
pipe.unet.set_default_attn_processor()
pipe.vae.set_default_attn_processor()
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, num_inference_steps=30).images[0]
But this isn’t very practical as it takes 7.36 seconds to generate a single image with 30 steps. This is our baseline which we will try to optimize one step at a time.
Here, we’re running the pipeline in full precision. We can immediately cut down the inference time by using a reduced precision such as bfloat16. Moreover, modern GPUs come with dedicated cores that accelerate computation in reduced precision. To run the computations of the pipeline in bfloat16 precision, we just need to specify the data type while initializing the pipeline:
import torch
from diffusers import StableDiffusionXLPipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")
## Run the attention ops without the efficient SDPA kernels (baseline).
pipe.unet.set_default_attn_processor()
pipe.vae.set_default_attn_processor()
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, num_inference_steps=30).images[0]
By using a reduced precision, we’re able to cut down the inference latency from 7.36 seconds to 4.63 seconds.
Some notes on the use of bfloat16
(We later ran the experiments in float16 and found that recent versions of torchao do not incur numerical problems from float16.)
Use SDPA for performing attention computations
By default, Diffusers uses scaled_dot_product_attention (SDPA) for performing attention-related computations when using PyTorch 2. SDPA provides faster and more efficient kernels to run intensive attention-related operations. To run the pipeline with SDPA, we simply don’t set any attention processor, like so:
import torch
from diffusers import StableDiffusionXLPipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt, num_inference_steps=30).images[0]
SDPA gives a nice boost from 4.63 seconds to 3.31 seconds.
Compiling the UNet and VAE
We can ask PyTorch to perform some low-level optimizations (such as operator fusion and launching faster kernels with CUDA graphs) by using torch.compile. For the StableDiffusionXLPipeline, we compile the denoiser (UNet) and the VAE:
from diffusers import StableDiffusionXLPipeline
import torch
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")
## Compile the UNet and VAE.
pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True)
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
## First call to `pipe` will be slow, subsequent ones will be faster.
image = pipe(prompt, num_inference_steps=30).images[0]
Using SDPA attention and compiling both the UNet and VAE reduces the latency from 3.31 seconds to 2.54 seconds.
Notes on torch.compile
torch.compile offers different backends and modes. As we’re aiming for maximum inference speed, we opt for the inductor backend with the “max-autotune” mode. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. Using CUDA graphs greatly reduces the overhead of launching GPU operations: multiple GPU operations are launched through a single CPU operation.
Specifying fullgraph=True ensures that there are no graph breaks in the underlying model, so torch.compile can be used to its fullest potential. In our case, the following compiler flags also needed to be set explicitly:
torch._inductor.config.conv_1x1_as_mm = True
torch._inductor.config.coordinate_descent_tuning = True
torch._inductor.config.epilogue_fusion = False
torch._inductor.config.coordinate_descent_check_all_directions = True
For the full list of compiler flags, refer to this file.
We also change the memory layout of the UNet and the VAE to “channels_last” when compiling them to ensure maximum speed:
pipe.unet.to(memory_format=torch.channels_last)
pipe.vae.to(memory_format=torch.channels_last)
In the next section, we’ll show how to improve the latency even further.
Additional optimizations
No graph breaks during torch.compile
Ensuring that the underlying model/method can be fully compiled is crucial for performance (torch.compile with fullgraph=True). This means having no graph breaks. We did this for the UNet and VAE by changing how we access the returned variables. Consider the following example:
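The exact Diffusers change is shown as an image in the original post; as a hedged illustration of the pattern (not the actual diff), the idea is to work with plain tensors instead of unpacking wrapper objects inside the compiled region:

## Illustrative sketch only, not the actual Diffusers code change.
## Before: unpack an output dataclass built inside the compiled forward
# image = pipe.vae.decode(latents).sample
## After: request plain tensors so torch.compile sees straight-line tensor code
# image = pipe.vae.decode(latents, return_dict=False)[0]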
Getting rid of GPU syncs after compilation
During the iterative reverse-diffusion process, we call step() on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside step(), the sigmas variable is indexed. If the sigmas array is placed on the GPU, this indexing causes a communication sync between the CPU and GPU. This adds latency, and it becomes more noticeable once the denoiser has been compiled.
But if the sigmas array always stays on the CPU (refer to this line), this sync doesn’t take place and latency improves. In general, CPU <-> GPU communication syncs should be avoided or kept to a bare minimum, as they can impact inference latency.
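A minimal sketch of the difference (the array contents here are made up; the point is only where the tensor lives):

import torch

# If sigmas lives on the GPU, pulling a value back to the host forces a sync:
sigmas_gpu = torch.linspace(1.0, 0.0, 30, device="cuda")
sigma = sigmas_gpu[5].item()    # GPU -> CPU copy, synchronizes with the device

# Keeping sigmas on the CPU avoids that round trip entirely:
sigmas_cpu = torch.linspace(1.0, 0.0, 30)
sigma = sigmas_cpu[5].item()    # pure host-side work, no GPU sync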
Using combined projections for attention ops
Both the UNet and the VAE used in SDXL make use of Transformer-like blocks. A Transformer block consists of attention blocks and feed-forward blocks.
In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. In the naive implementation, these projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one shot. This increases the size of the matmuls of the input projections and improves the impact of quantization (to be discussed next).
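As a self-contained sketch of the underlying idea (this is not the Diffusers implementation, just the linear-algebra intuition):

import torch
import torch.nn as nn

d = 64
x = torch.randn(2, 16, d)
q_proj, k_proj, v_proj = (nn.Linear(d, d, bias=False) for _ in range(3))

# Naive: three separate, smaller matmuls.
q, k, v = q_proj(x), k_proj(x), v_proj(x)

# Fused: concatenate the weights, do one larger matmul, then split the result.
qkv_weight = torch.cat([q_proj.weight, k_proj.weight, v_proj.weight], dim=0)  # (3d, d)
q2, k2, v2 = (x @ qkv_weight.t()).chunk(3, dim=-1)

assert torch.allclose(q, q2, atol=1e-5) and torch.allclose(v, v2, atol=1e-5)

The single larger matmul is also a better target for the dynamic int8 quantization discussed below.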
Enabling this kind of computation in Diffusers just takes a single line of code:
pipe.fuse_qkv_projections()
This will make the attention operations for both the UNet and the VAE take advantage of the combined projections. For the cross-attention layers, we only combine the key and value matrices. To learn more, you can refer to the official documentation here. It’s worth noting that we leverage PyTorch’s scaled_dot_product_attention here internally.
These additional techniques improved the inference latency from 2.54 seconds to 2.52 seconds.
Dynamic int8 quantization
We selectively apply dynamic int8 quantization to both the UNet and the VAE. This is because quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance.
Through experimentation, we found that certain linear layers in the UNet and the VAE don’t benefit from dynamic int8 quantization. You can check out the full code for filtering those layers here (referred to as dynamic_quant_filter_fn below).
We leverage the ultra-lightweight pure PyTorch library torchao to use its user-friendly APIs for quantization:
from torchao.quantization import apply_dynamic_quant
apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn)
apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn)
Since this quantization support is limited to linear layers only, we also turn suitable pointwise convolution layers into linear layers to maximize the benefit. We also specify the following compiler flags when using this option:
torch._inductor.config.force_fuse_int_mm_with_mul = True
torch._inductor.config.use_mixed_mm = True
To prevent any numerical issues stemming from quantization, we run everything in the bfloat16 format.
Applying quantization this way improved the latency from 2.52 seconds to 2.43 seconds.
Resources
We welcome you to check out the following codebases to reproduce these numbers and extend the techniques to other text-to-image diffusion systems as well:
Other links
Improvements in other pipelines
We applied these techniques to other pipelines to test the generality of our approach. Below are our findings:
SSD-1B
Stable Diffusion v1-5
PixArt-alpha/PixArt-XL-2-1024-MS
It’s worth noting that PixArt-Alpha uses a Transformer-based architecture as its denoiser for the reverse diffusion process instead of a UNet.
Note that for Stable Diffusion v1-5 and PixArt-Alpha, we didn’t explore the best shape combination criteria for applying dynamic int8 quantization. It might be possible to get better numbers with a better combination.
Collectively, the methods we presented offer substantial speedup over the baseline without degradation in the generation quality. Furthermore, we believe that these methods should complement other optimization methods popular in the community (such as DeepCache, Stable Fast, etc.).
Conclusion and next steps
In this post, we presented a basket of simple yet effective techniques that can help improve the inference latency of text-to-image Diffusion models in pure PyTorch. In summary:
We believe there’s a lot to be explored in terms of how we apply quantization to a text-to-image diffusion system. We didn’t exhaustively explore which layers in the UNet and the VAE tend to benefit from dynamic quantization. There might be opportunities to further speed things up with a better combination of the layers being targeted for quantization.
We kept the text encoders of SDXL untouched other than just running them in bfloat16. Optimizing them might also lead to improvements in latency.
Acknowledgements
Thanks to Ollin Boer Bohan, whose VAE we used throughout the benchmarking process, as it is numerically more stable under reduced precisions.
Thanks to Hugo Larcher from Hugging Face for helping with infrastructure.
February 08, 2022
Practical Quantization in PyTorch
by Suraj Subramanian, Mark Saroufim, Jerry Zhang
Quantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we’ll lay a (quick) foundation of quantization in deep learning, and then take a look at how each technique looks in practice. Finally we’ll end with recommendations from the literature for using quantization in your workflows.
Fig 1. PyTorch <3 Quantization
Fundamentals of Quantization
If someone asks you what time it is, you don’t respond “10:14:34:430705”, but you might say “a quarter past 10”.
Quantization has roots in information compression; in deep networks it refers to reducing the numerical precision of their weights and/or activations.
Overparameterized DNNs have more degrees of freedom and this makes them good candidates for information compression [1]. When you quantize a model, two things generally happen - the model gets smaller and runs with better efficiency. Hardware vendors explicitly allow for faster processing of 8-bit data (than 32-bit data) resulting in higher throughput. A smaller model has lower memory footprint and power consumption [2], crucial for deployment at the edge.
Mapping function
The mapping function is what you might guess - a function that maps values from floating-point to integer space. A commonly used mapping function is a linear transformation given by $Q(r) = \text{round}(r/S + Z)$, where $r$ is the input and $S$, $Z$ are quantization parameters.
To reconvert to floating point space, the inverse function is given by $\tilde{r} = S \left( Q(r) - Z \right)$.
$\tilde{r} \neq r$, and their difference constitutes the quantization error.
Quantization Parameters
The mapping function is parameterized by the scaling factor $S$ and zero-point $Z$.
$S$ is simply the ratio of the input range to the output range
$$S = \frac{\beta - \alpha}{\beta_q - \alpha_q}$$
where $[\alpha, \beta]$ is the clipping range of the input, i.e. the boundaries of permissible inputs, and $[\alpha_q, \beta_q]$ is the range in quantized output space that it is mapped to. For 8-bit quantization, the output range $\beta_q - \alpha_q \leq 2^8 - 1$.
$Z$ acts as a bias to ensure that a 0 in the input space maps perfectly to a 0 in the quantized space: $Z = -\left(\frac{\alpha}{S} - \alpha_q\right)$.
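To make these formulas concrete, here is a small, hedged numeric example (the values are arbitrary; in practice PyTorch’s observers compute $S$ and $Z$ for you, as shown in the next section):

import torch

r = torch.tensor([-1.0, 0.0, 0.5, 1.0])             # toy FP32 input
alpha, beta = r.min(), r.max()                       # clipping range [alpha, beta]
alpha_q, beta_q = -128, 127                          # int8 output range
S = (beta - alpha) / (beta_q - alpha_q)              # scaling factor
Z = int(torch.round(alpha_q - alpha / S))            # zero-point, so alpha maps to alpha_q
q = torch.clamp(torch.round(r / S + Z), alpha_q, beta_q)  # quantize
r_hat = S * (q - Z)                                  # dequantize
quantization_error = r_hat - r                       # generally nonzero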
Calibration
The process of choosing the input clipping range is known as calibration. The simplest technique (also the default in PyTorch) is to record the running minimum and maximum values and assign them to $\alpha$ and $\beta$. TensorRT also uses entropy minimization (KL divergence), mean-square-error minimization, or percentiles of the input range.
In PyTorch, Observer modules (code) collect statistics on the input values and calculate the qparams $(S, Z)$. Different calibration schemes result in different quantized outputs, and it’s best to empirically verify which scheme works best for your application and architecture (more on that later).
import torch
from torch.quantization.observer import MinMaxObserver, MovingAverageMinMaxObserver, HistogramObserver
C, L = 3, 4
normal = torch.distributions.normal.Normal(0,1)
inputs = [normal.sample((C, L)), normal.sample((C, L))]
print(inputs)
# >>>>>
# [tensor([[-0.0590, 1.1674, 0.7119, -1.1270],
# [-1.3974, 0.5077, -0.5601, 0.0683],
# [-0.0929, 0.9473, 0.7159, -0.4574]]),
# tensor([[-0.0236, -0.7599, 1.0290, 0.8914],
# [-1.1727, -1.2556, -0.2271, 0.9568],
# [-0.2500, 1.4579, 1.4707, 0.4043]])]
observers = [MinMaxObserver(), MovingAverageMinMaxObserver(), HistogramObserver()]
for obs in observers:
for x in inputs: obs(x)
print(obs.__class__.__name__, obs.calculate_qparams())
# >>>>>
# MinMaxObserver (tensor([0.0112]), tensor([124], dtype=torch.int32))
# MovingAverageMinMaxObserver (tensor([0.0101]), tensor([139], dtype=torch.int32))
# HistogramObserver (tensor([0.0100]), tensor([106], dtype=torch.int32))
Affine and Symmetric Quantization Schemes
Affine or asymmetric quantization schemes assign the input range to the min and max observed values. Affine schemes generally offer tighter clipping ranges and are useful for quantizing non-negative activations (you don’t need the input range to contain negative values if your input tensors are never negative). The range is calculated as $\alpha = \min(r)$, $\beta = \max(r)$. Affine quantization leads to more computationally expensive inference when used for weight tensors [3].
Symmetric quantization schemes center the input range around 0, eliminating the need to calculate a zero-point offset. The range is calculated as $-\alpha = \beta = \max\left(\lvert\max(r)\rvert, \lvert\min(r)\rvert\right)$. For skewed signals (like non-negative activations) this can result in bad quantization resolution because the clipping range includes values that never show up in the input (see the pyplot below).
import torch
import numpy as np
import matplotlib.pyplot as plt

act = torch.distributions.pareto.Pareto(1, 10).sample((1,1024))
weights = torch.distributions.normal.Normal(0, 0.12).sample((3, 64, 7, 7)).flatten()
def get_symmetric_range(x):
beta = torch.max(x.max(), x.min().abs())
return -beta.item(), beta.item()
def get_affine_range(x):
return x.min().item(), x.max().item()
def plot(plt, data, scheme):
boundaries = get_affine_range(data) if scheme == 'affine' else get_symmetric_range(data)
a, _, _ = plt.hist(data, density=True, bins=100)
ymin, ymax = np.quantile(a[a>0], [0.25, 0.95])
plt.vlines(x=boundaries, ls='--', colors='purple', ymin=ymin, ymax=ymax)
fig, axs = plt.subplots(2,2)
plot(axs[0, 0], act, 'affine')
axs[0, 0].set_title("Activation, Affine-Quantized")
plot(axs[0, 1], act, 'symmetric')
axs[0, 1].set_title("Activation, Symmetric-Quantized")
plot(axs[1, 0], weights, 'affine')
axs[1, 0].set_title("Weights, Affine-Quantized")
plot(axs[1, 1], weights, 'symmetric')
axs[1, 1].set_title("Weights, Symmetric-Quantized")
plt.show()
Fig 2. Clipping ranges (in purple) for affine and symmetric schemes
In PyTorch, you can specify affine or symmetric schemes while initializing the Observer. Note that not all observers support both schemes.
for qscheme in [torch.per_tensor_affine, torch.per_tensor_symmetric]:
obs = MovingAverageMinMaxObserver(qscheme=qscheme)
for x in inputs: obs(x)
print(f"Qscheme: {qscheme} | {obs.calculate_qparams()}")
# >>>>>
# Qscheme: torch.per_tensor_affine | (tensor([0.0101]), tensor([139], dtype=torch.int32))
# Qscheme: torch.per_tensor_symmetric | (tensor([0.0109]), tensor([128]))
Per-Tensor and Per-Channel Quantization Schemes
Quantization parameters can be calculated for the layer’s entire weight tensor as a whole, or separately for each channel. In per-tensor quantization, the same clipping range is applied to all the channels in a layer.
Fig 3. Per-Channel uses one set of qparams for each channel. Per-tensor uses the same qparams for the entire tensor.
For weights quantization, symmetric-per-channel quantization provides better accuracies; per-tensor quantization performs poorly, possibly due to high variance in conv weights across channels from batchnorm folding [3].
from torch.quantization.observer import MovingAveragePerChannelMinMaxObserver
obs = MovingAveragePerChannelMinMaxObserver(ch_axis=0) # calculate qparams for all `C` channels separately
for x in inputs: obs(x)
print(obs.calculate_qparams())
# >>>>>
# (tensor([0.0090, 0.0075, 0.0055]), tensor([125, 187, 82], dtype=torch.int32))
Backend Engine
Currently, quantized operators run on x86 machines via the FBGEMM backend, or use QNNPACK primitives on ARM machines. Backend support for server GPUs (via TensorRT and cuDNN) is coming soon. Learn more about extending quantization to custom backends: RFC-0019.
backend = 'fbgemm' if x86 else 'qnnpack'  # `x86` here stands for a boolean you set based on your platform
qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
QConfig
The QConfig (code) NamedTuple stores the Observers and the quantization schemes used to quantize activations and weights.
Be sure to pass the Observer class (not the instance), or a callable that can return Observer instances. Use with_args() to override the default arguments.
my_qconfig = torch.quantization.QConfig(
activation=MovingAverageMinMaxObserver.with_args(qscheme=torch.per_tensor_affine),
weight=MovingAveragePerChannelMinMaxObserver.with_args(qscheme=torch.qint8)
)
# >>>>>
# QConfig(activation=functools.partial(<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>, qscheme=torch.per_tensor_affine){}, weight=functools.partial(<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>, qscheme=torch.qint8){})
In PyTorch
PyTorch allows you a few different ways to quantize your model depending on
FX Graph Mode automatically fuses eligible modules, inserts Quant/DeQuant stubs, calibrates the model and returns a quantized module - all in two method calls - but only for networks that are symbolically traceable. The examples below contain the calls using Eager Mode and FX Graph Mode for comparison.
In DNNs, eligible candidates for quantization are the FP32 weights (layer parameters) and activations (layer outputs). Quantizing weights reduces the model size. Quantized activations typically result in faster inference.
As an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass.
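A back-of-the-envelope estimate of the weight-memory savings for that example (ignoring quantization parameters and any layers left in FP32):

$$26 \times 10^6 \ \text{weights} \times 4 \ \text{bytes (FP32)} \approx 104 \ \text{MB}, \qquad 26 \times 10^6 \ \text{weights} \times 1 \ \text{byte (INT8)} \approx 26 \ \text{MB}$$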
Post-Training Dynamic/Weight-only Quantization
Here the model’s weights are pre-quantized; the activations are quantized on-the-fly (“dynamic”) during inference. The simplest of all approaches, it has a one-line API call in torch.quantization.quantize_dynamic. Currently only Linear and Recurrent (LSTM, GRU, RNN) layers are supported for dynamic quantization.
(+) Can result in higher accuracies since the clipping range is exactly calibrated for each input [1].
(+) Dynamic quantization is preferred for models like LSTMs and Transformers where writing/reading the model’s weights from memory dominates the bandwidth [4].
(-) Calibrating and quantizing the activations at each layer during runtime can add to the compute overhead.
import torch
from torch import nn
# toy model
m = nn.Sequential(
nn.Conv2d(2, 64, 8),
nn.ReLU(),
nn.Linear(16,10),
nn.LSTM(10, 10))
m.eval()
## EAGER MODE
from torch.quantization import quantize_dynamic
model_quantized = quantize_dynamic(
model=m, qconfig_spec={nn.LSTM, nn.Linear}, dtype=torch.qint8, inplace=False
)
## FX MODE
from torch.quantization import quantize_fx
qconfig_dict = {"": torch.quantization.default_dynamic_qconfig} # An empty key denotes the default applied to all modules
model_prepared = quantize_fx.prepare_fx(m, qconfig_dict)
model_quantized = quantize_fx.convert_fx(model_prepared)
Post-Training Static Quantization (PTQ)
PTQ also pre-quantizes model weights but instead of calibrating activations on-the-fly, the clipping range is pre-calibrated and fixed (“static”) using validation data. Activations stay in quantized precision between operations during inference. About 100 mini-batches of representative data are sufficient to calibrate the observers [2]. The examples below use random data in calibration for convenience - using that in your application will result in bad qparams.
Fig 4. Steps in Post-Training Static Quantization
Module fusion combines multiple sequential modules (eg: [Conv2d, BatchNorm, ReLU]) into one. Fusing modules means the compiler needs to only run one kernel instead of many; this speeds things up and improves accuracy by reducing quantization error.
(+) Static quantization has faster inference than dynamic quantization because it eliminates the float<->int conversion costs between layers.
(-) Static quantized models may need regular re-calibration to stay robust against distribution-drift.
# Static quantization of a model consists of the following steps:
# Fuse modules
# Insert Quant/DeQuant Stubs
# Prepare the fused module (insert observers before and after layers)
# Calibrate the prepared module (pass it representative data)
# Convert the calibrated module (replace with quantized version)
import torch
from torch import nn
import copy
backend = "fbgemm" # running on a x86 CPU. Use "qnnpack" if running on ARM.
model = nn.Sequential(
nn.Conv2d(2,64,3),
nn.ReLU(),
nn.Conv2d(64, 128, 3),
nn.ReLU()
)
## EAGER MODE
m = copy.deepcopy(model)
m.eval()
"""Fuse
- Inplace fusion replaces the first module in the sequence with the fused module, and the rest with identity modules
"""
torch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair
torch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair
"""Insert stubs"""
m = nn.Sequential(torch.quantization.QuantStub(),
*m,
torch.quantization.DeQuantStub())
"""Prepare"""
m.qconfig = torch.quantization.get_default_qconfig(backend)
torch.quantization.prepare(m, inplace=True)
"""Calibrate
- This example uses random data for convenience. Use representative (validation) data instead.
"""
with torch.inference_mode():
for _ in range(10):
x = torch.rand(1,2, 28, 28)
m(x)
"""Convert"""
torch.quantization.convert(m, inplace=True)
"""Check"""
print(m[1].weight().element_size()) # 1 byte instead of 4 bytes for FP32
## FX GRAPH
from torch.quantization import quantize_fx
m = copy.deepcopy(model)
m.eval()
qconfig_dict = {"": torch.quantization.get_default_qconfig(backend)}
# Prepare
model_prepared = quantize_fx.prepare_fx(m, qconfig_dict)
# Calibrate - Use representative (validation) data.
with torch.inference_mode():
for _ in range(10):
x = torch.rand(1,2,28, 28)
model_prepared(x)
# quantize
model_quantized = quantize_fx.convert_fx(model_prepared)
Quantization-aware Training (QAT)
Fig 5. Steps in Quantization-Aware Training
The PTQ approach is great for large models, but accuracy suffers in smaller models [6]. This is of course due to the loss in numerical precision when adapting a model from FP32 to the INT8 realm (Figure 6(a)). QAT tackles this by including the quantization error in the training loss, thereby training an INT8-first model.
Fig 6. Comparison of PTQ and QAT convergence [3]
All weights and biases are stored in FP32, and backpropagation happens as usual. However in the forward pass, quantization is internally simulated via FakeQuantize modules. They are called fake because they quantize and immediately dequantize the data, adding quantization noise similar to what might be encountered during quantized inference. The final loss thus accounts for any expected quantization errors. Optimizing on this allows the model to identify a wider region in the loss function (Figure 6(b)), and identify FP32 parameters such that quantizing them to INT8 does not significantly affect accuracy.
Fig 7. Fake Quantization in the forward and backward pass
Image source: https://developer.nvidia.com/blog/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-tensorrt
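A minimal numerical sketch of what one fake-quantize step does (the real FakeQuantize module also tracks qparams via observers and uses a straight-through estimator for the gradients):

import torch

def fake_quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # Snap to the integer grid, then immediately dequantize back to FP32.
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return scale * (q - zero_point)

x = torch.randn(4)
x_fq = fake_quantize(x, scale=0.02, zero_point=0)   # x_fq carries the simulated quantization noise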
(+) QAT yields higher accuracies than PTQ.
(+) Qparams can be learned during model training for more fine-grained accuracy (see LearnableFakeQuantize)
(-) The computational cost of retraining a model in QAT can be several hundred epochs [1]
# QAT follows the same steps as PTQ, with the exception of the training loop before you actually convert the model to its quantized version
import torch
from torch import nn
backend = "fbgemm" # running on a x86 CPU. Use "qnnpack" if running on ARM.
m = nn.Sequential(
nn.Conv2d(2,64,8),
nn.ReLU(),
nn.Conv2d(64, 128, 8),
nn.ReLU()
)
"""Fuse"""
torch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair
torch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair
"""Insert stubs"""
m = nn.Sequential(torch.quantization.QuantStub(),
*m,
torch.quantization.DeQuantStub())
"""Prepare"""
m.train()
m.qconfig = torch.quantization.get_default_qconfig(backend)
torch.quantization.prepare_qat(m, inplace=True)
"""Training Loop"""
n_epochs = 10
opt = torch.optim.SGD(m.parameters(), lr=0.1)
loss_fn = lambda out, tgt: torch.pow(tgt-out, 2).mean()
for epoch in range(n_epochs):
x = torch.rand(10,2,24,24)
out = m(x)
loss = loss_fn(out, torch.rand_like(out))
opt.zero_grad()
loss.backward()
opt.step()
"""Convert"""
m.eval()
torch.quantization.convert(m, inplace=True)
Sensitivity Analysis
Not all layers respond to quantization equally; some are more sensitive to precision drops than others. Identifying the optimal combination of layers that minimizes accuracy drop is time-consuming, so [3] suggest a one-at-a-time sensitivity analysis to identify which layers are most sensitive, and to retain FP32 precision on those. In their experiments, skipping just 2 conv layers (out of a total of 28 in MobileNet v1) gave them near-FP32 accuracy. Using FX Graph Mode, we can create custom qconfigs to do this easily:
# ONE-AT-A-TIME SENSITIVITY ANALYSIS
for quantized_layer, _ in model.named_modules():
print("Only quantizing layer: ", quantized_layer)
# The module_name key allows module-specific qconfigs.
qconfig_dict = {"": None,
"module_name":[(quantized_layer, torch.quantization.get_default_qconfig(backend))]}
model_prepared = quantize_fx.prepare_fx(model, qconfig_dict)
# calibrate
model_quantized = quantize_fx.convert_fx(model_prepared)
# evaluate(model)
Another approach is to compare statistics of the FP32 and INT8 layers; commonly used metrics for these are SQNR (Signal to Quantized Noise Ratio) and Mean-Square-Error. Such a comparative analysis may also help in guiding further optimizations.
Fig 8. Comparing model weights and activations
PyTorch provides tools to help with this analysis under the Numeric Suite. Learn more about using Numeric Suite from the full tutorial.
# extract from https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html
import torch
import torch.quantization._numeric_suite as ns
def SQNR(x, y):
# Higher is better
Ps = torch.norm(x)
Pn = torch.norm(x-y)
return 20*torch.log10(Ps/Pn)
wt_compare_dict = ns.compare_weights(fp32_model.state_dict(), int8_model.state_dict())
for key in wt_compare_dict:
print(key, SQNR(wt_compare_dict[key]['float'], wt_compare_dict[key]['quantized'].dequantize()))
act_compare_dict = ns.compare_model_outputs(fp32_model, int8_model, input_data)
for key in act_compare_dict:
print(key, SQNR(act_compare_dict[key]['float'][0], act_compare_dict[key]['quantized'][0].dequantize()))
Recommendations for your workflow
Fig 9. Suggested quantization workflow
Points to note
That was a lot to digest, congratulations for sticking with it! Next, we’ll take a look at quantizing a “real-world” model that uses dynamic control structures (if-else, loops). These elements prevent symbolically tracing the model, which makes it a bit tricky to directly quantize the model out of the box. In the next post of this series, we’ll get our hands dirty on a model that is chock full of loops and if-else blocks, and even uses third-party libraries in the forward call.
We’ll also cover a cool new feature in PyTorch Quantization called Define-by-Run, that tries to ease this constraint by needing only subsets of the model’s computational graph to be free of dynamic flow. Check out the Define-by-Run poster at PTDD’21 for a preview.
References
[1] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.
[2] Krishnamoorthi, R. (2018). Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342.
[3] Wu, H., Judd, P., Zhang, X., Isaev, M., & Micikevicius, P. (2020). Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602.
[4] PyTorch Quantization Docs
December 02, 2024
HadaCore: Tensor Core Accelerated Hadamard Transform Kernel
by IBM and Meta
IBM: Krish Agarwal, Rishi Astra, Adnan Hoque, Mudhakar Srivatsa, Raghu Ganti
Meta: Less Wright, Sijia Chen
Quantization is a method for improving model inference speeds by compressing model weights and performing (faster) computation in lower precision data types. However, quantization can result in accuracy loss due to the presence of outliers. Recent works like QuaRot, SpinQuant, and FlashAttention-3 introduce methods to increase the numerical accuracy of INT4, INT8 and FP8 quantization in LLMs. These methods rely on Hadamard Transforms. In this blog, we present HadaCore, a Hadamard Transform CUDA kernel that achieves state-of-the-art performance on NVIDIA A100 and H100 GPUs. Our kernel achieves speedups of 1.1–1.4x and 1.0–1.3x, with a peak gain of 3.5x and 3.6x respectively, over Dao AI Lab’s Fast Hadamard Transform Kernel. We leverage a hardware-aware work decomposition that benefits from Tensor Core acceleration while maintaining quantization error reduction.
Figure 1: Speedup of HadaCore vs the Dao AI Lab Hadamard CUDA kernel. A peak gain of 3.46x on the A100 is achieved using a size-128 rotation on 8.4M elements.
The HadaCore Kernel is publicly available.
Background
QuaRot and SpinQuant both propose methods to increase the numerical accuracy of INT4 and INT8 quantization in LLMs. Both methods rotate model activations, since rotations are statistically likely to reduce the magnitude of outliers: they “distribute” extreme values among other (less extreme) dimensions, and a rotation is easily invertible using the inverse of the rotation matrix. These methods can also improve FP8 inference accuracy, such as in FlashAttention-3.
Figure 2. Transformer block showing online (red) and offline rotations (blue) in QuaRot
Applying these rotation matrices introduces model runtime overhead due to the online operations shown in Figure 2. These rotations can be applied through matrix multiplication, but the added overhead would diminish the benefits from quantization. Therefore, QuaRot and SpinQuant opt to use Walsh-Hadamard matrices, a special type of rotation matrix that can be applied faster than matrix multiplication using the Fast Walsh-Hadamard Transform algorithm. HadaCore is an optimized implementation of this algorithm for NVIDIA GPUs that support Tensor Cores.
Tensor Core Accelerated Hadamard Transform
HadaCore leverages NVIDIA Tensor Cores, which are specialized compute units on NVIDIA GPUs optimized for matrix multiplication. To achieve this, our kernel performs a hardware-aware work decomposition of the Fast Walsh-Hadamard algorithm. This work decomposition ensures that we can utilize the MMA PTX instructions that execute on the Tensor Cores. HadaCore applies a 16×16 Hadamard transform to chunks of the input data. The computation can then be offloaded to the FP16 Tensor Cores using the mma.m16n8k16 instruction. The warp-level parallelism for HadaCore is shown below.
Figure 3: HadaCore Parallelization, 1x256 vectors (rows) being rotated by a size 256 Hadamard.
We process fragments of 256 elements in parallel using warp-level Tensor Core operations to achieve up to a size-256 Hadamard transform. For larger sizes, we shuffle data between warps and repeat.
Microbenchmarks
We benchmark HadaCore against the Dao AI Lab Hadamard Kernel on both NVIDIA H100 and A100 GPUs across varying Hadamard and input tensor sizes.
Figure 4: HadaCore Kernel Speedup on NVIDIA A100 over Dao AI Lab Fast Hadamard Kernel
Color coded Speedup Table for NVIDIA A100, Green = Speedup over Baseline
Figure 5: HadaCore Kernel Speedup on NVIDIA H100 over Dao AI Lab Fast Hadamard Kernel
Color coded Speedup Table for NVIDIA H100, Green = Speedup over Baseline
We showcase our speedup as the input tensor size (labeled element count in our charts) increases. Element count is the number of elements in the target matrix we are rotating. For example, in multi-head attention:
The queries (Q), keys (K) and values (V) tensors are 4D tensors of size:
(batch_size, seq_len, n_heads, head_dim)
A Hadamard matrix of size head_dim is applied to these activation tensors, so we refer to this as using a Hadamard size of head_dim with an element count of:
batch_size*seq_len*n_heads*head_dim.
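For example (hypothetical shapes, just to illustrate the bookkeeping):

batch_size, seq_len, n_heads, head_dim = 4, 8192, 32, 128
element_count = batch_size * seq_len * n_heads * head_dim
print(element_count)   # 134217728 elements, rotated by a size-128 Hadamard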
Common element counts for query rotations in an attention block:
HadaCore achieves 1.1–1.4x speedup on A100 and 1.0–1.3x speedup on H100 over Dao AI Lab’s Fast Hadamard kernel, with a peak gain of 3.5x and 3.6x, respectively. For smaller sizes on H100, HadaCore’s gain decreases. For future work, we plan to incorporate usage of Hopper specific features like TMA and WGMMA for improved H100 performance.
MMLU Benchmarks
We evaluated MMLU scores on a Llama 3.1-8B inference workload where the FlashAttention computation was performed in FP8. Newer generation NVIDIA Hopper GPUs come equipped with FP8 Tensor Cores that deliver substantial compute gain over FP16.
Our results show the benefit of using HadaCore for accuracy preservation when combined with optimizations such as FP8 FlashAttention.
Table 1: MMLU scores for Llama3.1 8B with FP16 baseline and FP8 attention using Hadamard transforms, comparing an implementation with explicit Hadamard matrix multiplications vs. HadaCore (higher is better)
From the above MMLU scores, we note that for Llama3.1-8B inference with FP8 attention, HadaCore improves the quantization error introduced from computing attention in a lower precision.
Conclusion
We showcased our speedups achieved by moving the Fast Walsh-Hadamard algorithm into a CUDA kernel that leverages Tensor Core acceleration and achieves a peak speedup of 3.5x and 3.6x over the Dao AI Fast-Hadamard kernel on NVIDIA A100 and H100, respectively.
Further, we showed on the MMLU benchmark that rotating with HadaCore maintains similar quantization error reduction to the Fast-Hadamard kernel, while providing computational acceleration.
Future Work
We plan to implement a Triton version of our kernel and experiment with more advanced techniques such as kernel fusion to support fused Hadamard transform and quantization. Further, we plan to extend our kernel to support BF16 Tensor Core compute.
November 01, 2024
Deep Dive on CUTLASS Ping-Pong GEMM Kernel
by
Less Wright, Adnan Hoque
Figure 1. FP8 GEMM Throughput Comparison CUTLASS vs Triton
Summary
In this post, we provide an overview of the CUTLASS Ping-Pong GEMM kernel, along with relevant FP8 inference kernel benchmarking.
Ping-Pong is one of the fastest matmul (GEMM) kernel architectures available for the Hopper GPU architecture. Ping-Pong is a member of the Warp Group Specialized Persistent Kernels family, which includes both Cooperative and Ping-Pong variants. Relative to previous GPUs, Hopper’s substantial tensor core compute capability requires deep asynchronous software pipelining in order to achieve peak performance.
The Ping-Pong and Cooperative kernels exemplify this paradigm, as the key design patterns are persistent kernels to amortize launch and prologue overhead, and ‘async everything’ with specialized warp groups with two consumers and one producer, to create a highly overlapped processing pipeline that is able to continuously supply data to the tensor cores.
When the H100 (Hopper) GPU was launched, Nvidia billed it as the first truly asynchronous GPU. That statement highlights the need for H100 specific kernel architectures to also be asynchronous in order to fully maximize computational/GEMM throughput.
The Ping-Pong GEMM, introduced in CUTLASS 3.x, exemplifies this by moving all aspects of the kernel to a ‘fully asynchronous’ processing paradigm. In this blog, we’ll showcase the core features of the Ping-Pong kernel design as well as its performance on inference workloads versus cuBLAS and Triton Split-K kernels.
Ping-Pong Kernel Design
Ping-Pong (or technically ‘sm90_gemm_tma_warpspecialized_pingpong’) operates with an asynchronous pipeline, leveraging warp specialization. Instead of the more classical homogeneous kernels, “warp groups” take on specialized roles. Note that a warp group consists of 4 warps of 32 threads each, or 128 total threads.
On earlier architectures, latency was usually hidden by running multiple thread blocks per SM. However, with Hopper, the Tensor Core throughput is so high that it necessitates moving to deeper pipelines. These deeper pipelines then hinder running multiple thread blocks per SM. Thus, persistent thread blocks now issue collective main loops across multiple tiles and multiple warp groups. Thread block clusters are allocated based on the total SM count.
For Ping-Pong, each warp group takes on a specialized role of either Data producer or Data consumer.
The producer warp group focuses on data movement, filling the shared memory buffers (via TMA). The two other warp groups are dedicated consumers that process the math (MMA) portion with the tensor cores, then do any follow-up work and write their results back to global memory (epilogue).
Producer warp groups work with TMA (Tensor Memory Accelerator) and are deliberately kept as lightweight as possible. In fact, in Ping-Pong, they deliberately reduce their register resources to improve occupancy: producers drop their maximum register count to 40, while consumers raise theirs to 232, an effect we can see in the CUTLASS source and the corresponding SASS.
Unique to Ping-Pong, each consumer works on separate C output tiles. (For reference, the cooperative kernel is largely equivalent to Ping-Pong, but both consumer groups work on the same C output tile). Further, the two consumer warp groups then split their work between the main loop MMA and epilogue.
This is shown in the below image:
Figure 2: An overview of the Ping-Pong Kernel pipeline. Time moves left to right.
Having two consumers means that one can be using the tensor cores for MMA while the other performs the epilogue, and then vice versa. This maximizes the ‘continuous usage’ of the tensor cores on each SM and is a key part of the reason for the max throughput. The tensor cores can be continuously fed data to realize their (near) maximum compute capability. (See the bottom section of the Figure 2 illustration above.)
Similar to how Producer threads stay focused only on data movements, MMA threads only issue MMA instructions in order to achieve peak issue rate. MMA threads must issue multiple MMA instructions and keep these in flight against TMA wait barriers.
An excerpt of the kernel code is shown below to cement the specialization aspects:
// Two types of warp group 'roles'
enum class WarpGroupRole {
  Producer = 0,
  Consumer0 = 1,
  Consumer1 = 2
};

// Warp group role assignment
auto warp_group_role = WarpGroupRole(canonical_warp_group_idx());
Data Movement with Producers and Tensor Memory Accelerator
The producer warps focus exclusively on data movement - specifically they are kept as lightweight as possible and in fact give up some of their register space to the consumer warps (keeping only 40 registers, while consumers will get 232). Their main task is issuing TMA (tensor memory accelerator) commands to move data from Global memory to shared memory as soon as a shared memory buffer is signaled as being empty.
To expand on TMA: the Tensor Memory Accelerator is a hardware component introduced with the H100 that asynchronously handles the transfer of memory from HBM (global memory) to shared memory. By having a dedicated hardware unit for memory movement, worker threads are freed to engage in other work rather than computing and managing data movement. TMA not only handles the movement of the data itself, but also calculates the required destination memory addresses, can apply transforms (reductions, etc.) to the data, and can handle layout transformations to deliver data to shared memory in a ‘swizzled’ pattern so that it’s ready for use without any bank conflicts. Finally, it can also multicast the same data if needed to other SMs that are members of the same thread block cluster. Once the data has been delivered, TMA will then signal the consumer of interest that the data is ready.
CUTLASS Asynchronous Pipeline Class
This signaling between producers and consumers is coordinated via the new Asynchronous Pipeline Class which CUTLASS describes as follows:
“Implementing a persistent GEMM algorithm calls for managing dozens of different kinds of asynchronously executing operations that synchronize using multiple barriers organized as a circular list.
This complexity is too much for human programmers to manage by hand.
As a result, we have developed [CUTLASS Pipeline Async Class]…”
Barriers and synchronization within the Ping-Pong async pipeline
Producers must ‘acquire’ a given smem buffer via ‘producer_acquire’. At the start, a pipeline is empty meaning that producer threads can immediately acquire the barrier and begin moving data.
PipelineState mainloop_pipe_producer_state = cutlass::make_producer_start_state<MainloopPipeline>();
Once the data movement is complete, producers issue the ‘producer_commit’ method to signal the consumer threads that data is ready.
However, for Ping-Pong, this is actually a noop instruction since TMA based producer’s barriers are automatically updated by the TMA when writes are completed.
On the consumer side, two methods coordinate with the producers:
consumer_wait - wait for data from producer threads (blocking).
consumer_release - signal waiting producer threads that they are finished consuming data from a given smem buffer. In other words, allow producers to go to work refilling this buffer with new data.
From there, synchronization will begin in earnest where the producers will wait via the blocking producer acquire until they can acquire a lock, at which point their data movement work will repeat. This continues until the work is finished.
To provide a pseudo-code overview:
// Producer
while (work_tile_info.is_valid_tile()) {
  collective_mainloop.dma();   // fetch data with TMA
  scheduler.advance_to_next_work();
  work_tile_info = scheduler.get_current_work();
}

// Consumer 1, Consumer 2
while (work_tile_info.is_valid_tile()) {
  collective_mainloop.mma();   // tensor core MMA and epilogue
  scheduler.advance_to_next_work();
  work_tile_info = scheduler.get_current_work();
}
And a visual birds-eye view putting it all together with the underlying hardware:
Figure 3: An overview of the full async pipeline for Ping-Pong
Step-by-Step Breakdown of Ping-Pong Computation Loop
Finally, a more detailed logical breakout of the Ping-Pong processing loop:
A - The producer (DMA) warp group acquires a lock on a shared memory buffer.
B - This allows it to kick off a TMA cp.async.bulk request to the TMA unit (via a single thread).
C - TMA computes the actual shared memory addressing required and moves the data to shared memory. As part of this, swizzling is performed in order to lay out the data in smem for the fastest (no bank conflict) access.
C1 - Potentially, data can also be multicast to other SMs, and/or the block may need to wait for data from other TMA multicasts to complete loading. (Thread block clusters now share shared memory across multiple SMs!)
D - At this point, the barrier is updated to signal the arrival of the data in smem.
E - The relevant consumer warp group now gets to work by issuing multiple wgmma.mma_async commands, which read the data from smem into the Tensor Cores as part of its wgmma.mma_async matmul operation.
F - The MMA accumulator values are written to register memory as the tiles are completed.
G - The consumer warp group releases the barrier on the shared memory buffer.
H - The producer warp group goes to work issuing the next TMA instruction to refill the now-free smem buffer.
I - The consumer warp group simultaneously applies any epilogue actions to the accumulator, and then moves data from registers to a different smem buffer.
J - The consumer warp group issues a cp_async command to move the data from smem to global memory.
The cycle repeats until the work is completed. Hopefully this provides you with a working understanding of the core concepts that power Ping-Pong’s impressive performance.
Microbenchmarks
To showcase some of Ping-Pong’s performance, below are some comparison charts related to our work on designing fast inference kernels.
First, a general benchmark of the three fastest kernels so far (lower is better):
Figure 4, above: Benchmark timings of FP8 GEMMs, lower is better (faster)
And translating that into a relative speedup chart of Ping-Pong vs cuBLAS and Triton:
Figure 5, above: Relative speedup of Ping-Pong vs the two closest kernels.
The full source code for the Ping-Pong kernel is here (619 lines of deeply templated CUTLASS code, or, to paraphrase the famous turtle meme, “it’s templates…all the way down!”):
In addition, we have implemented Ping-Pong as a C++ extension to make it easy to integrate with PyTorch here (along with a simple test script showing its usage):
Finally, for continued learning, Nvidia has two GTC videos that dive into kernel design with CUTLASS:
Future Work
Data movement is usually the biggest impediment to top performance for any kernel, and thus having a solid understanding of TMA (Tensor Memory Accelerator) usage on Hopper is vital. We previously published work on TMA usage in Triton. Once features like warp specialization are enabled in Triton, we plan to do another deep dive on how Triton kernels like FP8 GEMM and FlashAttention can leverage kernel designs like Ping-Pong for acceleration on Hopper GPUs.
April 27, 2023
Introducing Hidet: A Deep Learning Compiler for Efficient Model Serving
by
Team Hidet
Hidet is a powerful deep learning compiler that simplifies the process of implementing high-performing deep learning operators on modern accelerators (e.g., NVIDIA GPUs). With the new torch.compile(...) feature in PyTorch 2.0, integrating a novel compiler into PyTorch is easier than ever - Hidet can now be used as a torch.compile(...) backend to accelerate PyTorch models, making it an attractive option for PyTorch users who want to improve the inference performance of their models, especially for those who also need to implement extremely optimized custom operators.
Using Hidet to Compile A PyTorch Model
To use Hidet in PyTorch, you need to first install the hidet package via pip:
pip install hidet
Hidet is integrated with PyTorch as a torch.compile(...) backend following the Custom Backends tutorial. You can specify hidet as the backend when you compile a model (note: this requires PyTorch 2.0 or later):
torch.compile(..., backend='hidet')
Hidet converts the given PyTorch model in the torch.fx.Graph format into its internal graph representation, and conducts a series of optimizations. Hidet provides a few options to configure the optimizations. For example, we can use hidet.torch.dynamo_config.use_tensor_core(True) to allow Hidet to generate CUDA kernels that leverage the Tensor Cores on NVIDIA GPUs, and use hidet.torch.dynamo_config.search_space(2) to allow Hidet to search for the best operator schedule specific for your hardware and input sizes. More configurations can be found in Hidet’s documentation.
Here’s a complete example of how to use Hidet to compile and optimize a pre-trained ResNet50 model from torchvision:
import hidet
import torch

# Load a pre-trained ResNet50 model
x = torch.randn(1, 3, 224, 224, device='cuda').half()
model = torch.hub.load(
    'pytorch/vision:v0.6.0', 'resnet50', pretrained=True
).cuda().half().eval()

# Configure hidet to use tensor core and enable tuning
hidet.torch.dynamo_config.use_tensor_core(True)
hidet.torch.dynamo_config.search_space(2)

# Compile the model using Hidet
model_opt = torch.compile(model, backend='hidet')

# Check correctness
torch.testing.assert_close(actual=model_opt(x), expected=model(x), rtol=1e-2, atol=1e-2)

# Benchmark
from hidet.utils import benchmark_func
print('eager: {:2f}'.format(benchmark_func(lambda: model(x))))
print('hidet: {:2f}'.format(benchmark_func(lambda: model_opt(x))))
We encourage you to try out the above script on your own NVIDIA GPU(s)! If you run this script on an aws.g5.2xlarge instance, you would get the result shown in the following figure. Hidet achieves the speedup because it can automatically fuse multiple operators, tune operator schedules, and use CUDA Graphs to reduce framework-level overhead. More results can be found in the ASPLOS’23 publication of Hidet and our performance tracking.
Using Hidet Script to Write Custom Operators
Hidet Script is one approach to implement tensor operators in Python. The following example shows how to implement a naive matrix multiplication using Hidet Script and integrate it as a PyTorch operator.
import torch
import hidet


def matmul(m_size, n_size, k_size):
    from hidet.lang import f32, attr
    from hidet.lang.cuda import threadIdx, blockIdx, blockDim

    with hidet.script_module() as script_module:
        @hidet.script
        def matmul(
            a: f32[m_size, k_size],
            b: f32[k_size, n_size],
            c: f32[m_size, n_size]
        ):
            attr.cuda_grid_dim = ((m_size + 31) // 32, (n_size + 31) // 32)
            attr.cuda_block_dim = (32, 32)
            i = threadIdx.x + blockIdx.x * blockDim.x
            j = threadIdx.y + blockIdx.y * blockDim.y
            if i < m_size and j < n_size:
                c[i, j] = 0.0
                for k in range(k_size):
                    c[i, j] += a[i, k] * b[k, j]

    ir_module = script_module.ir_module()
    func = hidet.driver.build_ir_module(ir_module)
    return func


class NaiveMatmul(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, b):
        m, k = a.shape
        k, n = b.shape
        c = torch.empty([m, n], dtype=a.dtype, device=a.device)
        func = matmul(m, n, k)
        func(a, b, c)
        return c


a = torch.randn([3, 4], device='cuda')
b = torch.randn([4, 5], device='cuda')
c = NaiveMatmul.apply(a, b)
cc = torch.matmul(a, b)
torch.testing.assert_close(c, cc)
More optimizations can be applied, see the example in our documentation to learn more.
Hidet Script vs. Triton: Triton greatly simplifies the CUDA programming by introducing the tile-based programming model where the parallel execution unit is thread blocks instead of threads. However, this simplification also prevents the tensor program developers from manipulating the fine-grained computation and memory resources (e.g., warps, shared memory) in their preferred ways. It would be challenging to implement an optimization that requires fine-grained control of these resources using Triton if it has not been implemented by the Triton compiler itself. Hidet Script, on the other hand, simplifies tensor programming while still enabling users to implement their own optimizations with extensive flexibility. It’s worth noting that the more granular control of Hidet Script also brings added complexity compared to Triton.
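For contrast, here is the canonical Triton vector-add kernel (adapted from the Triton tutorials, not from the Hidet documentation). Note how the program is written per block of elements rather than per thread, which is the tile-based model described above.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # each program handles one tile
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the tail tile
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device='cuda')
y = torch.randn(4096, device='cuda')
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)

The thread-to-element mapping inside each block is left entirely to the Triton compiler, whereas the Hidet Script example above assigns work to individual threads explicitly.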
More about Hidet
Hidet originates from a research project led by the EcoSystem lab at the University of Toronto (UofT) and AWS. The authors propose a new way, named the task-mapping programming paradigm, to construct tensor programs. It aims to simplify the tensor programming without sacrificing any optimization opportunity. Now, Hidet is an open-source project, jointly supported by CentML and the EcoSystem lab, that aims to provide an efficient solution to end-to-end inference on modern accelerators (e.g., NVIDIA GPUs).
Acknowledgement
We would like to thank Jerry Park, Mark Saroufim, Jason Liang and Helen Suk for their valuable help on preparing the blog post and feedback on the text. We also would like to thank Nikita Shulga, Jason Ansel, and Dmytro Dzhulgakov for reviewing and improving our PR https://github.com/pytorch/pytorch/pull/93873 on the 3rd-party dynamo backend registration.
December 06, 2024
Accelerating 2D Dynamic Block Quantized Float8 GEMMs in Triton
by
Meta: Less Wright, IBM: Adnan Hoque
2D block quantization for Float8 (FP8) holds the promise of improving the accuracy of Float8 quantization while also accelerating GEMMs for both inference and training. In this blog, we showcase advances using Triton for the two main phases involved in doing block quantized Float8 GEMMs.
For the incoming quantization of A and B tensors from high precision (BFloat16) to Float8, we showcase GridQuant which leverages a mini-grid stride loop style of processing with nearly 2x speedups (99.31%) over a current 2D block quantization kernel.
For the Float8 GEMM, we showcase 3 new developments for Triton - Warp Specialization, TMA and a persistent kernel to effectively create a cooperative style kernel (an alternative to the Ping-Pong schedule). As a result, we achieve ~1.2x speedup over our best-performing SplitK kernel from last year.
Figure 1: A comparison of the 2D quantization speedup over a current baseline, across a range of sizes. (lower-is-better)
Why 2D Blockwise Quantization for FP8?
Generally speaking, the accuracy of fp8 quantization improves as we move from tensor-wise scaling, to row-wise scaling, to 2D block-wise, and then finally to column-wise scaling. This is because features for a given token are stored in each column, and thus each column in that tensor is more similarly scaled.
To minimize the number of outliers in a given numerical set, we want to find commonality so that numbers are scaled in a similar fashion. For transformers, this means column-based quantization could be optimal. However, columnar memory access is massively inefficient because the data is laid out in memory in a row-wise contiguous manner; column-wise loading would therefore require memory accesses with large strides to pull isolated values, contrary to the core tenets of efficient memory access.
2D block-wise scaling is the next best option: it captures some of the benefits of columnar scaling while remaining memory efficient, since the loads can still be vectorized. Therefore, we want to improve the speed of 2D block quantization, which is why we developed the GridQuant kernel.
For the quantization process, we need to 2D block quantize both the higher precision BF16 incoming tensors (A = input activations, B = weights) and then proceed to do the Float8 matmul using the quantized tensors and their 2D block scaling values, and return an output C tensor in BF16.
How does GridQuant improve 2D block quantization efficiency?
The GridQuant kernel has several improvements over the initial baseline, which was a standard tile-based quantization implementation. GridQuant makes two full passes through the entire input tensor and works as follows:
Phase 1 - Determine the max abs value for each 256x256 sub block from the incoming high precision tensor.
1 - We divide the BF16 tensor into 256 x 256 sub blocks. This quantization size is configurable, but 256x256 is the default as it provides a blend of quantization precision and processing efficiency.
2 - Each 256x256 sub-block is subdivided into 64 sub-blocks arranged in an 8x8 pattern, with each sub-block processing a 32x32 element block. A single warp (32 threads) handles the computation for all elements within its assigned 32x32 block.
3 - We declare a 32x32 max_vals array in shared memory. This stores the current max value for each position (i, j) as the 2D vector block moves across the entire 256x256 sub-block.
This is an important improvement because it means we can do vectorized, rather than scalar, updates to the max_vals scoring system, allowing for much more efficient updates.
Figure 2: The Fractionalized layout of an incoming tensor - a grid of 256x256 is created across the tensor, and within each 256x256 block, it is further refined into 32x32 sub blocks. A 32x32 max_vals is created for each 256x256 block.
4 - Each warp processes a 32x32 chunk and because we are using 4 warps, we ensure the Triton compiler can pipeline the memory loads for the next 32x32 chunk with the actual processing of absmax calculations for the current chunk. This ensures that the warp scheduler is able to toggle warps loading data with those processing and keep the SM continuously busy.
5 - The 32x32 2D vector block processing is moved across and through the entire 256x256 subblock in a grid stride looping fashion, with each warp updating the shared memory 32x32 max_vals against its current 32x32 sub-block. Thus max_vals[i,j] holds the latest max value as each sub block is processed.
After completing the grid stride loop over the 256x256 block, the max_vals matrix is itself reduced to find the single absolute max value for the entire block.
This gives us our final scaling factor for this 2D 256x256 block.
Phase 2 - Quantize the 256x256 block values to Float8, by using the single max value scaling factor found during Phase 1.
Next, we make a second pass through the entire 256x256 block, rescaling all the numbers using the max value found in Phase 1 to convert them to the Float8 format.
Because we know we need two complete passes, for the loads during the Phase 1 portion we instruct the Triton compiler to keep these values in cache at higher priority (eviction policy = evict_last).
This means that during the second pass, we can get a high hit rate from the L2 cache which provides much faster memory access than going all the way to HBM.
Once all 256x256 blocks are processed, the 2D block quantization is complete and we can return the new Float8 quantized tensor along with its scaling factor matrix, which we’ll use in the next phase of the GEMM processing. This input quantization is repeated for the second input tensor as well, meaning we end up with A_Float8 and A_scaling_matrix, and B_Float8 and B_scaling_matrix.
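As a reference point, here is a plain PyTorch sketch of the same two-phase scheme (per-block absmax, then rescale to Float8). This is our own illustration, not the GridQuant Triton kernel; it assumes the tensor dimensions are divisible by the block size and that the float8_e4m3fn dtype is available (PyTorch 2.1+).

import torch

def blockwise_fp8_quantize(x: torch.Tensor, block: int = 256):
    # Phase 1: per-(block x block) absolute max. Phase 2: rescale to Float8.
    M, N = x.shape
    xb = x.view(M // block, block, N // block, block)
    amax = xb.abs().amax(dim=(1, 3), keepdim=True).float()    # one max per 256x256 block
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    scale = amax.clamp(min=1e-12) / fp8_max                   # per-block scaling factor
    x_fp8 = (xb / scale).to(torch.float8_e4m3fn).view(M, N)
    return x_fp8, scale.view(M // block, N // block)

A = torch.randn(1024, 8192, dtype=torch.bfloat16, device='cuda')
A_fp8, A_scales = blockwise_fp8_quantize(A)   # repeated for the B tensor as well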
GridQuant - GEMM Kernel
The GridQuant-GEMM kernel takes in the four outputs from the quantization above for processing. Our high-performance GEMM kernel features several new Triton developments to achieve SOTA performance for matrix shape profiles relevant in LLM inference during the decoding phase.
These new features are commonly found in Hopper optimized kernels like FlashAttention-3 and Machete, built using CUTLASS 3.x. Here, we discuss these methods and showcase the performance benefits that can be achieved leveraging them in Triton.
Tensor Memory Accelerator (TMA)
The TMA unit on NVIDIA Hopper GPUs is a dedicated hardware unit for load/store operations that act on the multidimensional tensors commonly found in AI workloads. This has several important benefits.
Transferring data between global and shared memory can occur without involving other resources on GPU SMs, freeing up registers and CUDA Cores. Further, when used in warp-specialized kernels, lightweight TMA operations can be assigned to a producer warp, allowing for a high degree of overlap between memory transfers and computation.
For more details on how TMA is used in Triton see our previous blog.
Warp-Specialization (Cooperative Persistent Kernel Design)
Warp Specialization is a technique to leverage pipeline parallelism on GPUs. This experimental feature enables the expression of specialized threads through a tl.async_task API, allowing the user to specify how operations in a Triton program should be “split” amongst warps. The cooperative Triton kernel performs different types of computation and loads that each take place on their own dedicated hardware. Having dedicated hardware for each of these specialized tasks makes it possible to realize parallelism efficiently for operations that have no data dependency.
Figure 3. Logical view of dedicated HW units in NVIDIA H100 SM
The operations in our kernel that create the pipeline are:
A - Load per-block scale from GMEM into SMEM (cp.async engine)
B - Load activation (A) and Weight (B) tiles from GMEM into SMEM (TMA)
C - Matrix-Multiplication of A tile and B tile = C tile (Tensor Core)
D - Scale C tile with per-block scale from A and per-block scale from B (CUDA core)
These steps can be assigned to “tasks” which are carried out by specialized warp groups in a threadblock. The cooperative strategy has three warp groups. A producer warp group that is responsible for feeding the compute units and 2 consumer warp groups that perform the computation. The two consumer warp groups each work on half of the same output tile.
Figure 4. Warp-Specialized Persistent Cooperative kernel (source: NVIDIA)
This is different from the ping-pong schedule we discussed in our previous blog, where each consumer warp group works on different output tiles. We note that the Tensor Core ops are not overlapped with the epilogue computation. Decreased utilization of the Tensor Core pipeline during the epilogue phase of the computation will reduce register pressure for the consumer warp group compared to ping-pong which always keeps the Tensor Core busy, thus allowing for larger tile sizes.
Lastly, our kernel is designed to be persistent when the grid size exceeds the number of available compute units on H100 GPUs (132). Persistent kernels remain active on the GPU for an extended period and compute multiple output tiles during their lifetime. Our kernel leverages TMA async shared-to-global memory stores, continuing to do work on the next output tile rather than incurring the cost of scheduling multiple thread blocks.
Microbenchmarks
Figure 5: Latency comparison (us) of Gridquant-GEMM vs our best performing SplitK kernel for small batch regime and Llama3 8192 N,K sizing. (lower-is-better)
The Warp-Specialized Triton kernel achieves SOTA performance at the above small-M and square matrix shapes, achieving a nearly 1.2x speedup over the SplitK Triton kernel, which was the previous best performing strategy for Triton GEMMs in this low arithmetic intensity regime. For future work, we plan to tune our kernel performance for the medium-to-large M regime and non-square matrices.
Conclusion and Future Work
Future work includes benchmarking GridQuant on end-to-end workflows. In addition, we plan to run more extensive benchmarks on non-square (rectangular) matrices as well as medium-to-large M sizes. Finally, we plan to explore ping-pong style warp specialization in Triton versus the current cooperative implementation.
December 15, 2021
Efficient PyTorch: Tensor Memory Format Matters
by
Dhruv Matani, Suraj Subramanian
Ensuring the right memory format for your inputs can significantly impact the running time of your PyTorch vision models. When in doubt, choose a Channels Last memory format.
When dealing with vision models in PyTorch that accept multimedia (for example, image Tensors) as input, the Tensor’s memory format can significantly impact the inference execution speed of your model on mobile platforms when using the CPU backend along with XNNPACK. This holds true for training and inference on server platforms as well, but latency is particularly critical for mobile devices and users.
Matrix Storage Representation in C++
Images are fed into PyTorch ML models as multi-dimensional Tensors. These Tensors have specific memory formats. To understand this concept better, let’s take a look at how a 2-d matrix may be stored in memory.
Broadly speaking, there are two main ways of efficiently storing multi-dimensional data in memory: row-major order and column-major order.
You can see the differences graphically below.
C++ stores multi-dimensional data in row-major format.
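A quick way to see the two layouts (using NumPy here rather than C++; this snippet is ours, not from the original post) is to compare the strides of the same array stored in C (row-major) versus Fortran (column-major) order:

import numpy as np

a = np.zeros((4, 6), order='C')   # row-major: each row is contiguous
b = np.zeros((4, 6), order='F')   # column-major: each column is contiguous

# Strides are in bytes (float64 = 8 bytes):
print(a.strides)   # (48, 8)  -> step 8 bytes to the next column, 48 to the next row
print(b.strides)   # (8, 32)  -> step 8 bytes to the next row, 32 to the next column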
Efficiently accessing elements of a 2d matrix
Similar to the storage format, there are 2 ways to access data in a 2d matrix.
For maximum efficiency, one should always access data in the same format in which it is stored. I.e. if the data is stored in row-major order, then one should try to access it in that order.
The code below (main.cpp) shows 2 ways of accessing all the elements of a 2d 4000x4000 matrix.
#include <iostream>
#include <chrono>
#include <cstdlib>  // rand()

// loop1 accesses data in matrix 'a' in row major order,
// since i is the outer loop variable, and j is the
// inner loop variable.
int loop1(int a[4000][4000]) {
  int s = 0;
  for (int i = 0; i < 4000; ++i) {
    for (int j = 0; j < 4000; ++j) {
      s += a[i][j];
    }
  }
  return s;
}

// loop2 accesses data in matrix 'a' in column major order
// since j is the outer loop variable, and i is the
// inner loop variable.
int loop2(int a[4000][4000]) {
  int s = 0;
  for (int j = 0; j < 4000; ++j) {
    for (int i = 0; i < 4000; ++i) {
      s += a[i][j];
    }
  }
  return s;
}

int main() {
  static int a[4000][4000] = {0};
  for (int i = 0; i < 100; ++i) {
    int x = rand() % 4000;
    int y = rand() % 4000;
    a[x][y] = rand() % 1000;
  }

  auto start = std::chrono::high_resolution_clock::now();
  auto end = start;
  int s = 0;

#if defined RUN_LOOP1
  start = std::chrono::high_resolution_clock::now();
  s = 0;
  for (int i = 0; i < 10; ++i) {
    s += loop1(a);
    s = s % 100;
  }
  end = std::chrono::high_resolution_clock::now();

  std::cout << "s = " << s << std::endl;
  std::cout << "Time for loop1: "
            << std::chrono::duration<double, std::milli>(end - start).count()
            << "ms" << std::endl;
#endif

#if defined RUN_LOOP2
  start = std::chrono::high_resolution_clock::now();
  s = 0;
  for (int i = 0; i < 10; ++i) {
    s += loop2(a);
    s = s % 100;
  }
  end = std::chrono::high_resolution_clock::now();

  std::cout << "s = " << s << std::endl;
  std::cout << "Time for loop2: "
            << std::chrono::duration<double, std::milli>(end - start).count()
            << "ms" << std::endl;
#endif
}
Let’s build and run this program and see what it prints.
g++ -O2 main.cpp -DRUN_LOOP1 -DRUN_LOOP2
./a.out
Prints the following:
s = 70
Time for loop1: 77.0687ms
s = 70
Time for loop2: 1219.49ms
loop1() is 15x faster than loop2(). Why is that? Let’s find out below!
Measure cache misses using Cachegrind
Cachegrind is a cache profiling tool used to see how many I1 (first level instruction), D1 (first level data), and LL (last level) cache misses your program caused.
Let’s build our program with just loop1() and just loop2() to see how cache friendly each of these functions is.
Build and run/profile just loop1()
g++ -O2 main.cpp -DRUN_LOOP1
valgrind --tool=cachegrind ./a.out
Prints:
==3299700==
==3299700== I refs: 643,156,721
==3299700== I1 misses: 2,077
==3299700== LLi misses: 2,021
==3299700== I1 miss rate: 0.00%
==3299700== LLi miss rate: 0.00%
==3299700==
==3299700== D refs: 160,952,192 (160,695,444 rd + 256,748 wr)
==3299700== D1 misses: 10,021,300 ( 10,018,723 rd + 2,577 wr)
==3299700== LLd misses: 10,010,916 ( 10,009,147 rd + 1,769 wr)
==3299700== D1 miss rate: 6.2% ( 6.2% + 1.0% )
==3299700== LLd miss rate: 6.2% ( 6.2% + 0.7% )
==3299700==
==3299700== LL refs: 10,023,377 ( 10,020,800 rd + 2,577 wr)
==3299700== LL misses: 10,012,937 ( 10,011,168 rd + 1,769 wr)
==3299700== LL miss rate: 1.2% ( 1.2% + 0.7% )
Build and run/profile just loop2()
g++ -O2 main.cpp -DRUN_LOOP2
valgrind --tool=cachegrind ./a.out
Prints:
==3300389==
==3300389== I refs: 643,156,726
==3300389== I1 misses: 2,075
==3300389== LLi misses: 2,018
==3300389== I1 miss rate: 0.00%
==3300389== LLi miss rate: 0.00%
==3300389==
==3300389== D refs: 160,952,196 (160,695,447 rd + 256,749 wr)
==3300389== D1 misses: 160,021,290 (160,018,713 rd + 2,577 wr)
==3300389== LLd misses: 10,014,907 ( 10,013,138 rd + 1,769 wr)
==3300389== D1 miss rate: 99.4% ( 99.6% + 1.0% )
==3300389== LLd miss rate: 6.2% ( 6.2% + 0.7% )
==3300389==
==3300389== LL refs: 160,023,365 (160,020,788 rd + 2,577 wr)
==3300389== LL misses: 10,016,925 ( 10,015,156 rd + 1,769 wr)
==3300389== LL miss rate: 1.2% ( 1.2% + 0.7% )
The main difference between the two runs is the D1 (L1 data) cache miss count: as you can see, loop2() causes many more (~16x more) L1 data cache misses than loop1(). This is why loop1() is ~15x faster than loop2().
Memory Formats supported by PyTorch Operators
While PyTorch operators expect all tensors to be in Channels First (NCHW) dimension format, PyTorch operators support 3 output memory formats: Contiguous (NCHW), Channels Last (NHWC), and Channels Last 3D (NDHWC).
The reason that ChannelsLast is preferred for vision models is because XNNPACK (kernel acceleration library) used by PyTorch expects all inputs to be in Channels Last format, so if the input to the model isn’t channels last, then it must first be converted to channels last, which is an additional operation.
Additionally, most PyTorch operators preserve the input tensor’s memory format, so if the input is Channels First, then the operator needs to first convert to Channels Last, then perform the operation, and then convert back to Channels First.
Combine this with the fact that accelerated operators work better with a channels-last memory format, and having an operator return a channels-last tensor is better for subsequent operator calls; otherwise, every operator that is more efficient in channels-last would have to convert its input to channels-last itself.
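For example (a small sketch of ours, not from the original post), you can check and convert a tensor's memory format explicitly:

import torch

x = torch.rand(1, 3, 8, 8)                                  # NCHW, contiguous (Channels First)
y = x.to(memory_format=torch.channels_last)                 # same shape, NHWC layout underneath

print(x.is_contiguous())                                    # True
print(y.is_contiguous(memory_format=torch.channels_last))   # True
print(x.shape == y.shape)                                   # True - only the strides differ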
From the XNNPACK home page:
“All operators in XNNPACK support NHWC layout, but additionally allow custom stride along the Channel dimension”.
PyTorch Best Practice
The best way to get the most performance from your PyTorch vision models is to ensure that your input tensor is in a Channels Last memory format before it is fed into the model.
You can get even more speedups by optimizing your model to use the XNNPACK backend (by simply calling optimize_for_mobile() on your torchscripted model). Note that XNNPACK models will run slower if the inputs are contiguous, so definitely make sure the input is in Channels-Last format.
Working example showing speedup
Run this example on Google Colab - note that runtimes on colab CPUs might not reflect accurate performance; it is recommended to run this code on your local machine.
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
import torch.backends.xnnpack
import time
print("XNNPACK is enabled: ", torch.backends.xnnpack.enabled, "\n")
N, C, H, W = 1, 3, 200, 200
x = torch.rand(N, C, H, W)
print("Contiguous shape: ", x.shape)
print("Contiguous stride: ", x.stride())
print()
xcl = x.to(memory_format=torch.channels_last)
print("Channels-Last shape: ", xcl.shape)
print("Channels-Last stride: ", xcl.stride())
## Outputs:
# XNNPACK is enabled: True
# Contiguous shape: torch.Size([1, 3, 200, 200])
# Contiguous stride: (120000, 40000, 200, 1)
# Channels-Last shape: torch.Size([1, 3, 200, 200])
# Channels-Last stride: (120000, 1, 600, 3)
The input shape stays the same for contiguous and channels-last formats. Internally however, the tensor’s layout has changed as you can see in the strides. Now, the number of jumps required to go across channels is only 1 (instead of 40000 in the contiguous tensor).
This better data locality means convolution layers can access all the channels for a given pixel much faster. Let’s see now how the memory format affects runtime:
from torchvision.models import resnet34, resnet50, resnet101

m = resnet34(pretrained=False)
# m = resnet50(pretrained=False)
# m = resnet101(pretrained=False)

def get_optimized_model(mm):
    mm = mm.eval()
    scripted = torch.jit.script(mm)
    optimized = optimize_for_mobile(scripted)  # explicitly call the xnnpack rewrite
    return scripted, optimized


def compare_contiguous_CL(mm):
    # inference on contiguous
    start = time.perf_counter()
    for i in range(20):
        mm(x)
    end = time.perf_counter()
    print("Contiguous: ", end - start)

    # inference on channels-last
    start = time.perf_counter()
    for i in range(20):
        mm(xcl)
    end = time.perf_counter()
    print("Channels-Last: ", end - start)


with torch.inference_mode():
    scripted, optimized = get_optimized_model(m)

    print("Runtimes for torchscripted model: ")
    compare_contiguous_CL(scripted.eval())
    print()
    print("Runtimes for mobile-optimized model: ")
    compare_contiguous_CL(optimized.eval())

## Outputs (on an Intel Core i9 CPU):
# Runtimes for torchscripted model:
# Contiguous: 1.6711160129999598
# Channels-Last: 1.6678222839999535

# Runtimes for mobile-optimized model:
# Contiguous: 0.5712863490000473
# Channels-Last: 0.46113000699995155
Conclusion
The Memory Layout of an input tensor can significantly impact a model’s running time. For Vision Models, prefer a Channels Last memory format to get the most out of your PyTorch models.
July 28, 2020
Introducing native PyTorch automatic mixed precision for faster training on NVIDIA GPUs
by
Mengdi Huang, Chetan Tekur, Michael Carilli
Most deep learning frameworks, including PyTorch, train with 32-bit floating point (FP32) arithmetic by default. However, this is not essential to achieve full accuracy for many deep learning models. In 2017, NVIDIA researchers developed a methodology for mixed-precision training, which combined single-precision (FP32) with half-precision (e.g., FP16) format when training a network, and achieved the same accuracy as FP32 training using the same hyperparameters, with additional performance benefits on NVIDIA GPUs.
In order to streamline the user experience of training in mixed precision for researchers and practitioners, NVIDIA developed Apex in 2018, which is a lightweight PyTorch extension with Automatic Mixed Precision (AMP) feature. This feature enables automatic conversion of certain GPU operations from FP32 precision to mixed precision, thus improving performance while maintaining accuracy.
For the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, torch.cuda.amp. torch.cuda.amp is more flexible and intuitive compared to apex.amp. Some of apex.amp’s known pain points that torch.cuda.amp has been able to fix:
With AMP being added to PyTorch core, we have started the process of deprecating apex.amp. We have moved apex.amp to maintenance mode and will support customers using apex.amp. However, we highly encourage apex.amp customers to transition to using torch.cuda.amp from PyTorch Core.
Example Walkthrough
Please see official docs for usage:
Example:
import torch

# Creates once at the beginning of training
scaler = torch.cuda.amp.GradScaler()

for data, label in data_iter:
    optimizer.zero_grad()

    # Casts operations to mixed precision
    with torch.cuda.amp.autocast():
        loss = model(data)

    # Scales the loss, and calls backward()
    # to create scaled gradients
    scaler.scale(loss).backward()

    # Unscales gradients and calls
    # or skips optimizer.step()
    scaler.step(optimizer)

    # Updates the scale for next iteration
    scaler.update()
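For inference-only workloads, a minimal sketch (assuming model and eval_data_iter exist, analogous to the training example above) is to wrap the forward pass in autocast without a GradScaler, since there are no gradients to scale:

# Inference with autocast only; no GradScaler is needed.
model.eval()
with torch.no_grad():
    with torch.cuda.amp.autocast():
        for data in eval_data_iter:
            output = model(data)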
Performance Benchmarks
In this section, we discuss the accuracy and performance of mixed precision training with AMP on the latest NVIDIA A100 GPU and the previous-generation V100 GPU. The mixed precision performance is compared to FP32 performance when running deep learning workloads in the NVIDIA pytorch:20.06-py3 container from NGC.
Accuracy: AMP (FP16), FP32
The advantage of using AMP for deep learning training is that the models converge to similar final accuracy while providing improved training performance. To illustrate this point, for ResNet-50 v1.5 training, we see the following accuracy results, where higher is better. Please note that the below accuracy numbers are sample numbers that are subject to run-to-run variance of up to 0.4%. Accuracy numbers for other models including BERT, Transformer, ResNeXt-101, Mask-RCNN, and DLRM can be found at the NVIDIA Deep Learning Examples GitHub.
Training accuracy: NVIDIA DGX A100 (8x A100 40GB)
Training accuracy: NVIDIA DGX-1 (8x V100 16GB)
Speedup Performance:
FP16 on NVIDIA V100 vs. FP32 on V100
AMP with FP16 is the most performant option for DL training on the V100. In Table 1, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.
Figure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better.
FP16 on NVIDIA A100 vs. FP16 on V100
AMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.
Figure 3. Performance of mixed precision training on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100. The higher the better.
Call to action
AMP provides a healthy speedup for Deep Learning training workloads on Nvidia Tensor Core GPUs, especially on the latest Ampere generation A100 GPUs. You can start experimenting with AMP enabled models and model scripts for A100, V100, T4 and other GPUs available at NVIDIA deep learning examples. NVIDIA PyTorch with native AMP support is available from the PyTorch NGC container version 20.06. We highly encourage existing apex.amp customers to transition to using torch.cuda.amp from PyTorch Core available in the latest PyTorch 1.6 release.
March 26, 2020
Introduction to Quantization on PyTorch
by
Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath, and Seth Weidman
It’s important to make efficient use of both server-side and on-device compute resources when developing machine learning applications. To support more efficient deployment on servers and edge devices, PyTorch added a support for model quantization using the familiar eager mode Python API.
Quantization leverages 8-bit integer (int8) instructions to reduce model size and run inference faster (reduced latency), and can be the difference between a model achieving quality-of-service goals or even fitting into the resources available on a mobile device. Even when resources aren’t quite so constrained, it may enable you to deploy a larger and more accurate model. Quantization is available in PyTorch starting in version 1.3, and with the release of PyTorch 1.4 we published quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.
This blog post provides an overview of the quantization support on PyTorch and its incorporation with the TorchVision domain library.
What is Quantization?
Quantization refers to techniques for doing both computations and memory accesses with lower-precision data, usually int8, compared to floating point implementations. This enables performance gains in several important areas, including model size, memory bandwidth, and inference latency.
Quantization does not, however, come without additional cost. Fundamentally, quantization means introducing approximations, and the resulting networks have slightly less accuracy. These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy.
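As a concrete illustration of the idea, the short sketch below quantizes a float tensor to uint8 with an affine scale and zero point and then dequantizes it; the scale and zero-point values are arbitrary choices for the example, not defaults from any PyTorch API.

import torch

x = torch.tensor([-1.0, 0.0, 0.5, 2.0])
# Affine quantization: q = round(x / scale) + zero_point, stored as uint8.
xq = torch.quantize_per_tensor(x, scale=0.05, zero_point=128, dtype=torch.quint8)
print(xq.int_repr())    # underlying uint8 values: 108, 128, 138, 168
print(xq.dequantize())  # approximate reconstruction of the original floats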
We designed quantization to fit naturally into the PyTorch framework.
We developed three techniques for quantizing neural networks in PyTorch as part of the quantization tooling in the torch.quantization namespace.
The Three Modes of Quantization Supported in PyTorch starting with version 1.3
Dynamic Quantization
The easiest method of quantization PyTorch supports is called dynamic quantization. This involves not just converting the weights to int8 - as happens in all quantization variants - but also converting the activations to int8 on the fly, just before doing the computation (hence “dynamic”). The computations will thus be performed using efficient int8 matrix multiplication and convolution implementations, resulting in faster compute. However, the activations are read and written to memory in floating point format.
import torch.quantization
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
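Below is a hedged, self-contained expansion of the two lines above: it dynamically quantizes the Linear layers of a toy model, runs int8 inference, and compares on-disk size. The model, shapes, and temporary file name are placeholders; the exact size ratio depends on the real architecture.

import os
import torch
import torch.nn as nn
import torch.quantization

float_model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
quantized_model = torch.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized_model(torch.randn(1, 1024))   # weights are int8, activations stay float

def size_on_disk(model, path="tmp_model.pt"):
    torch.save(model.state_dict(), path)
    size = os.path.getsize(path)
    os.remove(path)
    return size

print("float32 size:", size_on_disk(float_model), "bytes")
print("int8 size:   ", size_on_disk(quantized_model), "bytes")  # roughly 4x smaller for the Linear weights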
Post-Training Static Quantization
One can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting “observer” modules at different points that record these distributions). This information is used to determine how specifically the different activations should be quantized at inference time (a simple technique would be to simply divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.
With this release, we’re supporting several additional features that allow users to optimize their static quantization.
PyTorch API:
We have a tutorial with an end-to-end example of quantization (this same tutorial also covers our third quantization method, quantization-aware training), but because of our simple API, the three lines that perform post-training static quantization on the pre-trained model myModel are:
# set quantization config for server (x86) deployment
myModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')
# insert observers
torch.quantization.prepare(myModel, inplace=True)
# Calibrate the model and collect statistics
# convert to quantized version
torch.quantization.convert(myModel, inplace=True)
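The snippet above leaves the calibration step as a comment. The following is a hedged, self-contained sketch of the full eager-mode flow with a hypothetical calibration loop; the toy model with QuantStub/DeQuantStub and the random calibration data are placeholders for the tutorial's pre-trained myModel and real calibration batches, and it assumes an x86 machine where the fbgemm backend is available.

import torch
import torch.nn as nn
import torch.quantization

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # marks the float -> int8 boundary
        self.fc = nn.Linear(128, 10)
        self.dequant = torch.quantization.DeQuantStub()  # marks the int8 -> float boundary
    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = ToyModel().eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')  # server (x86) config
torch.quantization.prepare(model, inplace=True)                    # insert observers
with torch.no_grad():
    for _ in range(8):                                             # calibration: record activation ranges
        model(torch.randn(32, 128))
torch.quantization.convert(model, inplace=True)                    # swap in quantized modules
print(model)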
Quantization Aware Training
Quantization-aware training (QAT) is the third method, and the one that typically results in the highest accuracy of the three. With QAT, all weights and activations are “fake quantized” during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. Thus, all the weight adjustments during training are made while “aware” of the fact that the model will ultimately be quantized; after quantizing, therefore, this method usually yields higher accuracy than the other two methods.
PyTorch API:
For example, in the end-to-end example, we load in a pre-trained model as qat_model, then we simply perform quantization-aware training using:
# specify quantization config for QAT
qat_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
# prepare QAT
torch.quantization.prepare_qat(qat_model, inplace=True)
# convert to quantized version, removing dropout, to check for accuracy on each epoch
quantized_model = torch.quantization.convert(qat_model.eval(), inplace=False)
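To put these lines in context, here is a hedged, self-contained sketch of a QAT loop; the toy model, random data, optimizer settings, and epoch count are placeholders for a real training setup, and the fbgemm backend is assumed.

import torch
import torch.nn as nn
import torch.quantization

class ToyQATModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(128, 10)
        self.dequant = torch.quantization.DeQuantStub()
    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

qat_model = ToyQATModel().train()
qat_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(qat_model, inplace=True)            # insert fake-quant modules

optimizer = torch.optim.SGD(qat_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(2):
    for _ in range(10):                                            # stand-in for a real data loader
        x = torch.randn(32, 128)
        y = torch.randint(0, 10, (32,))
        optimizer.zero_grad()
        loss_fn(qat_model(x), y).backward()                        # gradients flow through fake-quant ops
        optimizer.step()
    # take an int8 snapshot after each epoch to track quantized accuracy
    int8_model = torch.quantization.convert(qat_model.eval(), inplace=False)
    qat_model.train()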
Device and Operator Support
Quantization support is restricted to a subset of available operators, depending on the method being used. For a list of supported operators, please see the documentation at https://pytorch.org/docs/stable/quantization.html.
The set of available operators and the quantization numerics also depend on the backend being used to run quantized models. Currently, quantized operators are supported only for CPU inference in the following backends: x86 and ARM. Both the quantization configuration (how tensors should be quantized) and the quantized kernels (arithmetic with quantized tensors) are backend dependent. One can specify the backend by doing:
import torch
backend = 'fbgemm'
# 'fbgemm' for server, 'qnnpack' for mobile
my_model.qconfig = torch.quantization.get_default_qconfig(backend)
# prepare and convert model
# Set the backend on which the quantized kernels need to be run
torch.backends.quantized.engine=backend
However, quantization-aware training occurs in full floating point and can run on either GPU or CPU. Quantization-aware training is typically only used in CNN models when post-training static or dynamic quantization doesn’t yield sufficient accuracy. This can occur with models that are highly optimized to achieve small size (such as MobileNet).
Integration in torchvision
We’ve also enabled quantization for some of the most popular models in torchvision: GoogleNet, Inception, ResNet, ResNeXt, MobileNet and ShuffleNet. We have upstreamed these changes to torchvision in three forms.
Choosing an approach
The choice of which scheme to use depends on multiple factors, and operator coverage is currently limited, which may restrict the available options. As a rough guideline: dynamic quantization works well for LSTM, Transformer, and BERT-style models; static post-training quantization suits CNNs; and quantization-aware training is for CNNs where static quantization does not reach the required accuracy.
Performance Results
Quantization provides a 4x reduction in model size and a speedup of 2x to 3x compared to floating point implementations, depending on the hardware platform and the model being benchmarked.
Accuracy results
We also compared the accuracy of statically quantized models with the floating point models on ImageNet. For dynamic quantization, we compared the F1 score of BERT on the GLUE benchmark for MRPC.
Computer Vision Model accuracy
Speech and NLP Model accuracy
Conclusion
To get started on quantizing your models in PyTorch, start with the tutorials on the PyTorch website. If you are working with sequence data, start with dynamic quantization for LSTM or BERT. If you are working with image data, we recommend starting with the transfer learning with quantization tutorial. Then you can explore static post-training quantization. If you find that the accuracy drop with post-training quantization is too high, then try quantization-aware training.
If you run into issues, you can get community help by posting at discuss.pytorch.org; use the quantization category for quantization-related issues.
This post is authored by Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath and Seth Weidman. Special thanks to Jianyu Huang, Lingyi Liu and Haixin Liu for producing quantization metrics included in this post.
March 13, 2024
Maximizing training throughput using PyTorch FSDP
by Team PyTorch at IBM and Team PyTorch at Meta
In this blog, we demonstrate the scalability of FSDP with a pre-training exemplar, a 7B model trained for 2T tokens, and share various techniques we used to achieve a rapid training speed of 3,700 tokens/sec/GPU, or 40B tokens/day on 128 A100 GPUs. This translates to a model FLOPS utilization (MFU) and hardware FLOPS utilization (HFU) of 57%. Additionally, we have observed near linear scaling of FSDP to 512 GPUs, implying that training a 7B model on 512 GPUs to 2T tokens using this method would take just under two weeks.
IBM researchers trained a Meta Llama 2 7B architecture to 2T tokens, which we will refer to as LlamaT(est). This model demonstrates comparable model quality to Llama 2 on various academic benchmarks. All of the training code, along with our methodology to achieve this throughput, can be found in this blog. We also share the configuration knobs that work well for the Llama 2 models – 7B, 13B, 34B, and 70B – for A100s and H100s.
In this process, we also propose a new selective activation checkpointing mechanism that applies to FSDP and gives us a 10% boost beyond out-of-the-box FSDP. We have open sourced the training code base and an associated scalable data loader, along with the methodology to achieve this throughput.
One key benefit of a PyTorch native pathway for training is the ability to seamlessly train on multiple hardware backends. For example, the recent end-to-end stack for training that was released by AllenAI through OLMo also leverages PyTorch FSDP for training on AMD and NVIDIA GPUs. There are three main components that we leverage from FSDP to achieve our throughput.
IBM has been working closely with Team PyTorch at Meta on PyTorch FSDP for nearly two years: introducing the rate limiter for achieving better throughput on Ethernet interconnects, distributed checkpointing to improve the checkpoint times by an order of magnitude, and implementing the early version of checkpointing for the hybrid sharding mode of FSDP. Late last year, we used FSDP to train a model end-to-end.
Training Details
The 7B model is trained on 128 A100 GPUs with 400Gbps network connectivity and GPU direct RDMA. We use SDPA FlashAttention v2 for attention computation, and for this model we turned off activation checkpointing, which limits the batch size but provides the highest throughput; the batch size is 1 million tokens per batch for 128 GPUs, and throughput improves by about 10% compared to running with activation checkpointing. With these parameters, we have an almost full overlap of computation and communication. We use the AdamW optimizer in 32-bit with beta1 of 0.9 and beta2 of 0.95, weight decay of 0.1, and a learning rate that warms up to a maximum of 3e-4 and follows a cosine schedule down to 3e-5 over 2T tokens. The training was performed using mixed precision bf16 on an internal dataset. The training stack uses IBM’s Foundation Model Stack for the model architecture and PyTorch nightlies post-2.2 release for FSDP and SDPA. We tried a few different nightlies during the period of Nov 2023 through Feb 2024 and observed an improvement in throughput.
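As a rough illustration of such a configuration, the sketch below wraps a small stand-in transformer with FSDP using bf16 mixed precision and an AdamW/cosine setup similar to the one described above. The tiny model, layer sizes, T_max value, and launch assumptions (torchrun with NCCL) are placeholders, not IBM's Foundation Model Stack code.

import functools
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

dist.init_process_group("nccl")                       # assumes launch via torchrun
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = nn.TransformerEncoder(                        # small stand-in for the 7B model
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=4
).cuda()

bf16_policy = MixedPrecision(param_dtype=torch.bfloat16,    # parameters gathered/computed in bf16
                             reduce_dtype=torch.bfloat16,   # gradient reduction in bf16
                             buffer_dtype=torch.bfloat16)
wrap_policy = functools.partial(transformer_auto_wrap_policy,
                                transformer_layer_cls={nn.TransformerEncoderLayer})

fsdp_model = FSDP(model, auto_wrap_policy=wrap_policy, mixed_precision=bf16_policy,
                  use_orig_params=True)

optimizer = torch.optim.AdamW(fsdp_model.parameters(), lr=3e-4,
                              betas=(0.9, 0.95), weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100_000,  # placeholder step count
                                                       eta_min=3e-5)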
Selective activation checkpointing
We jointly implemented a simple and effective mechanism of selective activation checkpointing (AC). In FSDP, the common practice is to checkpoint each transformer block. A simple extension is to checkpoint every n blocks, reducing the amount of recomputation while increasing the memory needed. This is quite effective for the 13B model size, increasing the throughput by 10%. For the 7B model size, we did not need activation checkpointing at all. Future versions of FSDP will provide selective activation checkpointing at an operator level, enabling an optimal compute-memory tradeoff. The code for the above is implemented here.
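A minimal sketch of the every-n-blocks idea, using the (underscore-prefixed, internal) activation-checkpoint wrapper utilities in torch.distributed.algorithms; the block class, the choice of n=3, and the toy model are placeholders, and the linked repository's actual implementation may differ.

import functools
import torch.nn as nn
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    apply_activation_checkpointing,
    checkpoint_wrapper,
)

def make_check_fn(block_cls, every_n):
    counter = {"i": 0}
    def check_fn(submodule):
        # Checkpoint one out of every `every_n` blocks of the given class.
        if isinstance(submodule, block_cls):
            counter["i"] += 1
            return counter["i"] % every_n == 0
        return False
    return check_fn

model = nn.TransformerEncoder(                      # toy stand-in for a transformer stack
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=12
)
apply_activation_checkpointing(
    model,
    checkpoint_wrapper_fn=checkpoint_wrapper,
    check_fn=make_check_fn(nn.TransformerEncoderLayer, every_n=3),
)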
Throughput and MFU, HFU computation
While we only trained the 7B model to 2T tokens, we performed numerous experiments on the other model sizes to provide the best configuration options. This is summarized in the table below for two types of infrastructure — an A100 cluster with 128 GPUs and 400Gbps inter-node interconnect, and an H100 cluster with 96 GPUs and 800Gbps inter-node interconnect.
Table 1: Model and Hardware FLOPS utilization of various model sizes on A100 and H100 GPUs
HFU numbers are computed using the PyTorch FLOP counter and the theoretical bf16 performance of A100 and H100 GPUs, whereas MFU numbers are computed using the methodology outlined in NanoGPT and the PaLM paper. We also note that the batch sizes for the larger models are intentionally kept at 2 per GPU to mimic choices made in training models of 4k sequence length, and to scale up to 512 GPUs without exceeding the popular 4M-token batch size. Beyond that, we would need tensor parallelism or sequence parallelism.
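As a rough back-of-the-envelope check of the MFU methodology mentioned above, the sketch below counts only the 6 * parameters FLOPs per token and ignores the attention term, so it slightly underestimates the 57% reported earlier; the constants are taken from this post and the published A100 bf16 spec.

# Simplified MFU estimate in the spirit of the PaLM/NanoGPT methodology.
params = 7e9                     # 7B model
tokens_per_sec_per_gpu = 3700    # throughput reported in this post
peak_bf16_flops_a100 = 312e12    # A100 theoretical bf16 peak

model_flops_per_token = 6 * params               # forward (2x) + backward (4x) matmul FLOPs per parameter
achieved_flops = tokens_per_sec_per_gpu * model_flops_per_token
mfu = achieved_flops / peak_bf16_flops_a100
print(f"approximate MFU: {mfu:.1%}")             # about 50% with this simplified count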
We note in the table above that for A100s, activation recomputation causes the MFU to reduce while the HFU increases! With the introduction of better activation checkpointing schemes, we expect MFU to increase and catch up with HFU. However, we observe that for H100s, both MFU and HFU are relatively low. We analyzed the PyTorch profile traces on H100 and observed a 10% gap due to the network “peeking” out. In addition, we hypothesize that the HBM bandwidth of H100s is the cause of the reduced HFU/MFU on H100s and of not obtaining the full 3x improvement (H100s are theoretically 3x faster than A100s, 312 vs 989 TFLOPS, but have less than 2x the HBM bandwidth of A100s, 2.0 vs 3.35 TBps). We plan to try out other configuration options like tensor parallelism to improve the knobs for the 70B model on H100s.
Model details
The loss curve for training is shown in the below figure.
Figure 1: LlamaT training loss curve
The 2T checkpoint is converted to Hugging Face format by a script provided in the repository, and we then use lm-evaluation-harness to compute key academic benchmarks and compare them against Llama2-7B by running the same harness on it. These results are captured in the table below.
Table 2: LM eval harness scores
We observe that the model performs competitively with Llama2 (bolder is better).
Training chronicles
Training was stable with no crashes, though we did observe a few hiccups:
0-200B tokens: We observed a slowdown in the iteration time (time taken to execute one training step). We stopped the job to ensure that the data loader was not causing any slowdowns and the checkpointing was performant and accurate. We did not find any issues. By this time, HSDP checkpointing code was available in PyTorch, and we took this opportunity to make the switch to PyTorch checkpointing code.
200B tokens-1.9T: We did not do any manual intervention in the job in late December. When we came back in early January, disk space had been exhausted and checkpoints were failing to be written, although the training job continued. The last known checkpoint was at 1.5T tokens.
1.5T-1.7T: We evaluated the 1.5T checkpoint with lm-evaluation-harness and discovered that the model had been trained with an extra special token between documents, because the Hugging Face tokenizer introduced a separator token and our dataloader also appended its own document separator. We modified the dataloader to eliminate the extra special token, and continued training with the modified dataloader from the 1.7T token mark onwards.
1.7T-2T: The loss initially spiked due to the change in the special tokens, but recovered within a few billion tokens. The training finished without any other manual intervention!
Key takeaways and even more speed
We demonstrated how one can use FSDP to train a model to 2T tokens with an excellent performance of 3,700 tokens/sec/GPU while producing a good quality model. As part of this exercise, we open sourced all our code for training and the knobs used to achieve this throughput. These knobs can be leveraged not only by large-scale runs but also by smaller-scale tuning runs. You can find the code here.
FSDP APIs implement the ZeRO algorithms in a PyTorch native manner and allow for tuning and training of large models. In the past, we have seen FSDP proof points (Stanford Alpaca, Hugging Face, Llama 2 recipes) on tuning a variety of LLMs (such as Meta Llama 2, from 7B to 70B) using simple training loops and achieving good throughputs and training times.
Finally, we note that there are several levers for speeding up training. We have leveraged several of them in this work and are working closely with Team PyTorch at Meta to adopt torch.compile as well as a more advanced version of selective activation recomputation with per-operator granularity. We plan to share a simple formatting code and example data to ingest into our data loader to enable others to use the code base for training of models.
Acknowledgements
There are several teams that have been involved in reaching this proof point and we would like to thank the teams across Meta and IBM. Specifically, we extend our gratitude to the PyTorch distributed team, Facebook Research and Applied AI teams that built the FSDP APIs and made enhancements based on our feedback. We also wish to thank the data team at IBM Research that curated the data corpus used in this exercise and the infrastructure team at IBM Research (especially, Claudia Misale, Shweta Salaria, and Seetharami Seelam) that optimized NCCL and network configurations. By building and leveraging all of these components, we have successfully demonstrated the LlamaT proof point.
The selective activation checkpointing was conceptualized at IBM by Linsong Chu, Davis Wertheimer, Mudhakar Srivatsa, and Raghu Ganti and implemented by Less Wright at Meta.
Special thanks to Stas Bekman and Minjia Zhang, who provided extensive feedback and helped improve the blog. Their insights have been invaluable in highlighting key aspects of optimizing the training and exploring further enhancements.
Appendix
Communication computation overlap
Another key aspect of training in a multi-node setting is the ability to overlap communication and computation. In FSDP, there are multiple opportunities for overlapping: during the FSDP unit-gathering phase of the forward pass, as well as during the backward-pass computation. Overlapping the gather of one unit with the computation of the previous unit during the forward pass, and overlapping the backward computation of one unit with the gathering of the next unit and the scattering of gradients, helps improve GPU utilization by nearly 2x. We illustrate this on the 400Gbps network interconnect with A100 80GB GPUs. In the case of HSDP, there is no inter-node traffic during the pre-fetch stage for the forward pass, and the overlap applies only to the backward gradient computation phase. Of course, HSDP is feasible only when the model can be sharded within a single node, limiting the size of models to around 30B parameters.
The below figure shows three steps in FSDP with the communication between nodes at the bottom and the compute stream at the top of the second half of the image. For the 7B model with no activation recomputation, we observe the overlap to be complete. In practice, the overlap percentage possible is 90% since the first block during forward pass and the last block during backward pass are not able to overlap.
A zoomed in view of the above three-step process is shown below for a single step. We can clearly see the granularity of the computation and communication and how they overlap in an interleaved manner.
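For reference, here is a minimal sketch of the standard FSDP arguments that control this overlap behavior; these are generic knobs rather than the exact settings used in the runs above, and the tiny model plus the torchrun/NCCL launch are placeholders.

import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, BackwardPrefetch

dist.init_process_group("nccl")                   # assumes launch via torchrun
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
model = nn.Linear(1024, 1024).cuda()              # placeholder module

fsdp_model = FSDP(
    model,
    backward_prefetch=BackwardPrefetch.BACKWARD_PRE,  # prefetch the next unit's params before its backward compute
    forward_prefetch=True,                            # issue the next all-gather while the current unit computes
    limit_all_gathers=True,                           # rate limiter: cap in-flight all-gathers to curb memory spikes
)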