# How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog

*December 2022*

In this post, I'll iteratively optimize an implementation of matrix multiplication written in CUDA. My goal is not to build a cuBLAS replacement, but to deeply understand the most important performance characteristics of the GPUs that are used for modern deep learning. This includes coalescing global memory accesses, shared memory caching and occupancy optimizations, among others. You can download the code for all kernels from GitHub. Also check out wangzyon's repo, from which I copied the benchmarking setup. This post is less polished than my normal uploads and includes many more sidenotes. I used it as a notepad for ideas and scribbles while writing the kernels. That's why I called it a worklog :)

Matrix multiplication on GPUs may currently be the most important algorithm that exists, considering it makes up almost all the FLOPs during the training and inference of large deep-learning models. So how much work is it to write a performant CUDA SGEMM from scratch? (SGEMM performs C = αAB + βC at single (=32b) precision.) I'll start with a naive kernel and step-by-step apply optimizations until we get within 95% (on a good day) of the performance of cuBLAS, NVIDIA's official matrix library. (cuBLAS at FP32, that is. In my setting, doing the matmul at TF32 or BF16 precision allows cuBLAS to use the tensor cores, which increases FLOPS by 2.5x or 3.5x. I may look into tensor cores / warp matrix functions in a future post.)

*Come work on kernels at Anthropic! We're always hiring for capable performance & kernel engineers to optimize our models on TPUs, GPUs & Trainium. Apply here!*

## Kernel 1: Naive Implementation

In the CUDA programming model, computation is ordered in a three-level hierarchy. Each invocation of a CUDA kernel creates a new grid, which consists of multiple blocks. Each block consists of up to 1024 individual threads. (These constants can be looked up in the CUDA Programming Guide.) Threads that are in the same block have access to the same shared memory region (SMEM).

The number of threads in a block can be configured using a variable normally called blockDim, which is a vector consisting of three ints. The entries of that vector specify the sizes of blockDim.x, blockDim.y and blockDim.z, as visualized below. Similarly, the number of blocks in a grid is configurable using the gridDim variable. When we launch a new kernel from the host (in accelerator lingo, host refers to the CPU and device is the accelerator, here the GPU), it creates a single grid, containing the blocks and threads as specified. From here on I'll only be talking about 2D grids and blocks, partly because the 3D structure is seldom used and because drawing in 3D is too hard.

It's important to keep in mind that the thread hierarchy we just talked about mostly concerns program correctness. For program performance, as we'll see later, it's not a good idea to treat all threads in the same block as equals.

For our first kernel, we'll use the grid, block and thread hierarchy to assign each thread a unique entry in the result matrix C. Then that thread will compute the dot product of the corresponding row of A and column of B, and write the result to C. Since each location of C is written to by only one thread, we don't need any synchronization.
We'll launch the kernel like so:

```cpp
// integer division, rounding up (the usual definition, as used throughout the repo)
#define CEIL_DIV(M, N) (((M) + (N) - 1) / (N))

// create as many blocks as necessary to map all of C
dim3 gridDim(CEIL_DIV(M, 32), CEIL_DIV(N, 32), 1);
// 32 * 32 = 1024 threads per block
dim3 blockDim(32, 32, 1);
// launch the asynchronous execution of the kernel on the device.
// The function call returns immediately on the host.
sgemm_naive<<<gridDim, blockDim>>>(M, N, K, alpha, A, B, beta, C);
```

CUDA code is written from a single-thread perspective. In the code of the kernel, we access the blockIdx and threadIdx built-in variables. These return different values based on the thread that's accessing them. In our example, threadIdx.x and threadIdx.y will vary from 0 to 31 based on the position of the thread within its block. Similarly, blockIdx.x and blockIdx.y will vary from 0 up to CEIL_DIV(M, 32) and CEIL_DIV(N, 32) respectively, based on the position of the thread's block in the grid. We'll do a lot of indexing into strided in-memory representations of matrices; Edward Yang's post on PyTorch internals contains a good explanation of strided tensors.

```cpp
__global__ void sgemm_naive(int M, int N, int K, float alpha, const float *A,
                            const float *B, float beta, float *C) {
  // compute position in C that this thread is responsible for
  const uint x = blockIdx.x * blockDim.x + threadIdx.x;
  const uint y = blockIdx.y * blockDim.y + threadIdx.y;

  // `if` condition is necessary for when M or N aren't multiples of 32.
  if (x < M && y < N) {
    float tmp = 0.0;
    for (int i = 0; i < K; ++i) {
      tmp += A[x * K + i] * B[i * N + y];
    }
    // C = α*(A@B)+β*C
    C[x * N + y] = alpha * tmp + beta * C[x * N + y];
  }
}
```

To visualize this simple kernel: if the size of the matrix is not divisible by the size of the block, we'll have to launch extra blocks to process the remainder. For example, in the picture below, we'll create 9 blocks of equal size, but only 4 of those fully utilize their 1024 threads. This artifact is called tile quantization, and appears whenever we try to map a fixed-sized volume across a variable-sized input.

This kernel takes about 0.5s to process three 4092² fp32 matrices on my A6000 GPU. Let's do some non-implementation-specific calculations:

### Lower Bounding the Fastest Possible Runtime

For a matrix multiplication of two 4092² matrices, followed by an addition of a 4092² matrix (to make it a GEMM):

- Total FLOPs: 2*4092³ + 4092² ≈ 137 GFLOP
- Minimum data to read: 3 * 4092² * 4B ≈ 201 MB
- Minimum data to store: 4092² * 4B ≈ 67 MB

So 268MB is the absolute minimum of memory that any implementation would have to transfer from/to global GPU memory, assuming it has a big enough cache. (Global memory is the GPU's main memory region. If Nvidia sells you a GPU advertised with 80GB of memory and 1TB/s of bandwidth, they're talking about the capacity and bandwidth of global memory. Later we'll talk about other memory regions on the GPU, like the shared memory, which is physically distinct and has very different performance characteristics.) The cuBLAS kernel loads a total of 500MB of GMEM during the whole calculation. We'll see later how increasing arithmetic intensity allows us to achieve an access volume that low.

Let's calculate some upper bounds on kernel performance. The GPU is advertised with 30 TFLOPs/s of fp32 compute throughput and 768 GB/s of global memory bandwidth. (Reminder that peak FLOPs is a reductionist metric, since it depends on the instruction mix. There's no way you'd reach those 30 TFLOPs/s if your FLOP of choice is DIV. However, since matmul mainly uses FMA instructions, which tend to be the fastest FLOPs, we have a good chance of actually getting close to that peak value.)
Similar story for the bandwidth: peak bandwidth can only be reached if the access pattern suits the hardware. If we achieved those numbers, we'd need 4.5ms for the calculation and 0.34ms for the memory transfers. So in our napkin math, the calculation takes ~10x more time than the memory accesses. This means our final optimized kernel will be compute-bound, as long as we end up having to transfer less than 10x the absolute minimum memory volume of 268MB. (The A6000 is advertised with 309 TFLOPs/s of tensor-core performance. If we could use tensor cores for our fp32 matmul, the calculation would only take 0.44ms, and an optimized kernel doing a 4092² matrix multiplication would almost surely still be memory-bound. This puts into perspective just how fast the tensor cores are.)

Now that we've calculated some lower bounds for our fp32 GEMM calculation, let's get back to the kernel at hand, to figure out why it's so much slower than it could be.

### Memory Access Pattern of the Naive Kernel

In our kernel, two threads in the same block with ThreadIds (0, 0) and (0, 1) will load the same column of B but different rows of A. If we assume the worst case of zero caching, then each thread has to load 2*4092+1 floats from global memory. As we have 4092² threads in total, this would result in 548GB of memory traffic. Below is a visualization of the memory access pattern of our naive kernel, taking two threads A (red) and B (green) as an example.

So to recap: when I run this kernel on an A6000 GPU, it achieves ~300 GFLOPs when multiplying two 4092x4092 float32 matrices. Pretty bad, considering that the A6000 is advertised as being able to achieve almost 30 TFLOPs. (Just for comparison, 300 GFLOPs is also roughly the performance achieved by the optimized BLAS library on the 2015 Haswell CPU that I used in my earlier post on CPU matmul.) So how can we start to make this faster? One way is to optimize the memory access pattern of our kernel such that global memory accesses can be coalesced (=combined) into fewer accesses.

## Kernel 2: Global Memory Coalescing

Before we get into global memory coalescing, we need to learn about the concept of a warp. For execution, the threads of a block are grouped into so-called warps, consisting of 32 threads. A warp is then assigned to a warp scheduler, which is the physical core that executes the instructions. (Before the Volta architecture, it used to be the case that all threads of a warp were fed from the same instruction stream. On a branch, the threads that didn't take the branch were deactivated using the so-called active mask. However, since Volta, it's no longer a good idea to rely on this 'warp-synchronous' behaviour, as instructions from different branches may be interleaved even for the same threads within a warp.) There are four warp schedulers per multiprocessor. The grouping into warps happens based on a consecutive threadId. If we set blockDim to be multi-dimensional, then the threadId is calculated like so:

`threadId = threadIdx.x + blockDim.x * (threadIdx.y + blockDim.y * threadIdx.z)`

Then, threads with neighbouring threadId become part of the same warp. Below I tried to illustrate this, using a smaller "warpsize" of 8 threads (real warps always contain 32 threads). (I like to think of the three dimensions x, y, z of threadId as being "column-major", since the first dimension x is the one that's contiguous in "warpspace". I don't know if others use that term, but it makes the concept clearer to me.)
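To make the grouping concrete, here's a tiny illustrative kernel (my own sketch, not from the original post) that computes the linear threadId from the formula above and derives which warp each thread lands in. warpSize is a built-in variable, equal to 32 on current NVIDIA GPUs.

```cpp
#include <cstdio>

// Illustrative only: print, once per warp, which warp each group of 32 consecutive
// threadIds maps to.
__global__ void printWarpMapping() {
  const uint threadId =
      threadIdx.x + blockDim.x * (threadIdx.y + blockDim.y * threadIdx.z);
  const uint warpId = threadId / warpSize; // which warp this thread belongs to
  const uint laneId = threadId % warpSize; // position of the thread inside its warp
  if (laneId == 0) {
    printf("block (%u,%u): linear threadId %u starts warp %u\n", blockIdx.x,
           blockIdx.y, threadId, warpId);
  }
}
```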
The concept of a warp is relevant for this second kernel, as sequential memory accesses by threads that are part of the same warp can be grouped and executed as one. This is referred to as global memory coalescing. It's the most important thing to keep in mind when optimizing a kernel's GMEM accesses toward achieving peak bandwidth.

Below is an example, where consecutive memory accesses by threads in the same warp are grouped, allowing each warp to execute 8 memory accesses using only 2 32B loads.

In reality, the GPU supports 32B, 64B and 128B memory accesses. So, if each thread is loading a 32bit float from global memory, the warp scheduler (probably the MIO) can coalesce this 32*4B=128B load into a single transaction. This is only possible if the floats loaded are consecutive in memory, and if access is aligned. (In that way, optimizing for global memory coalescing on a GPU has a lot of similarities to optimizing for cache line utilization on a CPU.) Interestingly, to allow coalescing, the threads within a warp have to access consecutive addresses, but the accesses don't have to be consecutive within-warp. Illustrated below.

If they aren't, or if access cannot be coalesced for some other reason, then the GPU will execute as many 32B loads as necessary to fetch all floats, leading to a lot of wasted bandwidth. Profiling our naive kernel, we can observe the detrimental effect of non-coalesced access: we achieve only 15GB/s of GMEM throughput.

Looking back at the previous kernel, we assigned threads their entry of C like so:

```cpp
const uint x = blockIdx.x * blockDim.x + threadIdx.x;
const uint y = blockIdx.y * blockDim.y + threadIdx.y;
```

Hence, threads of the same warp (those with consecutive threadIdx.x) were loading the rows of A non-consecutively from memory. The naive kernel's pattern of accessing the memory of A looked more like the first illustration below; to enable coalescing, we can change how we assign positions of the result matrix C to threads. This change in the global memory access pattern is illustrated below as well.

To implement this, we only need to change the first two lines:

```cpp
const int x = blockIdx.x * BLOCKSIZE + (threadIdx.x / BLOCKSIZE);
const int y = blockIdx.y * BLOCKSIZE + (threadIdx.x % BLOCKSIZE);

if (x < M && y < N) {
  float tmp = 0.0;
  for (int i = 0; i < K; ++i) {
    tmp += A[x * K + i] * B[i * N + y];
  }
  C[x * N + y] = alpha * tmp + beta * C[x * N + y];
}
```

And we call it like so. (This wasn't immediately obvious to me, but enabling GMEM coalescing changes nothing in the assembly, see the SASS output on Godbolt. Access coalescing is done at kernel runtime by the hardware. This makes sense since coalescing requires aligned access, which cannot be guaranteed at compile time, as we pass the matrix pointers as function arguments. Also: the assembly features partial unrolling of our inner loop even though the loop count K is not known at compile time. Exciting!)

```cpp
// gridDim stays the same
dim3 gridDim(CEIL_DIV(M, 32), CEIL_DIV(N, 32));
// make blockDim 1-dimensional, but don't change the number of threads
dim3 blockDim(32 * 32);
sgemm_coalescing<<<gridDim, blockDim>>>(M, N, K, alpha, A, B, beta, C);
```

Global memory coalescing increases memory throughput from 15GB/s to 110GB/s. Performance reaches 2000 GFLOPS, a big improvement compared to the 300 GFLOPS of the first, naive kernel. For the next kernel, we'll use the GPU's fast on-chip memory, called shared memory, to cache data that will be re-used.
## Kernel 3: Shared Memory Cache-Blocking

Next to the large global memory, a GPU has a much smaller region of memory that is physically located on the chip, called shared memory (SMEM). Physically, there's one shared memory per SM. (Here's a helpful illustration of the memory hierarchy on an A100 GPU.) Logically, this shared memory is partitioned among the blocks. This means that a thread can communicate with the other threads in its block via the shared memory chunk. On my A6000 GPU, each block has access to a maximum of 48KB of shared memory. (The amount of SMEM is configurable, by trading off a larger shared memory for a smaller L1 cache. For specifics, see the compute capability documentation. Also, it's possible to use more than 48KB of SMEM per block by utilizing dynamic shared memory.)

As the shared memory is located on-chip, it has a much lower latency and higher bandwidth than global memory. I couldn't find good benchmark results for the Ampere architecture, but for Volta (released in 2017) the benchmarks performed in this paper report 750GiB/s of global memory bandwidth and 12,080GiB/s of shared memory bandwidth. (It doesn't look like these numbers have changed much since Volta. Nvidia reports ~750GB/s of max GMEM bandwidth for my A6000, an Ampere GPU.)

So for this next kernel, we'll load a chunk of A and a chunk of B from global memory into shared memory. Then we'll perform as much work as possible on the two chunks, with each thread still being assigned one entry of C. We'll move the chunks along the columns of A and the rows of B, performing partial sums on C until the result is computed. This is illustrated below.

The important parts of the code are below, with variable names corresponding to the plot above. (In general, I didn't write the code to work for arbitrary sizes of M, N and K, as the condition checking introduces a lot of clutter and isn't very interesting. To make sure the kernel works correctly, I test it with random data and a few different matrix sizes by comparing to cuBLAS.)

```cpp
// advance pointers to the starting positions
A += cRow * BLOCKSIZE * K;                    // row=cRow, col=0
B += cCol * BLOCKSIZE;                        // row=0, col=cCol
C += cRow * BLOCKSIZE * N + cCol * BLOCKSIZE; // row=cRow, col=cCol

float tmp = 0.0;
// the outer loop advances A along the columns and B along
// the rows until we have fully calculated the result in C.
for (int bkIdx = 0; bkIdx < K; bkIdx += BLOCKSIZE) {
  // Have each thread load one of the elements in A & B from
  // global memory into shared memory.
  // Make the threadCol (=threadIdx.x) the consecutive index
  // to allow global memory access coalescing
  As[threadRow * BLOCKSIZE + threadCol] = A[threadRow * K + threadCol];
  Bs[threadRow * BLOCKSIZE + threadCol] = B[threadRow * N + threadCol];

  // block threads in this block until cache is fully populated
  __syncthreads();

  // advance pointers onto next chunk
  A += BLOCKSIZE;
  B += BLOCKSIZE * N;

  // execute the dotproduct on the currently cached block
  for (int dotIdx = 0; dotIdx < BLOCKSIZE; ++dotIdx) {
    tmp += As[threadRow * BLOCKSIZE + dotIdx] *
           Bs[dotIdx * BLOCKSIZE + threadCol];
  }
  // need to sync again at the end, to avoid faster threads
  // fetching the next block into the cache before slower threads are done
  __syncthreads();
}
C[threadRow * N + threadCol] = alpha * tmp + beta * C[threadRow * N + threadCol];
```

This kernel achieves ~2200 GFLOPS, a 50% improvement over the previous version. (There's only a 50% improvement partly because our previous kernel already had pretty good L1 cache hit rates.)
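For reference, here's roughly the setup that the listing above assumes. The names match the listing, but treat this as a sketch rather than the exact code from the repo:

```cpp
// which BLOCKSIZE x BLOCKSIZE output tile of C this block is responsible for
const uint cRow = blockIdx.x;
const uint cCol = blockIdx.y;

// SMEM buffers for the current chunk of A and B (BLOCKSIZE = 32)
__shared__ float As[BLOCKSIZE * BLOCKSIZE];
__shared__ float Bs[BLOCKSIZE * BLOCKSIZE];

// the block is launched with BLOCKSIZE*BLOCKSIZE threads in a 1D layout; making the
// column index the fast-changing one keeps the GMEM loads coalesced
const uint threadCol = threadIdx.x % BLOCKSIZE;
const uint threadRow = threadIdx.x / BLOCKSIZE;
```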
We're still far away from hitting the ~30 TFLOPs that the GPU can provide. This is obvious from the roofline plot below. (Notice how we're achieving a higher memory bandwidth than cuBLAS. But because we're doing much less work per byte loaded from memory, i.e. lower arithmetic intensity, overall performance is worse.)

At a CHUNKSIZE (=BLOCKSIZE) of 32, this uses 2*32*32*4B = 8KB of shared memory space. (This info can also be obtained by compiling with --ptxas-options=-v, which outputs: Used 37 registers, 8192 bytes smem, 400 bytes cmem[0].) My A6000 GPU has a maximum of 48KB of shared memory space available for each block, so we're far away from hitting that limit. This is not necessarily a problem, as there are downsides to increasing per-block shared-memory usage: each multiprocessor (SM) has a maximum of 100KB of SMEM available. This means that if we modified our kernel to use the full 48KB of SMEM available, each SM could only keep two blocks loaded at the same time. In CUDA parlance, increasing per-block SMEM utilization can decrease occupancy. Occupancy is defined as the ratio between the number of active warps per SM and the maximum possible number of active warps per SM.

High occupancy is useful because it allows us to hide the high latency of our operations, by having a bigger pool of issue-able instructions available. (On GPUs, math operations like FMA have a latency of 4 cycles, which is equal to 2.6ns at a 1.5GHz clock. Compare this to a recent x86 CPU, where FMA has a 6-cycle latency, or 1.8ns at a 3.5GHz clock.) There are three main limits to keeping more active blocks loaded on an SM: register count, warp count and SMEM capacity. Let's do an example calculation for our current kernel.

### Occupancy Calculation for Kernel 3

The relevant hardware stats for my GPU can be obtained from the cudaGetDeviceProperties API (Multiprocessors are the SMs we talked about earlier). (The amount of shared memory is configurable by using a feature called SharedMemoryCarveout. The so-called unified data cache is partitioned into L1 cache and shared memory, so we can trade off less shared memory for more L1 cache.) The resource demands for our kernel are: 1024 threads per block, 37 registers per thread and 8192B of SMEM per block.

Work is scheduled onto the SMs at block granularity. Each SM will load more blocks, as long as it has enough resources to accommodate them. (I found lots of official and unofficial occupancy calculators, but no official formulae for how to calculate occupancy. The results are correct, which I checked using NVIDIA's official tools, but there may be small errors, e.g. in the application of rounding.)

So this kernel is limited by the number of threads per block and by the number of registers per thread. We cannot load more than one block per SM, giving us a final occupancy of 32 active warps / 48 max active warps = 66%.
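To spell out the arithmetic behind that number, here's a small host-side sketch. The per-SM limits are the ones quoted in this section plus the standard values for this GPU generation; the register allocation granularity is my assumption, so treat the details as approximate:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
  // per-SM limits (A6000, compute capability 8.6)
  const int maxThreadsPerSM = 1536, maxWarpsPerSM = 48;
  const int regsPerSM = 65536, smemPerSM = 100 * 1024;

  // kernel 3 resource usage: 32*32 threads/block, 37 regs/thread, 8192B SMEM/block
  const int threadsPerBlock = 1024, regsPerThread = 37, smemPerBlock = 8192;
  const int warpsPerBlock = threadsPerBlock / 32;

  const int blocksByThreads = maxThreadsPerSM / threadsPerBlock;    // = 1
  const int blocksBySmem = smemPerSM / smemPerBlock;                // = 12
  // registers are allocated per warp, rounded up to a 256-register granularity (assumption)
  const int regsPerWarp = ((regsPerThread * 32 + 255) / 256) * 256; // = 1280
  const int blocksByRegs = (regsPerSM / regsPerWarp) / warpsPerBlock; // 51 warps -> 1 block

  const int blocksPerSM = std::min({blocksByThreads, blocksBySmem, blocksByRegs});
  printf("%d block/SM -> %d of %d warps active -> %d%% occupancy\n", blocksPerSM,
         blocksPerSM * warpsPerBlock, maxWarpsPerSM,
         100 * blocksPerSM * warpsPerBlock / maxWarpsPerSM); // 66%
}
```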
A 66% occupancy is not too bad, so this doesn't explain why our kernel runs so slow. (We know that it's possible to optimize our kernel towards high arithmetic intensity (AI) by observing that cuBLAS achieves ~245 FLOPs/Byte. Both at very high and very low AI, high occupancy is not needed to achieve peak throughput. For more details on this, see V. Volkov's PhD thesis and its coverage of "cusp behaviour".)

Looking at the profiler gives us some hints. First, if we look at the mix of executed instructions, most of them are memory loads. (LDS are shared memory loads. FMA is our fused multiply-add. IADD3 is a "3-input integer addition", which we need for moving the pointers along the K dimension.) Our inner loop looks like this in PTX (Godbolt link):

```
ld.shared.f32 %f91, [%r8+3456];
ld.shared.f32 %f92, [%r7+108];
fma.rn.f32 %f93, %f92, %f91, %f90;
```

That's not good, given that a memory load is bound to have a higher latency than a simple FMA, and given that we know our kernel should be compute-bound. We see this effect when looking at the profiler's sampling of warp states, which quantifies how many cycles were spent in each state per executed instruction. (Stall Not Selected means that the warp was eligible to be scheduled, but the scheduler selected another eligible warp instead. This adds evidence to our earlier hypothesis that occupancy is currently not a problem.) The meaning of the states is documented in the Kernel Profiling Guide. For Stall MIO Throttle it reads:

> Warp was stalled waiting for the MIO (memory input/output) instruction queue to be not full. This stall reason is high in cases of extreme utilization of the MIO pipelines, which include special math instructions, dynamic branches, as well as shared memory instructions.

We're not using special math instructions or dynamic branches, so it's clear that we're stalling while waiting for our SMEM accesses to return. So how do we make our kernel issue fewer SMEM instructions? One way is to have each thread compute more than one output element, which allows us to perform more of the work in registers and rely less on SMEM.

## Kernel 4: 1D Blocktiling for Calculating Multiple Results per Thread

This next kernel works like our last kernel, but adds a new inner loop for calculating multiple C entries per thread. We now use an SMEM cache size of BM*BK + BN*BK = 64*8 + 64*8 = 1024 floats, for a total of 4KB per block. Below is a visualization; I have highlighted two of the threads and the values they access in the inner loop in orange and red.

All of the important changes for this kernel happen in the inner loop. The loading from GMEM to SMEM stays largely the same as before. Let's have a look (Godbolt link):

```cpp
// allocate thread-local cache for results in registerfile
float threadResults[TM] = {0.0};

// outer loop over block tiles
for (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {
  // populate the SMEM caches (same as before)
  As[innerRowA * BK + innerColA] = A[innerRowA * K + innerColA];
  Bs[innerRowB * BN + innerColB] = B[innerRowB * N + innerColB];
  __syncthreads();

  // advance blocktile for outer loop
  A += BK;
  B += BK * N;

  // calculate per-thread results
  for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
    // we make the dotproduct loop the outside loop, which facilitates
    // reuse of the Bs entry, which we can cache in a tmp var.
    float Btmp = Bs[dotIdx * BN + threadCol];
    for (uint resIdx = 0; resIdx < TM; ++resIdx) {
      threadResults[resIdx] += As[(threadRow * TM + resIdx) * BK + dotIdx] * Btmp;
    }
  }
  __syncthreads();
}
```

This kernel achieves ~8600 GFLOPs, 2.2x faster than our previous kernel. Let's calculate how many memory accesses each thread performed. In our previous kernel, where each thread calculated one result, there were K/32 iterations of the outer loop, each with 2 GMEM loads and 32*2 SMEM loads, so roughly K/16 GMEM accesses and 2*K SMEM accesses per result. In our new kernel, where each thread calculates eight results, there are K/8 iterations of the outer loop, each with 2 GMEM loads and 8*(1+8) SMEM loads, which works out to roughly K/32 GMEM accesses and (9/8)*K SMEM accesses per result. As expected, we now spend much fewer cycles per instruction stalling due to memory pressure. (Careful: the axis has changed compared to the previous plot.)

### Sidenote on Compiler Optimizations

Above we explicitly cached the entry of B into Btmp and reordered the two inner loops for efficiency.
If we don't do that, then the code looks like this:

```cpp
for (uint resIdx = 0; resIdx < TM; ++resIdx) {
  for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
    threadResults[resIdx] +=
        As[(threadRow * TM + resIdx) * BK + dotIdx] * Bs[dotIdx * BN + threadCol];
  }
}
```

Interestingly, this has no adverse effect on performance. This is surprising since our inner two loops now incur BK (=8) * TM (=8) * 2 = 128 SMEM accesses, instead of the previous 72. Looking at the assembly (Godbolt link) has the answer:

```
// first inner-most loop
ld.shared.f32 %f45, [%r9];
ld.shared.f32 %f46, [%r8];
fma.rn.f32 %f47, %f46, %f45, %f212;
ld.shared.f32 %f48, [%r9+256];
ld.shared.f32 %f49, [%r8+4];
fma.rn.f32 %f50, %f49, %f48, %f47;
ld.shared.f32 %f51, [%r9+512];
ld.shared.f32 %f52, [%r8+8];
fma.rn.f32 %f53, %f52, %f51, %f50;
ld.shared.f32 %f54, [%r9+768];
ld.shared.f32 %f55, [%r8+12];
fma.rn.f32 %f56, %f55, %f54, %f53;
ld.shared.f32 %f57, [%r9+1024];
ld.shared.f32 %f58, [%r8+16];
fma.rn.f32 %f59, %f58, %f57, %f56;
ld.shared.f32 %f60, [%r9+1280];
ld.shared.f32 %f61, [%r8+20];
fma.rn.f32 %f62, %f61, %f60, %f59;
ld.shared.f32 %f63, [%r9+1536];
ld.shared.f32 %f64, [%r8+24];
fma.rn.f32 %f65, %f64, %f63, %f62;
ld.shared.f32 %f66, [%r9+1792];
ld.shared.f32 %f67, [%r8+28];
fma.rn.f32 %f212, %f67, %f66, %f65;
// second inner-most loop
ld.shared.f32 %f68, [%r8+32];
fma.rn.f32 %f69, %f68, %f45, %f211;
ld.shared.f32 %f70, [%r8+36];
fma.rn.f32 %f71, %f70, %f48, %f69;
ld.shared.f32 %f72, [%r8+40];
fma.rn.f32 %f73, %f72, %f51, %f71;
ld.shared.f32 %f74, [%r8+44];
fma.rn.f32 %f75, %f74, %f54, %f73;
ld.shared.f32 %f76, [%r8+48];
fma.rn.f32 %f77, %f76, %f57, %f75;
ld.shared.f32 %f78, [%r8+52];
fma.rn.f32 %f79, %f78, %f60, %f77;
ld.shared.f32 %f80, [%r8+56];
fma.rn.f32 %f81, %f80, %f63, %f79;
ld.shared.f32 %f82, [%r8+60];
fma.rn.f32 %f211, %f82, %f66, %f81;
// ... continues like this for inner-loops 3-8 ...
```

The compiler unrolls both loops (it can, since the loop count is known at compile time) and then eliminates the repeated SMEM loads of the Bs entries, so we end up with the same amount of SMEM accesses as our optimized CUDA code.

When the PTX is compiled to SASS, the SMEM loads from Bs are vectorized. (This already hints at an optimization we'll perform later: transposing As such that we can also vectorize those loads.)

```
LDS     R26, [R35.X4+0x800]  // a 32b load from As
LDS.128 R8,  [R2]            // a 128b load from Bs
LDS.128 R12, [R2+0x20]
LDS     R24, [R35.X4+0x900]
LDS.128 R20, [R2+0x60]
LDS     R36, [R35.X4+0xb00]
LDS.128 R16, [R2+0x40]
LDS.128 R4,  [R2+0x80]
LDS     R38, [R35.X4+0xd00]
```

### Areas of Improvement: Arithmetic Intensity

Our current kernel still suffers from the same stalling-for-memory problem as kernel 3, just to a lesser extent. So we'll just apply the same optimization again: computing even more results per thread. The main reason this makes our kernel run faster is that it increases arithmetic intensity, defined as the number of FLOPs executed per byte transferred (load + store!) between GMEM and SMEM. Below I tried to make it more immediately obvious why calculating more results per thread raises arithmetic intensity: it's more efficient to calculate a square of results per thread than a column of results, because we can share more of the inputs. In conclusion, all our kernels perform the same number of FLOPs, but we can reduce the number of GMEM accesses by calculating more results per thread. We'll continue optimizing arithmetic intensity for as long as we're still memory bound.
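One way to see the sharing effect is to count, for a single dotIdx step, how many inputs a thread loads against how many FLOPs it gets to execute on them. Here's a small sketch with my own numbers; it counts per-thread loads from SMEM into registers, which follows the same logic as the GMEM-to-SMEM argument above:

```cpp
#include <cstdio>

int main() {
  const int TM = 8, TN = 8;
  // column of TM results: per dotIdx step, load TM entries of As + 1 entry of Bs
  // and execute TM FMAs (= 2*TM FLOPs)
  const double flopsPerByteColumn = (2.0 * TM) / ((TM + 1) * 4.0);
  // square of TM*TN results: load TM + TN entries and execute TM*TN FMAs
  const double flopsPerByteSquare = (2.0 * TM * TN) / ((TM + TN) * 4.0);
  printf("column: %.2f FLOPs/B, square: %.2f FLOPs/B\n", flopsPerByteColumn,
         flopsPerByteSquare); // ~0.44 vs 2.00: the square tile does ~4.5x more work per byte
}
```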
## Kernel 5: Increasing Arithmetic Intensity via 2D Blocktiling

The basic idea for kernel 5 is to compute a grid of 8*8 elements of C per thread. The first stage of the kernel is for all threads to work together to populate the SMEM cache. We'll have each thread load multiple elements. The code looks like so (a graphical representation of the GMEM loading is shown below):

```cpp
for (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {
  As[(innerRowA + loadOffset) * BK + innerColA] =
      A[(innerRowA + loadOffset) * K + innerColA];
}
for (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {
  Bs[(innerRowB + loadOffset) * BN + innerColB] =
      B[(innerRowB + loadOffset) * N + innerColB];
}
__syncthreads();
```

Now that the SMEM cache is populated, we have each thread multiply its relevant SMEM entries and accumulate the result into local registers. Below I illustrated the (unchanged) outer loop along the input matrices, and the three inner loops for the dot product and the TN and TM dimensions.

The interesting parts of the code look like this (Godbolt link):

```cpp
// allocate thread-local cache for results in registerfile
float threadResults[TM * TN] = {0.0};
// register caches for As and Bs
float regM[TM] = {0.0};
float regN[TN] = {0.0};

// outer-most loop over block tiles
for (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {
  // populate the SMEM caches
  for (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {
    As[(innerRowA + loadOffset) * BK + innerColA] =
        A[(innerRowA + loadOffset) * K + innerColA];
  }
  for (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {
    Bs[(innerRowB + loadOffset) * BN + innerColB] =
        B[(innerRowB + loadOffset) * N + innerColB];
  }
  __syncthreads();

  // advance blocktile
  A += BK;     // move BK columns to right
  B += BK * N; // move BK rows down

  // calculate per-thread results
  for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
    // load relevant As & Bs entries into registers
    for (uint i = 0; i < TM; ++i) {
      regM[i] = As[(threadRow * TM + i) * BK + dotIdx];
    }
    for (uint i = 0; i < TN; ++i) {
      regN[i] = Bs[dotIdx * BN + threadCol * TN + i];
    }
    // perform outer product on register cache, accumulate
    // into threadResults
    for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {
      for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {
        threadResults[resIdxM * TN + resIdxN] += regM[resIdxM] * regN[resIdxN];
      }
    }
  }
  __syncthreads();
}
```

In the inner loop, we reduce the number of SMEM accesses by making dotIdx the outer loop, and explicitly loading the values we need for the two inner loops into registers. Below is a drawing of the dotIdx loop across time, to visualize which SMEM entries get loaded into thread-local registers at each step. (I had to reduce some dimensions to make it easier to draw. In the kernel: BK=TM=TN=8.)

Resulting performance: 16 TFLOPs, another 2x improvement. Let's repeat the memory access calculation: we're now calculating TM*TN = 8*8 = 64 results per thread. Slowly, performance is reaching acceptable levels; however, warp stalls due to memory pipeline congestion are still too frequent. For kernel 6 we'll take two measures to try to improve that: transposing As to enable auto-vectorization of SMEM loads, and promising the compiler alignment on the GMEM accesses.
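Before moving on, here's roughly the index setup that the kernel 5 listing assumes. The names match the listing, but this is a sketch rather than the exact repo code:

```cpp
// with BM=BN=128, BK=8, TM=TN=8 each block has (BM*BN)/(TM*TN) = 256 threads
const uint numThreadsBlocktile = (BM * BN) / (TM * TN);

// position of this thread's TM x TN mini-tile inside the block's BM x BN tile of C
const uint threadCol = threadIdx.x % (BN / TN);
const uint threadRow = threadIdx.x / (BN / TN);

// indices for the GMEM -> SMEM loads; consecutive threads get consecutive columns so
// the loads stay coalesced, and strideA/strideB say how many rows all threads
// together cover per iteration of the load loop
const uint innerRowA = threadIdx.x / BK;
const uint innerColA = threadIdx.x % BK;
const uint strideA = numThreadsBlocktile / BK;
const uint innerRowB = threadIdx.x / BN;
const uint innerColB = threadIdx.x % BN;
const uint strideB = numThreadsBlocktile / BN;
```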
## Kernel 6: Vectorize SMEM and GMEM Accesses

The first optimization that I already hinted at earlier is to transpose As. This will allow us to load from As using vectorized SMEM loads (LDS.128 in SASS). Below is the same visualization of the three inner loops as for kernel 5, but now with As transposed in memory.

Looking at the assembly (Godbolt link), we see that loading As into the registers, which used to be a 32b LDS load, is now also a 128b LDS.128 load, just like it had already been for Bs. This gives us a 500 GFLOPs speedup, or ~3%.

Next, we'll vectorize all loads and stores from/to GMEM using vector datatypes, namely float4. The code looks like this (Godbolt link for the full kernel):

```cpp
float4 tmp = reinterpret_cast<float4 *>(&A[innerRowA * K + innerColA * 4])[0];
// transpose A during the GMEM to SMEM transfer
As[(innerColA * 4 + 0) * BM + innerRowA] = tmp.x;
As[(innerColA * 4 + 1) * BM + innerRowA] = tmp.y;
As[(innerColA * 4 + 2) * BM + innerRowA] = tmp.z;
As[(innerColA * 4 + 3) * BM + innerRowA] = tmp.w;

reinterpret_cast<float4 *>(&Bs[innerRowB * BN + innerColB * 4])[0] =
    reinterpret_cast<float4 *>(&B[innerRowB * N + innerColB * 4])[0];
__syncthreads();
```

This leads to the 32b GMEM load and store instructions (LDG.E and STG.E) being replaced with 128b counterparts (LDG.E.128 and STG.E.128). Initially, I was confused as to why running this:

```cpp
reinterpret_cast<float4 *>(&Bs[innerRowB * BN + innerColB * 4])[0] =
    reinterpret_cast<float4 *>(&B[innerRowB * N + innerColB * 4])[0];
```

would be any faster than just manually unrolling the access (or using pragma unroll):

```cpp
Bs[innerRowB * BN + innerColB * 4 + 0] = B[innerRowB * N + innerColB * 4 + 0];
Bs[innerRowB * BN + innerColB * 4 + 1] = B[innerRowB * N + innerColB * 4 + 1];
Bs[innerRowB * BN + innerColB * 4 + 2] = B[innerRowB * N + innerColB * 4 + 2];
Bs[innerRowB * BN + innerColB * 4 + 3] = B[innerRowB * N + innerColB * 4 + 3];
```

Shouldn't the compiler just be able to coalesce the second version and also generate 128b loads? I think the reason is that the compiler has no way to verify that the float* B pointer that is passed to the kernel is 128b aligned, which would be a requirement for using LDG.E.128. So the reinterpret_cast's only purpose is to promise the compiler that the float* B pointer will be aligned. (Compare this to SMEM loads, where the compiler automatically generates vectorized loads because that memory is not user-managed.)

Kernel 6 achieves 19 TFLOPs. The profiler still shows a bunch of problem areas and optimization opportunities: we're running into shared-memory bank conflicts (which cuBLAS avoids), our occupancy is higher than necessary, and we haven't implemented any double buffering (which the CUTLASS docs seem to suggest is pretty useful).

But before we get to those, let's cover some more low-hanging fruit: autotuning the kernel's parameters.

## Kernel 9: Autotuning

(I skipped kernels 7 and 8, which I wrote while figuring out how to best eliminate shared memory bank conflicts. They eliminate the conflicts but were overall still slower, so I won't cover them here.)

We've accumulated a total of five template parameters: BM, BN, BK, TM and TN. For kernel 6, these were set to BM=BN=128 and BK=TM=TN=8. I wrote a bash script that searches through all sensible combinations and benchmarks their runtime. This required making sure that every combination in the search actually yields a valid, correct kernel, and the necessary modifications to the code ended up taking quite some time to implement.

It turns out that the optimal parameters vary quite a bit depending on the GPU model. (I guess that's why compilers like Triton provide routines for autotuning. I wonder how this works for cuBLAS; they probably store a precomputed mapping from {GPU type, matrix size, dtype, …} to the optimal GEMM implementation inside the cuBLAS binary.)
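As an illustration of what the benchmarking side of such a search can look like, here's my own sketch (not the author's script); the templated sgemm kernel is a stand-in for the real one, and one parameter combination is timed with CUDA events:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

#define CEIL_DIV(M, N) (((M) + (N) - 1) / (N)) // as in kernel 1

// stand-in for the real templated kernel; body omitted
template <int BM, int BN, int BK, int TM, int TN>
__global__ void sgemm(int M, int N, int K, float alpha, const float *A,
                      const float *B, float beta, float *C) {}

// time a single parameter combination; the search would call this for every
// valid {BM, BN, BK, TM, TN} tuple and keep the fastest one
template <int BM, int BN, int BK, int TM, int TN>
float timeConfig(int M, int N, int K, const float *A, const float *B, float *C) {
  dim3 grid(CEIL_DIV(M, BM), CEIL_DIV(N, BN));
  dim3 block((BM * BN) / (TM * TN));
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaEventRecord(start);
  sgemm<BM, BN, BK, TM, TN><<<grid, block>>>(M, N, K, 1.0f, A, B, 0.0f, C);
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);
  float ms = 0.0f;
  cudaEventElapsedTime(&ms, start, stop);
  return ms;
}
```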
On my A6000, BM=BN=128, BK=16, TM=TN=8 increased performance by 5%, from 19 to 20 TFLOPs. On an A100 SXM4 40GB, that same configuration reached 12 TFLOPs, 6% worse than the optimal setting found by the autotuner (BM=BN=64, BK=16, TM=TN=4), which reached 12.6 TFLOPs. (The A100 has worse fp32 performance than the A6000, which is why the FLOPs numbers are lower; cuBLAS reaches 14.7 TFLOPs on the A100. Nvidia rates the A100 at 19.5 TFLOPs of fp32 and the A6000 at 38.7 TFLOPs.) I can't explain why these specific parameters end up producing the optimal performance. Autotuning works, and every high-performance library uses it, but it also feels very unsatisfying. (I'm sure that with enough time, enough access to low-level performance counters and some facetime with Nvidia engineers, I'd eventually figure it out. It's good to have a strong belief that computers can be understood.)

## Kernel 10: Warptiling

Currently, our loop structure consists of an outer loop over the block tiles along K, followed by the per-thread loops over the dot product and the thread tile. We'll now add another hierarchy of tiling, in between our blocktiling and threadtiling loops: warptiling. Warptiling is somewhat confusing initially, since unlike blocks and threads, warps don't show up anywhere in the CUDA code explicitly. They are a hardware feature that has no direct analog in the scalar CUDA-software world. We can calculate a given thread's warpId as warpId = threadIdx.x / warpSize, where warpSize is a built-in variable that is equal to 32 on any CUDA GPU I've ever worked with.

Warps are relevant for performance since (among other reasons) they are the unit that gets scheduled onto the warp schedulers, and shared-memory bank conflicts happen only between threads in the same warp. Warptiling is elegant since we now make explicit all levels of parallelism: blocktiling (parallelism across different SMs), warptiling (parallelism across the warp schedulers of an SM) and threadtiling (instruction-level parallelism within a single thread).

The warptiling looks like this in the CUDA code (Godbolt link):

```cpp
// dotIdx loops over contents of SMEM
for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
  // populate registers for this thread's part of the warptile
  for (uint wSubRowIdx = 0; wSubRowIdx < WMITER; ++wSubRowIdx) {
    for (uint i = 0; i < TM; ++i) {
      regM[wSubRowIdx * TM + i] =
          As[(dotIdx * BM) + warpRow * WM + wSubRowIdx * WSUBM +
             threadRowInWarp * TM + i];
    }
  }
  for (uint wSubColIdx = 0; wSubColIdx < WNITER; ++wSubColIdx) {
    for (uint i = 0; i < TN; ++i) {
      regN[wSubColIdx * TN + i] =
          Bs[(dotIdx * BN) + warpCol * WN + wSubColIdx * WSUBN +
             threadColInWarp * TN + i];
    }
  }

  // execute warptile matmul. Later this will map well to
  // warp-wide matrix instructions, executed on tensor cores.
  for (uint wSubRowIdx = 0; wSubRowIdx < WMITER; ++wSubRowIdx) {
    for (uint wSubColIdx = 0; wSubColIdx < WNITER; ++wSubColIdx) {
      // calculate per-thread results with register-cache locality
      for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {
        for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {
          threadResults[(wSubRowIdx * TM + resIdxM) * (WNITER * TN) +
                        (wSubColIdx * TN) + resIdxN] +=
              regM[wSubRowIdx * TM + resIdxM] * regN[wSubColIdx * TN + resIdxN];
        }
      }
    }
  }
}
```

I tried my best to visualize all three levels of tiling below, although the structure is getting quite complex. (The CUTLASS docs about efficient GEMMs go even more in-depth into warptiling, and their visualizations are illuminating.) Each warp will compute a chunk of size (WSUBN * WNITER) x (WSUBM * WMITER). Each thread computes WNITER * WMITER many chunks of size TM*TN.
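For orientation, here's roughly the index setup that this listing assumes. It's a sketch; the names follow the listing, and the exact formulas in the repo may differ slightly:

```cpp
// placement of the warp within the block tile (WARPSIZE = 32)
const uint warpIdx = threadIdx.x / WARPSIZE;
const uint warpCol = warpIdx % (BN / WN);
const uint warpRow = warpIdx / (BN / WN);

// each WM x WN warp tile is processed in WMITER x WNITER iterations
// over sub-tiles of size WSUBM x WSUBN
constexpr uint WMITER = (WM * WN) / (WARPSIZE * TM * TN * WNITER);
constexpr uint WSUBM = WM / WMITER;
constexpr uint WSUBN = WN / WNITER;

// placement of the thread inside its warp's sub-tile
const uint threadIdxInWarp = threadIdx.x % WARPSIZE;
const uint threadColInWarp = threadIdxInWarp % (WSUBN / TN);
const uint threadRowInWarp = threadIdxInWarp / (WSUBN / TN);
```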
After autotuning the parameters, performance improves from 19.7 TFLOPs to 21.7 TFLOPs on an A100. Here's a plot that compares our warptiling kernel against cuBLAS across increasing matrix sizes (I generated this plot on an A100, which is why the absolute FLOPs numbers are different): at dimensions 2048 and 4096, our measured FLOPs are only a few percentage points slower than cuBLAS.

However, for smaller matrices, we're doing poorly in comparison to Nvidia's library! This happens because cuBLAS contains not one single implementation of SGEMM, but hundreds of them; I guess there's a reason why the library is 500MB of compiled code. (To print all the kernels: `cuobjdump --list-text <cublas location>`.) At runtime, based on the dimensions, cuBLAS will pick which kernel to run. (I launched matmuls for square matrices of all dimensions up to 4096 and found 16 different SGEMM kernels. Here's a script for finding the kernel that was launched by cuBLAS, h/t Horace He.) I traced the cuBLAS call (using the Nsight Systems CLI) and looked at which kernels it calls at each size. At dimension 256 it calls two kernels: a matmul kernel followed by a reduction kernel. (Split-K refers to partitioning the K-dimension across multiple threadblocks. This means that each block will only compute part of the chunk of C, and cuBLAS follows up with a reduce kernel to accumulate the final result. This requires some extra memory space to store the intermediate results before the reduction. I imagine this looks like so, but I'm uncertain here.) So if we were trying to write a high-performance library that works for all shapes and sizes, we would have specializations for different shapes, and at runtime dispatch to the one that's the best fit.

I also want to report a negative result: for this kernel, I additionally implemented an optimization called thread swizzling. This technique assumes that threadblocks are launched in order of increasing blockIdx, and optimizes the mapping of blockIdx to C chunks in a way that should increase L2 locality. (Remember that L2 is a cache for global memory that exists once for the whole GPU. This Nvidia post has more info and visualizations.) It didn't increase performance, presumably because the L2 hit rate is already fairly high at 80%, so I ended up removing the swizzling code. (The commit is here if anyone is interested.)

## Work in Progress: Kernel 11

If I get back to working on this post, here's what I'll look at next: double buffering, to better interleave computation and memory loading. It makes sense to move the loop over BK towards the outside, since it follows our maxim of "load some data, then do as much work on that data as possible". It further means that all computation that happens inside the BK loop will be independent and can be parallelized (for example using ILP). We can then also start prefetching the data necessary for the next loop iteration, a technique called double buffering.

## Conclusion

Writing this post was a similar experience to my previous post on optimizing SGEMM on CPU: optimizing SGEMM iteratively is one of the best ways to deeply understand the performance characteristics of the hardware. For writing the CUDA programs, I was surprised by how easy it was to implement the code once I had made a good visualization of how I wanted the kernel to work.

Also: power laws are everywhere. It took me two weekends to write the first 6 kernels, which reach 80% of peak FLOPs, and then 4 more weekends to do autotuning and warptiling to get to 94%. How much I'm learning while writing this code has also seen diminishing returns, hence I'm putting off hunting the last 6% until some future time.

All my code is available on GitHub.

Lastly, a big thanks to the creators of Godbolt.org (for looking at PTX and SASS assembly) and Excalidraw (for drawing the kernels)! Both of these tools are a joy to use and have helped me learn much faster.
If you enjoy kernel work like this, you're likely a good fit for the Performance team at Anthropic. Come work with me! The team is headed by Tristan Hume, who is the most capable & thoughtful manager I've ever had. We optimize Anthropic's models for GPUs, TPUs and AWS Trainium. Feel free to reach out!

## Further Resources and References
### Fundamental Optimizations in CUDA (Peng Wang, Developer Technology, NVIDIA)

Outline: GPU architecture; kernel optimization (memory optimization, latency / execution-configuration optimization, instruction optimization); CPU-GPU interaction optimization (overlapped execution using streams).

**GPU high-level view: the Fermi multiprocessor**

- 2 warp schedulers, in-order dual-issue, up to 1536 concurrent threads
- 32 CUDA cores: full IEEE 754-2008 FP32 and FP64; 32 FP32 ops/clock, 16 FP64 ops/clock
- Configurable 16/48 KB shared memory and 16/48 KB L1 cache (64 KB unified), 4 SFUs, 16 load/store units, 32K 32-bit registers

**Warp and SIMT**

- Blocks divide into groups of 32 threads called warps
- Warps are the basic scheduling units; context switching is free
- A lot of warps can hide memory latency
- Warps always perform the same instruction (SIMT), but each thread CAN execute its own code path

**Fermi memory hierarchy**

- Registers (spill to local memory)
- Caches: shared memory, L1 cache, L2 cache, constant cache, texture cache
- Global memory

**General optimization strategy: measurement**

- Find out the limiting factor in kernel performance: memory-bandwidth bound (memory optimization), instruction-throughput bound (instruction optimization), or latency bound (configuration optimization)
- Measure effective memory/instruction throughput and optimize toward the peak; finding the bottleneck is typically an iterative process

**Memory optimization**

- Applies if the code is memory-bound and effective memory throughput is much lower than the peak
- Purpose: access only data that are absolutely necessary
- Major techniques: improve the access pattern to reduce wasted transactions (coalescing); reduce redundant access (shared memory)

**Coalescing**

- Global memory latency: 400-800 cycles; the single most important performance consideration!
- Coalescing: global memory accesses from a warp can be coalesced into a single transaction
- Criterion: requests from a warp falling in one L1 cache line become one transaction; # transactions = # L1 lines accessed

**Caching or non-caching loads?**

- On Fermi, by default all global memory accesses are cached in L1; L1 can be bypassed by passing `-Xptxas -dlcm=cg` to nvcc (cache only in L2)
- If non-cached: same coalescing criterion, but the transaction size can be reduced to 32B segments
- Caching loads help on some non-coalesced access (e.g. misaligned), but may lead to lower performance for some uncoalesced access due to more wasted bandwidth
- Non-caching loads reduce wasted bandwidth and leave more space for register spilling

**Coalescing examples (bus utilization per warp)**

- Warp requests 32 aligned, consecutive 4-byte words: addresses fall within 1 cache line; the warp needs 128 bytes, and 128 bytes move across the bus on a miss; bus utilization 100%
- Warp requests 32 aligned, permuted 4-byte words: still within 1 cache line; bus utilization 100%
- Warp requests 32 misaligned, consecutive 4-byte words: addresses fall within 2 cache lines; 256 bytes move across the bus for 128 bytes needed; bus utilization 50%. With non-caching loads the addresses fall within at most 5 segments, at most 160 bytes move, so utilization is at least 80% (some misaligned patterns fall within 4 segments for 100%)
- All threads in a warp request the same 4-byte word: one cache line; 128 bytes move for 4 bytes needed; bus utilization 3.125% (non-caching load: one 32B segment, 12.5%)
- Warp requests 32 scattered 4-byte words: addresses fall within N cache lines; bus utilization 128 / (N*128). With non-caching loads: N segments, 128 / (N*32)

**Shared memory**

- Low latency (a few cycles) and high throughput: 73.6 GB/s per SM (1.03 TB/s per GPU)
- Main uses: inter-thread communication within a block; a user-managed cache to reduce redundant global memory accesses; avoiding non-coalesced access

**Example: matrix multiplication (C = A x B)**

Every thread computes one entry in C. Naive kernel:

```cpp
__global__ void simpleMultiply(float *a, float *b, float *c, int N) {
  int row = threadIdx.x + blockIdx.x * blockDim.x;
  int col = threadIdx.y + blockIdx.y * blockDim.y;
  float sum = 0.0f;
  for (int i = 0; i < N; i++) {
    sum += a[row * N + i] * b[i * N + col];
  }
  c[row * N + col] = sum;
}
```
**Blocked matrix multiplication (data reuse via shared memory)**

Blocked and cached kernel (assumes square N x N matrices with N divisible by TILE_DIM):

```cpp
#define TILE_DIM 16

// Each block computes a TILE_DIM x TILE_DIM tile of C, caching the corresponding
// tiles of A and B in shared memory.
__global__ void coalescedMultiply(float *a, float *b, float *c, int N) {
  __shared__ float aTile[TILE_DIM][TILE_DIM];
  __shared__ float bTile[TILE_DIM][TILE_DIM];
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  float sum = 0.0f;
  for (int k = 0; k < N; k += TILE_DIM) {
    // each thread loads one element of the current A tile and B tile (coalesced)
    aTile[threadIdx.y][threadIdx.x] = a[row * N + k + threadIdx.x];
    bTile[threadIdx.y][threadIdx.x] = b[(k + threadIdx.y) * N + col];
    __syncthreads();
    for (int i = 0; i < TILE_DIM; i++)
      sum += aTile[threadIdx.y][i] * bTile[i][threadIdx.x];
    __syncthreads();
  }
  c[row * N + col] = sum;
}
```

Performance results are reported for M=N=K=512.

**Bank conflicts**

- Shared memory is divided into banks: successive 32-bit words are assigned to successive banks; number of banks = 32 (Fermi)
- Bank conflict: if two reads/writes fall in the same bank, the access is serialized
- Special cases: if all threads in a warp access the same word, it is one broadcast (Fermi can also do multi-broadcast); reading contiguous bytes/doubles causes no conflict on Fermi

**Optimizing bank conflicts**

- Measure whether it matters: change SMEM reads to the same value to see the impact
- Avoid conflicts by changing address patterns or by padding, e.g. use `array[N_BANK][N_BANK+1]`

**Memory optimization summary**

- Strive for perfect coalescing: transpose the data structure (e.g. AoS to SoA), pad, or change the parallelization scheme (1-thread-per-task to 1-warp-per-task?)
- Use shared memory to reduce global memory accesses and avoid non-coalesced access
- Bind to the texture cache for unpredictable uncoalesced access; use the constant cache if all threads in a warp access the same constant data

**Global memory throughput metric**

- Measure effective memory throughput from the app's point of view ("useful" bytes): bytes needed by the algorithm divided by kernel time, compared to the theoretical bandwidth; 70-80% is very good
- Finding the bottleneck: start with the global memory operations and achieve good throughput, then add arithmetic, shared memory, etc., measuring performance as you go

**Latency optimization**

- Applies when the code is latency bound: both memory and instruction throughput are far from peak
- Latency hiding: instructions are issued in order; a thread blocks when one of its operands isn't ready; latency is hidden by switching threads (GMEM: 400-800 cycles, arithmetic: 18-22 cycles)
- Purpose: have enough concurrency to hide latency; main technique: adjust resource usage to increase active warps (TLP)

**Grid/block size heuristics**

- Number of blocks >> number of SMs (>100 to scale well to future devices)
- Block size should be a multiple of 32 (warp size); minimum 64, typically 128 or 256, but use whatever is best for your app; it depends on the problem, so do experiments!
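To make the padding advice from the bank-conflict slides concrete, here's a small illustrative kernel (my own example, not from the deck): a 32x32 transpose tile padded by one column so that column-wise reads hit 32 different banks.

```cpp
#define TILE 32

// Transpose an N x N matrix tile-by-tile (assumes N divisible by 32, blockDim = (32, 32)).
__global__ void transposeTile(const float *in, float *out, int N) {
  __shared__ float tile[TILE][TILE + 1]; // +1 column of padding avoids 32-way bank conflicts
  int x = blockIdx.x * TILE + threadIdx.x;
  int y = blockIdx.y * TILE + threadIdx.y;
  tile[threadIdx.y][threadIdx.x] = in[y * N + x]; // coalesced GMEM read
  __syncthreads();
  x = blockIdx.y * TILE + threadIdx.x; // swap block indices for the transposed tile
  y = blockIdx.x * TILE + threadIdx.y;
  out[y * N + x] = tile[threadIdx.x][threadIdx.y]; // column read of SMEM; padding keeps it conflict-free
}
```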
**Occupancy**

- Occupancy: ratio of active warps per SM to the maximum number of allowed warps (maximum 48 on Fermi)
- Occupancy needs to be high enough to hide latency, and it is limited by resource usage

**Dynamic partitioning of SM resources**

- Shared memory is partitioned among blocks; registers are partitioned among threads (<= 63 per thread)
- Thread block slots: <= 8; thread slots: <= 1536
- Any of these can be the limiting factor on how many threads can be launched at the same time on an SM; if adding a single instruction leads to a significant perf drop, occupancy is the primary suspect

**Latency hiding: occupancy calculation**

- Assume global memory takes 400 cycles; we need 400/2 = 200 arithmetic instructions to hide the latency
- Assume the code has 8 independent arithmetic instructions for every global memory access; then 200/8 ≈ 26 warps would be enough (54% occupancy)
- Lessons: required occupancy depends on BOTH architecture and application; in this example, beyond 54%, higher occupancy won't lead to further performance increase

**Occupancy optimizations**

- Know the current occupancy: Visual Profiler, or --ptxas-options=-v to output resource usage info (input to the Occupancy Calculator)
- Adjust resource usage to increase occupancy: change the block size; limit register usage (compiler option -maxrregcount=n per file, __launch_bounds__ per kernel, templates to reduce register usage); dynamically allocate shared memory
- Occupancy Calculator: http://developer.download.nvidia.com/compute/cuda/CUDA_Occupancy_calculator.xls

**Increase ILP of each thread**

- A load by itself doesn't stall execution. Example: incrementing a 64M-element array takes two accesses per thread (load then store, and they are dependent), so each warp (32 threads) has one outstanding transaction at a time
- Several independent smaller accesses have the same effect as one larger one; for example, four 32-bit loads ≈ one 128-bit load

**Instruction optimization**

- Applies if the code is instruction bound; a compute-intensive algorithm can easily become memory-bound if not careful enough, so typically worry about instruction optimization after memory and execution-configuration optimizations
- Purpose: reduce the instruction count, i.e. use fewer instructions to get the same job done
- Major techniques: use high-throughput instructions; reduce wasted instructions (branch divergence, bank conflicts, etc.)

**Fermi arithmetic instruction throughputs**

- Int & fp32: 2 cycles; fp64: 2 cycles; fp32 transcendentals: 8 cycles
- Integer divide and modulo are expensive: to divide by 2^n, use `>> n`; for modulo 2^n, use `& (2^n - 1)`

**Reduce instruction count**

- Avoid automatic conversion of double to float: add "f" to floating literals (e.g. 1.0f), because the default is double
- Fermi defaults to -ftz=false, -prec-div=true, -prec-sqrt=true for IEEE compliance
- Fast math functions: two types of runtime math library functions, func() (slower but higher accuracy, 5 ulp or less) and __func() (fast but lower accuracy; see the programming guide for full details); -use_fast_math forces every func() to __func()
**Control flow**

- Divergent branches: threads within a single warp take different paths, e.g. `if (threadIdx.x > 2) {...} else {...}` (branch granularity < warp size)
- Divergence inside a warp is processed by turning off the inactive threads; the different if-else branches are both executed, i.e. serialized
- Different warps can execute different code with no impact on performance
- Avoid diverging within a warp, e.g. `if (threadIdx.x / WARP_SIZE > 2) {...} else {...}` (branch granularity is a whole multiple of warp size)

**Kernel optimization workflow**

- Find the limiter: if achieved GB/s is far below peak, the kernel is memory bound (memory optimization); if achieved Ginst/s is far below peak, it is instruction bound (instruction optimization); if both are far from peak, it is latency bound (configuration optimization); otherwise, done!

**Minimizing CPU-GPU data transfer**

- Host<->device data transfer has much lower bandwidth than global memory access: 8 GB/s (PCIe x16 Gen2) vs. 156 GB/s & 515 Ginst/s (C2050)
- Minimize transfer: keep intermediate data directly on the GPU; recompute; move CPU code to the GPU even if it has no performance gain by itself, if it reduces data transfer
- Group transfers: one large transfer is much better than many small ones (10 microseconds latency, 8 GB/s ⇒ latency dominated if the data size is < 80 KB)

**Streams and async API**

- Default API: kernel launches are asynchronous with the CPU; memcopies (D2H, H2D) block the CPU thread; CUDA calls are serialized by the driver
- Streams and async functions provide: memcopies asynchronous with the CPU, the ability to concurrently execute a kernel and a memcopy, and concurrent kernels on Fermi
- A stream is a sequence of operations that execute in issue order on the GPU; operations from different streams can be interleaved, and a kernel and a memcopy from different streams can be overlapped

**Pinned (non-pageable) memory**

- Pinned memory enables memcopies that are asynchronous with both CPU & GPU
- Usage: cudaHostAlloc / cudaFreeHost instead of malloc / free; additional flags if the pinned region is to be shared between lightweight CPU threads
- Note: pinned memory is essentially removed from virtual memory, and cudaHostAlloc is typically very expensive

**Overlap kernel and memory copy**

Requirements: a D2H or H2D memcopy from pinned memory, a device with compute capability >= 1.1 (G84 and later), and the kernel and memcopy in different, non-0 streams.

```cpp
cudaStream_t stream1, stream2;
cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);

cudaMemcpyAsync(dst, src, size, dir, stream1);
kernel<<<grid, block, 0, stream2>>>(...); // potentially overlapped with the copy
```

**Summary**

- Optimization needs an understanding of GPU architecture
- Memory optimization: coalescing, shared memory
- Execution configuration: latency hiding
- Instruction throughput: use high-throughput instructions, reduce wasted cycles
- Do measurements! Use the profiler and simple code modifications, and compare to theoretical peaks
CUDA C++ Best Practices Guide The programming guide to using the CUDA Toolkit to obtain the best performance from NVIDIA GPUs. 1. Preface This Best Practices Guide is a manual to help developers obtain the best performance from NVIDIA® CUDA® GPUs. It presents established parallelization and optimization techniques and explains coding metaphors and idioms that can greatly simplify programming for CUDA-capable GPU architectures. While the contents can be used as a reference manual, you should be aware that some topics are revisited in different contexts as various programming and configuration topics are explored. As a result, it is recommended that first-time readers proceed through the guide sequentially. This approach will greatly improve your understanding of effective programming practices and enable you to better use the guide for reference later. 1.1. Who Should Read This Guide? The discussions in this guide all use the C++ programming language, so you should be comfortable reading C++ code. This guide refers to and relies on several other documents that you should have at your disposal for reference, all of which are available at no cost from the CUDA website https://docs.nvidia.com/cuda/. The following documents are especially important resources: CUDA Installation Guide CUDA C++ Programming Guide CUDA Toolkit Reference Manual In particular, the optimization section of this guide assumes that you have already successfully downloaded and installed the CUDA Toolkit (if not, please refer to the relevant CUDA Installation Guide for your platform) and that you have a basic familiarity with the CUDA C++ programming language and environment (if not, please refer to the CUDA C++ Programming Guide). 1.2. Assess, Parallelize, Optimize, Deploy This guide introduces the Assess, Parallelize, Optimize, Deploy(APOD) design cycle for applications with the goal of helping application developers to rapidly identify the portions of their code that would most readily benefit from GPU acceleration, rapidly realize that benefit, and begin leveraging the resulting speedups in production as early as possible. APOD is a cyclical process: initial speedups can be achieved, tested, and deployed with only minimal initial investment of time, at which point the cycle can begin again by identifying further optimization opportunities, seeing additional speedups, and then deploying the even faster versions of the application into production. 1.2.1. Assess For an existing project, the first step is to assess the application to locate the parts of the code that are responsible for the bulk of the execution time. Armed with this knowledge, the developer can evaluate these bottlenecks for parallelization and start to investigate GPU acceleration. By understanding the end-user’s requirements and constraints and by applying Amdahl’s and Gustafson’s laws, the developer can determine the upper bound of performance improvement from acceleration of the identified portions of the application. 1.2.2. Parallelize Having identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler. On the other hand, some applications’ designs will require some amount of refactoring to expose their inherent parallelism. 
As even CPU architectures will require exposing parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput. 1.2.3. Optimize After each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible. However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned. Optimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer’s optimization efforts and provide references into the relevant portions of the optimization section of this guide. 1.2.4. Deploy Having completed the GPU acceleration of one or more components of the application it is possible to compare the outcome with the original expectation. Recall that the initial assess step allowed the developer to determine an upper bound for the potential speedup attainable by accelerating given hotspots. Before tackling other hotspots to improve the total speedup, the developer should consider taking the partially parallelized implementation and carry it through to production. This is important for a number of reasons; for example, it allows the user to profit from their investment as early as possible (the speedup may be partial but is still valuable), and it minimizes risk for the developer and the user by providing an evolutionary rather than revolutionary set of changes to the application. 1.3. Recommendations and Best Practices Throughout this guide, specific recommendations are made regarding the design and implementation of CUDA C++ code. These recommendations are categorized by priority, which is a blend of the effect of the recommendation and its scope. Actions that present substantial improvements for most CUDA applications have the highest priority, while small optimizations that affect only very specific situations are given a lower priority. Before implementing lower priority recommendations, it is good practice to make sure all higher priority recommendations that are relevant have already been applied. This approach will tend to provide the best results for the time invested and will avoid the trap of premature optimization. The criteria of benefit and scope for establishing priority will vary depending on the nature of the program. In this guide, they represent a typical case. Your code might reflect different priority factors. Regardless of this possibility, it is good practice to verify that no higher-priority recommendations have been overlooked before undertaking lower-priority items. 
Note Code samples throughout the guide omit error checking for conciseness. Production code should, however, systematically check the error code returned by each API call and check for failures in kernel launches by calling cudaGetLastError(). 1.4. Assessing Your Application From supercomputers to mobile phones, modern processors increasingly rely on parallelism to provide performance. The core computational unit, which includes control, arithmetic, registers and typically some cache, is replicated some number of times and connected to memory via a network. As a result, all modern processors require parallel code in order to achieve good utilization of their computational power. While processors are evolving to expose more fine-grained parallelism to the programmer, many existing applications have evolved either as serial codes or as coarse-grained parallel codes (for example, where the data is decomposed into regions processed in parallel, with sub-regions shared using MPI). In order to profit from any modern processor architecture, GPUs included, the first steps are to assess the application to identify the hotspots, determine whether they can be parallelized, and understand the relevant workloads both now and in the future. 2. Heterogeneous Computing CUDA programming involves running code on two different platforms concurrently: a host system with one or more CPUs and one or more CUDA-enabled NVIDIA GPU devices. While NVIDIA GPUs are frequently associated with graphics, they are also powerful arithmetic engines capable of running thousands of lightweight threads in parallel. This capability makes them well suited to computations that can leverage parallel execution. However, the device is based on a distinctly different design from the host system, and it’s important to understand those differences and how they determine the performance of CUDA applications in order to use CUDA effectively. 2.1. Differences between Host and Device The primary differences are in threading model and in separate physical memories: Execution pipelines on host systems can support a limited number of concurrent threads. For example, servers that have two 32 core processors can run only 64 threads concurrently (or small multiple of that if the CPUs support simultaneous multithreading). By comparison, the smallest executable unit of parallelism on a CUDA device comprises 32 threads (termed a warp of threads). Modern NVIDIA GPUs can support up to 2048 active threads concurrently per multiprocessor (see Features and Specifications of the CUDA C++ Programming Guide) On GPUs with 80 multiprocessors, this leads to more than 160,000 concurrently active threads. Threads on a CPU are generally heavyweight entities. The operating system must swap threads on and off CPU execution channels to provide multithreading capability. Context switches (when two threads are swapped) are therefore slow and expensive. By comparison, threads on GPUs are extremely lightweight. In a typical system, thousands of threads are queued up for work (in warps of 32 threads each). If the GPU must wait on one warp of threads, it simply begins executing work on another. Because separate registers are allocated to all active threads, no swapping of registers or other state need occur when switching among GPU threads. Resources stay allocated to each thread until it completes its execution. 
In short, CPU cores are designed to minimize latency for a small number of threads at a time each, whereas GPUs are designed to handle a large number of concurrent, lightweight threads in order to maximize throughput. The host system and the device each have their own distinct attached physical memories. As the host and device memories are separated, items in the host memory must occasionally be communicated between device memory and host memory as described in What Runs on a CUDA-Enabled Device?. These are the primary hardware differences between CPU hosts and GPU devices with respect to parallel programming. Other differences are discussed as they arise elsewhere in this document. Applications composed with these differences in mind can treat the host and device together as a cohesive heterogeneous system wherein each processing unit is leveraged to do the kind of work it does best: sequential work on the host and parallel work on the device. 2.2. What Runs on a CUDA-Enabled Device? The following issues should be considered when determining what parts of an application to run on the device: The device is ideally suited for computations that can be run on numerous data elements simultaneously in parallel. This typically involves arithmetic on large data sets (such as matrices) where the same operation can be performed across thousands, if not millions, of elements at the same time. This is a requirement for good performance on CUDA: the software must use a large number (generally thousands or tens of thousands) of concurrent threads. The support for running numerous threads in parallel derives from CUDA’s use of a lightweight threading model described above. To use CUDA, data values must be transferred from the host to the device. These transfers are costly in terms of performance and should be minimized. (See Data Transfer Between Host and Device.) This cost has several ramifications: The complexity of operations should justify the cost of moving data to and from the device. Code that transfers data for brief use by a small number of threads will see little or no performance benefit. The ideal scenario is one in which many threads perform a substantial amount of work. For example, transferring two matrices to the device to perform a matrix addition and then transferring the results back to the host will not realize much performance benefit. The issue here is the number of operations performed per data element transferred. For the preceding procedure, assuming matrices of size NxN, there are N² operations (additions) and 3N² elements transferred, so the ratio of operations to elements transferred is 1:3 or O(1). Performance benefits can be more readily achieved when this ratio is higher. For example, a matrix multiplication of the same matrices requires N³ operations (multiply-add), so the ratio of operations to elements transferred is O(N), in which case the larger the matrix the greater the performance benefit. The types of operations are an additional factor, as additions have different complexity profiles than, for example, trigonometric functions. It is important to include the overhead of transferring data to and from the device in determining whether operations should be performed on the host or on the device. Data should be kept on the device as long as possible. 
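As a minimal sketch of this advice (the step1/step2 kernels are hypothetical stand-ins for real work), the intermediate buffer below is allocated, written, and consumed entirely in device memory, with no round trip through the host:

// Hypothetical two-stage pipeline; only the data-movement pattern matters here.
__global__ void step1(const float *in, float *tmp, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tmp[i] = in[i] + 1.0f;
}

__global__ void step2(const float *tmp, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * tmp[i];
}

void pipeline(const float *h_in, float *h_out, int n)
{
    float *d_in, *d_tmp, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_tmp, n * sizeof(float));  // intermediate lives only in device memory
    cudaMalloc(&d_out, n * sizeof(float));

    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
    step1<<<(n + 255) / 256, 256>>>(d_in, d_tmp, n);
    step2<<<(n + 255) / 256, 256>>>(d_tmp, d_out, n);  // consumes d_tmp directly on the device
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_in); cudaFree(d_tmp); cudaFree(d_out);
}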
Because transfers should be minimized, programs that run multiple kernels on the same data should favor leaving the data on the device between kernel calls, rather than transferring intermediate results to the host and then sending them back to the device for subsequent calculations. So, in the previous example, had the two matrices to be added already been on the device as a result of some previous calculation, or if the results of the addition would be used in some subsequent calculation, the matrix addition should be performed locally on the device. This approach should be used even if one of the steps in a sequence of calculations could be performed faster on the host. Even a relatively slow kernel may be advantageous if it avoids one or more transfers between host and device memory. Data Transfer Between Host and Device provides further details, including the measurements of bandwidth between the host and the device versus within the device proper. For best performance, there should be some coherence in memory access by adjacent threads running on the device. Certain memory access patterns enable the hardware to coalesce groups of reads or writes of multiple data items into one operation. Data that cannot be laid out so as to enable coalescing, or that doesn’t have enough locality to use the L1 or texture caches effectively, will tend to see lesser speedups when used in computations on GPUs. A noteworthy exception to this are completely random memory access patterns. In general, they should be avoided, because compared to peak capabilities any architecture processes these memory access patterns at a low efficiency. However, compared to cache based architectures, like CPUs, latency hiding architectures, like GPUs, tend to cope better with completely random memory access patterns. On Systems on a Chip with integrated GPUs, such as NVIDIA® Tegra®, host and device memory are physically the same, but there is still a logical distinction between host and device memory. See the Application Note on CUDA for Tegra for details. 3. Application Profiling 3.1. Profile Many codes accomplish a significant portion of the work with a relatively small amount of code. Using a profiler, the developer can identify such hotspots and start to compile a list of candidates for parallelization. 3.1.1. Creating the Profile There are many possible approaches to profiling the code, but in all cases the objective is the same: to identify the function or functions in which the application is spending most of its execution time. Note High Priority: To maximize developer productivity, profile the application to determine hotspots and bottlenecks. The most important consideration with any profiling activity is to ensure that the workload is realistic - i.e., that information gained from the test and decisions based upon that information are relevant to real data. Using unrealistic workloads can lead to sub-optimal results and wasted effort both by causing developers to optimize for unrealistic problem sizes and by causing developers to concentrate on the wrong functions. There are a number of tools that can be used to generate the profile. The following example is based on gprof, which is an open-source profiler for Linux platforms from the GNU Binutils collection. $ gcc -O2 -g -pg myprog.c $ gprof ./a.out > profile.txt Each sample counts as 0.01 seconds. 
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
33.34      0.02      0.02     7208     0.00     0.00  genTimeStep
16.67      0.03      0.01      240     0.04     0.12  calcStats
16.67      0.04      0.01        8     1.25     1.25  calcSummaryData
16.67      0.05      0.01        7     1.43     1.43  write
16.67      0.06      0.01                             mcount
 0.00      0.06      0.00      236     0.00     0.00  tzset
 0.00      0.06      0.00      192     0.00     0.00  tolower
 0.00      0.06      0.00       47     0.00     0.00  strlen
 0.00      0.06      0.00       45     0.00     0.00  strchr
 0.00      0.06      0.00        1     0.00    50.00  main
 0.00      0.06      0.00        1     0.00     0.00  memcpy
 0.00      0.06      0.00        1     0.00    10.11  print
 0.00      0.06      0.00        1     0.00     0.00  profil
 0.00      0.06      0.00        1     0.00    50.00  report
3.1.2. Identifying Hotspots In the example above, we can clearly see that the function genTimeStep() takes one-third of the total running time of the application. This should be our first candidate function for parallelization. Understanding Scaling discusses the potential benefit we might expect from such parallelization. It is worth noting that several of the other functions in the above example also take up a significant portion of the overall running time, such as calcStats() and calcSummaryData(). Parallelizing these functions as well should increase our speedup potential. However, since APOD is a cyclical process, we might opt to parallelize these functions in a subsequent APOD pass, thereby limiting the scope of our work in any given pass to a smaller set of incremental changes. 3.1.3. Understanding Scaling The amount of performance benefit an application will realize by running on CUDA depends entirely on the extent to which it can be parallelized. Code that cannot be sufficiently parallelized should run on the host, unless doing so would result in excessive transfers between the host and the device. Note High Priority: To get the maximum benefit from CUDA, focus first on finding ways to parallelize sequential code. By understanding how applications can scale it is possible to set expectations and plan an incremental parallelization strategy. Strong Scaling and Amdahl’s Law describes strong scaling, which allows us to set an upper bound for the speedup with a fixed problem size. Weak Scaling and Gustafson’s Law describes weak scaling, where the speedup is attained by growing the problem size. In many applications, a combination of strong and weak scaling is desirable. 3.1.3.1. Strong Scaling and Amdahl’s Law Strong scaling is a measure of how, for a fixed overall problem size, the time to solution decreases as more processors are added to a system. An application that exhibits linear strong scaling has a speedup equal to the number of processors used. Strong scaling is usually equated with Amdahl’s Law, which specifies the maximum speedup that can be expected by parallelizing portions of a serial program. Essentially, it states that the maximum speedup S of a program is: \(S = \frac{1}{(1 - P) + \frac{P}{N}}\) Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs. The larger N is (that is, the greater the number of processors), the smaller the P/N fraction. It can be simpler to view N as a very large number, which essentially transforms the equation into \(S = 1/(1 - P)\). Now, if 3/4 of the running time of a sequential program is parallelized, the maximum speedup over serial code is 1 / (1 - 3/4) = 4. In reality, most applications do not exhibit perfectly linear strong scaling, even if they do exhibit some degree of strong scaling. 
For most purposes, the key point is that the larger the parallelizable portion P is, the greater the potential speedup. Conversely, if P is a small number (meaning that the application is not substantially parallelizable), increasing the number of processors N does little to improve performance. Therefore, to get the largest speedup for a fixed problem size, it is worthwhile to spend effort on increasing P, maximizing the amount of code that can be parallelized. 3.1.3.2. Weak Scaling and Gustafson’s Law Weak scaling is a measure of how the time to solution changes as more processors are added to a system with a fixed problem size per processor; i.e., where the overall problem size increases as the number of processors is increased. Weak scaling is often equated with Gustafson’s Law, which states that in practice, the problem size scales with the number of processors. Because of this, the maximum speedup S of a program is: \(S = N + (1 - P)(1 - N)\) Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs. Another way of looking at Gustafson’s Law is that it is not the problem size that remains constant as we scale up the system but rather the execution time. Note that Gustafson’s Law assumes that the ratio of serial to parallel execution remains constant, reflecting additional cost in setting up and handling the larger problem. 3.1.3.3. Applying Strong and Weak Scaling Understanding which type of scaling is most applicable to an application is an important part of estimating speedup. For some applications the problem size will remain constant and hence only strong scaling is applicable. An example would be modeling how two molecules interact with each other, where the molecule sizes are fixed. For other applications, the problem size will grow to fill the available processors. Examples include modeling fluids or structures as meshes or grids and some Monte Carlo simulations, where increasing the problem size provides increased accuracy. Having understood the application profile, the developer should understand how the problem size would change if the computational performance changes and then apply either Amdahl’s or Gustafson’s Law to determine an upper bound for the speedup. 4. Parallelizing Your Application Having identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler. On the other hand, some applications’ designs will require some amount of refactoring to expose their inherent parallelism. As even CPU architectures require exposing this parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput. 5. Getting Started There are several key strategies for parallelizing sequential code. 
While the details of how to apply these strategies to a particular application is a complex and problem-specific topic, the general themes listed here apply regardless of whether we are parallelizing code to run on for multicore CPUs or for use on CUDA GPUs. 5.1. Parallel Libraries The most straightforward approach to parallelizing an application is to leverage existing libraries that take advantage of parallel architectures on our behalf. The CUDA Toolkit includes a number of such libraries that have been fine-tuned for NVIDIA CUDA GPUs, such as cuBLAS, cuFFT, and so on. The key here is that libraries are most useful when they match well with the needs of the application. Applications already using other BLAS libraries can often quite easily switch to cuBLAS, for example, whereas applications that do little to no linear algebra will have little use for cuBLAS. The same goes for other CUDA Toolkit libraries: cuFFT has an interface similar to that of FFTW, etc. Also of note is the Thrust library, which is a parallel C++ template library similar to the C++ Standard Template Library. Thrust provides a rich collection of data parallel primitives such as scan, sort, and reduce, which can be composed together to implement complex algorithms with concise, readable source code. By describing your computation in terms of these high-level abstractions you provide Thrust with the freedom to select the most efficient implementation automatically. As a result, Thrust can be utilized in rapid prototyping of CUDA applications, where programmer productivity matters most, as well as in production, where robustness and absolute performance are crucial. 5.2. Parallelizing Compilers Another common approach to parallelization of sequential codes is to make use of parallelizing compilers. Often this means the use of directives-based approaches, where the programmer uses a pragma or other similar notation to provide hints to the compiler about where parallelism can be found without needing to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow the compiler to do the detailed work of mapping the computation onto the parallel architecture. The OpenACC standard provides a set of compiler directives to specify loops and regions of code in standard C, C++ and Fortran that should be offloaded from a host CPU to an attached accelerator such as a CUDA GPU. The details of managing the accelerator device are handled implicitly by an OpenACC-enabled compiler and runtime. See http://www.openacc.org/ for details. 5.3. Coding to Expose Parallelism For applications that need additional functionality or performance beyond what existing parallel libraries or parallelizing compilers can provide, parallel programming languages such as CUDA C++ that integrate seamlessly with existing sequential code are essential. Once we have located a hotspot in our application’s profile assessment and determined that custom code is the best approach, we can use CUDA C++ to expose the parallelism in that portion of our code as a CUDA kernel. We can then launch this kernel onto the GPU and retrieve the results without requiring major rewrites to the rest of our application. This approach is most straightforward when the majority of the total running time of our application is spent in a few relatively isolated portions of the code. 
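A minimal sketch of this pattern, assuming a hypothetical hotspot that scales an array element-wise; the rest of the application only sees an ordinary function call:

#include <vector>

// Hypothetical hotspot: an element-wise update that originally ran in a CPU loop.
__global__ void scaleKernel(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

void scaleOnGpu(std::vector<float> &data, float factor)
{
    int n = static_cast<int>(data.size());
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemcpy(d_data, data.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // One thread per element; the surrounding application is unchanged.
    scaleKernel<<<(n + 255) / 256, 256>>>(d_data, factor, n);

    cudaMemcpy(data.data(), d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);
}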
More difficult to parallelize are applications with a very flat profile - i.e., applications where the time spent is spread out relatively evenly across a wide portion of the code base. For the latter variety of application, some degree of code refactoring to expose the inherent parallelism in the application might be necessary, but keep in mind that this refactoring work will tend to benefit all future architectures, CPU and GPU alike, so it is well worth the effort should it become necessary. 6. Getting the Right Answer Obtaining the right answer is clearly the principal goal of all computation. On parallel systems, it is possible to run into difficulties not typically found in traditional serial-oriented programming. These include threading issues, unexpected values due to the way floating-point values are computed, and challenges arising from differences in the way CPU and GPU processors operate. This chapter examines issues that can affect the correctness of returned data and points to appropriate solutions. 6.1. Verification 6.1.1. Reference Comparison A key aspect of correctness verification for modifications to any existing program is to establish some mechanism whereby previous known-good reference outputs from representative inputs can be compared to new results. After each change is made, ensure that the results match using whatever criteria apply to the particular algorithm. Some will expect bitwise identical results, which is not always possible, especially where floating-point arithmetic is concerned; see Numerical Accuracy and Precision regarding numerical accuracy. For other algorithms, implementations may be considered correct if they match the reference within some small epsilon. Note that the process used for validating numerical results can easily be extended to validate performance results as well. We want to ensure that each change we make is correct and that it improves performance (and by how much). Checking these things frequently as an integral part of our cyclical APOD process will help ensure that we achieve the desired results as rapidly as possible. 6.1.2. Unit Testing A useful counterpart to the reference comparisons described above is to structure the code itself in such a way that is readily verifiable at the unit level. For example, we can write our CUDA kernels as a collection of many short __device__ functions rather than one large monolithic __global__ function; each device function can be tested independently before hooking them all together. For example, many kernels have complex addressing logic for accessing memory in addition to their actual computation. If we validate our addressing logic separately prior to introducing the bulk of the computation, then this will simplify any later debugging efforts. (Note that the CUDA compiler considers any device code that does not contribute to a write to global memory as dead code subject to elimination, so we must at least write something out to global memory as a result of our addressing logic in order to successfully apply this strategy.) Going a step further, if most functions are defined as __host__ __device__ rather than just __device__ functions, then these functions can be tested on both the CPU and the GPU, thereby increasing our confidence that the function is correct and that there will not be any unexpected differences in the results. If there are differences, then those differences will be seen early and can be understood in the context of a simple function. 
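For instance, here is a hedged sketch of the idea (the index helper and kernel are hypothetical): the same __host__ __device__ function is exercised by a plain host-side assertion and reused inside a kernel that writes its result to global memory so the compiler cannot discard the addressing logic as dead code:

#include <cassert>

// Hypothetical addressing helper, written as __host__ __device__ so the same
// logic can be unit-tested on the CPU and reused inside kernels.
__host__ __device__ inline int rowMajorIndex(int row, int col, int numCols)
{
    return row * numCols + col;
}

// Minimal kernel that writes something to global memory so the addressing
// logic survives dead-code elimination.
__global__ void checkIndexing(int *out, int numCols)
{
    out[threadIdx.x] = rowMajorIndex(blockIdx.x, threadIdx.x, numCols);
}

void hostUnitTest()
{
    // The same function runs on the host, so plain assertions cover it.
    assert(rowMajorIndex(0, 0, 8) == 0);
    assert(rowMajorIndex(2, 3, 8) == 19);
}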
As a useful side effect, this strategy will allow us a means to reduce code duplication should we wish to include both CPU and GPU execution paths in our application: if the bulk of the work of our CUDA kernels is done in __host__ __device__ functions, we can easily call those functions from both the host code and the device code without duplication. 6.2. Debugging CUDA-GDB is a port of the GNU Debugger that runs on Linux and Mac; see: https://developer.nvidia.com/cuda-gdb. The NVIDIA Nsight Visual Studio Edition is available as a free plugin for Microsoft Visual Studio; see: https://developer.nvidia.com/nsight-visual-studio-edition. Several third-party debuggers support CUDA debugging as well; see: https://developer.nvidia.com/debugging-solutions for more details. 6.3. Numerical Accuracy and Precision Incorrect or unexpected results arise principally from issues of floating-point accuracy due to the way floating-point values are computed and stored. The following sections explain the principal items of interest. Other peculiarities of floating-point arithmetic are presented in Features and Technical Specifications of the CUDA C++ Programming Guide as well as in a whitepaper and accompanying webinar on floating-point precision and performance available from https://developer.nvidia.com/content/precision-performance-floating-point-and-ieee-754-compliance-nvidia-gpus. 6.3.1. Single vs. Double Precision Devices of CUDA Compute Capability 1.3 and higher provide native support for double-precision floating-point values (that is, values 64 bits wide). Results obtained using double-precision arithmetic will frequently differ from the same operation performed via single-precision arithmetic due to the greater precision of the former and due to rounding issues. Therefore, it is important to be sure to compare values of like precision and to express the results within a certain tolerance rather than expecting them to be exact. 6.3.2. Floating Point Math Is Not Associative Each floating-point arithmetic operation involves a certain amount of rounding. Consequently, the order in which arithmetic operations are performed is important. If A, B, and C are floating-point values, (A+B)+C is not guaranteed to equal A+(B+C) as it is in symbolic math. When you parallelize computations, you potentially change the order of operations and therefore the parallel results might not match sequential results. This limitation is not specific to CUDA, but an inherent part of parallel computation on floating-point values. 6.3.3. IEEE 754 Compliance All CUDA compute devices follow the IEEE 754 standard for binary floating-point representation, with some small exceptions. These exceptions, which are detailed in Features and Technical Specifications of the CUDA C++ Programming Guide, can lead to results that differ from IEEE 754 values computed on the host system. One of the key differences is the fused multiply-add (FMA) instruction, which combines multiply-add operations into a single instruction execution. Its result will often differ slightly from results obtained by doing the two operations separately. 6.3.4. x86 80-bit Computations x86 processors can use an 80-bit double extended precision math when performing floating-point calculations. The results of these calculations can frequently differ from pure 64-bit operations performed on the CUDA device. To get a closer match between values, set the x86 host processor to use regular double or single precision (64 bits and 32 bits, respectively). 
This is done with the FLDCW x86 assembly instruction or the equivalent operating system API. 7. Optimizing CUDA Applications After each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible. However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned. Optimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer’s optimization efforts and provide references into the relevant portions of the optimization section of this guide. 8. Performance Metrics When attempting to optimize CUDA code, it pays to know how to measure performance accurately and to understand the role that bandwidth plays in performance measurement. This chapter discusses how to correctly measure performance using CPU timers and CUDA events. It then explores how bandwidth affects performance metrics and how to mitigate some of the challenges it poses. 8.1. Timing CUDA calls and kernel executions can be timed using either CPU or GPU timers. This section examines the functionality, advantages, and pitfalls of both approaches. 8.1.1. Using CPU Timers Any CPU timer can be used to measure the elapsed time of a CUDA call or kernel execution. The details of various CPU timing approaches are outside the scope of this document, but developers should always be aware of the resolution their timing calls provide. When using CPU timers, it is critical to remember that many CUDA API functions are asynchronous; that is, they return control back to the calling CPU thread prior to completing their work. All kernel launches are asynchronous, as are memory-copy functions with the Async suffix on their names. Therefore, to accurately measure the elapsed time for a particular call or sequence of CUDA calls, it is necessary to synchronize the CPU thread with the GPU by calling cudaDeviceSynchronize() immediately before starting and stopping the CPU timer. cudaDeviceSynchronize()blocks the calling CPU thread until all CUDA calls previously issued by the thread are completed. Although it is also possible to synchronize the CPU thread with a particular stream or event on the GPU, these synchronization functions are not suitable for timing code in streams other than the default stream. cudaStreamSynchronize() blocks the CPU thread until all CUDA calls previously issued into the given stream have completed. cudaEventSynchronize() blocks until a given event in a particular stream has been recorded by the GPU. Because the driver may interleave execution of CUDA calls from other non-default streams, calls in other streams may be included in the timing. 
Because the default stream, stream 0, exhibits serializing behavior for work on the device (an operation in the default stream can begin only after all preceding calls in any stream have completed; and no subsequent operation in any stream can begin until it finishes), these functions can be used reliably for timing in the default stream. Be aware that CPU-to-GPU synchronization points such as those mentioned in this section imply a stall in the GPU’s processing pipeline and should thus be used sparingly to minimize their performance impact. 8.1.2. Using CUDA GPU Timers The CUDA event API provides calls that create and destroy events, record events (including a timestamp), and convert timestamp differences into a floating-point value in milliseconds. How to time code using CUDA events illustrates their use.
How to time code using CUDA events
cudaEvent_t start, stop;
float time;

cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord( start, 0 );
kernel<<<grid,threads>>> ( d_odata, d_idata, size_x, size_y, NUM_REPS);
cudaEventRecord( stop, 0 );
cudaEventSynchronize( stop );

cudaEventElapsedTime( &time, start, stop );
cudaEventDestroy( start );
cudaEventDestroy( stop );
Here cudaEventRecord() is used to place the start and stop events into the default stream, stream 0. The device will record a timestamp for the event when it reaches that event in the stream. The cudaEventElapsedTime() function returns the time elapsed between the recording of the start and stop events. This value is expressed in milliseconds and has a resolution of approximately half a microsecond. Like the other calls in this listing, their specific operation, parameters, and return values are described in the CUDA Toolkit Reference Manual. Note that the timings are measured on the GPU clock, so the timing resolution is operating-system-independent. 8.2. Bandwidth Bandwidth - the rate at which data can be transferred - is one of the most important gating factors for performance. Almost all changes to code should be made in the context of how they affect bandwidth. As described in Memory Optimizations of this guide, bandwidth can be dramatically affected by the choice of memory in which data is stored, how the data is laid out and the order in which it is accessed, as well as other factors. To measure performance accurately, it is useful to calculate theoretical and effective bandwidth. When the latter is much lower than the former, design or implementation details are likely to reduce bandwidth, and it should be the primary goal of subsequent optimization efforts to increase it. Note High Priority: Use the effective bandwidth of your computation as a metric when measuring performance and optimization benefits. 8.2.1. Theoretical Bandwidth Calculation Theoretical bandwidth can be calculated using hardware specifications available in the product literature. For example, the NVIDIA Tesla V100 uses HBM2 (double data rate) RAM with a memory clock rate of 877 MHz and a 4096-bit-wide memory interface. Using these data items, the peak theoretical memory bandwidth of the NVIDIA Tesla V100 is 898 GB/s: \(\left( 0.877 \times 10^{9} \times (4096/8) \times 2 \right) \div 10^{9} = 898\ \text{GB/s}\) In this calculation, the memory clock rate is converted into Hz, multiplied by the interface width (divided by 8, to convert bits to bytes) and multiplied by 2 due to the double data rate. Finally, this product is divided by 10⁹ to convert the result to GB/s. 
Note Some calculations use 1024³ instead of 10⁹ for the final calculation. In such a case, the bandwidth would be 836.4 GiB/s. It is important to use the same divisor when calculating theoretical and effective bandwidth so that the comparison is valid. Note On GPUs with GDDR memory with ECC enabled the available DRAM is reduced by 6.25% to allow for the storage of ECC bits. Fetching ECC bits for each memory transaction also reduced the effective bandwidth by approximately 20% compared to the same GPU with ECC disabled, though the exact impact of ECC on bandwidth can be higher and depends on the memory access pattern. HBM2 memories, on the other hand, provide dedicated ECC resources, allowing overhead-free ECC protection. 8.2.2. Effective Bandwidth Calculation Effective bandwidth is calculated by timing specific program activities and by knowing how data is accessed by the program. To do so, use this equation: \(\text{Effective\ bandwidth} = \left( {\left( B_{r} + B_{w} \right) \div 10^{9}} \right) \div \text{time}\) Here, the effective bandwidth is in units of GB/s, B_r is the number of bytes read per kernel, B_w is the number of bytes written per kernel, and time is given in seconds. For example, to compute the effective bandwidth of a 2048 x 2048 matrix copy, the following formula could be used: \(\text{Effective\ bandwidth} = \left( {\left( 2048^{2} \times 4 \times 2 \right) \div 10^{9}} \right) \div \text{time}\) The number of elements is multiplied by the size of each element (4 bytes for a float), multiplied by 2 (because of the read and write), divided by 10⁹ (or 1024³) to obtain GB of memory transferred. This number is divided by the time in seconds to obtain GB/s. 8.2.3. Throughput Reported by Visual Profiler For devices with compute capability of 2.0 or greater, the Visual Profiler can be used to collect several different memory throughput measures. The following throughput metrics can be displayed in the Details or Detail Graphs view: Requested Global Load Throughput Requested Global Store Throughput Global Load Throughput Global Store Throughput DRAM Read Throughput DRAM Write Throughput The Requested Global Load Throughput and Requested Global Store Throughput values indicate the global memory throughput requested by the kernel and therefore correspond to the effective bandwidth obtained by the calculation shown under Effective Bandwidth Calculation. Because the minimum memory transaction size is larger than most word sizes, the actual memory throughput required for a kernel can include the transfer of data not used by the kernel. For global memory accesses, this actual throughput is reported by the Global Load Throughput and Global Store Throughput values. It’s important to note that both numbers are useful. The actual memory throughput shows how close the code is to the hardware limit, and a comparison of the effective or requested bandwidth to the actual bandwidth presents a good estimate of how much bandwidth is wasted by suboptimal coalescing of memory accesses (see Coalesced Access to Global Memory). For global memory accesses, this comparison of requested memory bandwidth to actual memory bandwidth is reported by the Global Memory Load Efficiency and Global Memory Store Efficiency metrics. As an exception, scattered writes to HBM2 see some overhead from ECC but much less than the overhead with similar access patterns on ECC-protected GDDR5 memory. 9. Memory Optimizations Memory optimizations are the most important area for performance. 
The goal is to maximize the use of the hardware by maximizing bandwidth. Bandwidth is best served by using as much fast memory and as little slow-access memory as possible. This chapter discusses the various kinds of memory on the host and device and how best to set up data items to use the memory effectively. 9.1. Data Transfer Between Host and Device The peak theoretical bandwidth between the device memory and the GPU is much higher (898 GB/s on the NVIDIA Tesla V100, for example) than the peak theoretical bandwidth between host memory and device memory (16 GB/s on the PCIe x16 Gen3). Hence, for best overall application performance, it is important to minimize data transfer between the host and the device, even if that means running kernels on the GPU that do not demonstrate any speedup compared with running them on the host CPU. Note High Priority: Minimize data transfer between the host and the device, even if it means running some kernels on the device that do not show performance gains when compared with running them on the host CPU. Intermediate data structures should be created in device memory, operated on by the device, and destroyed without ever being mapped by the host or copied to host memory. Also, because of the overhead associated with each transfer, batching many small transfers into one larger transfer performs significantly better than making each transfer separately, even if doing so requires packing non-contiguous regions of memory into a contiguous buffer and then unpacking after the transfer. Finally, higher bandwidth between the host and the device is achieved when using page-locked (or pinned) memory, as discussed in the CUDA C++ Programming Guide and the Pinned Memory section of this document. 9.1.1. Pinned Memory Page-locked or pinned memory transfers attain the highest bandwidth between the host and the device. On PCIe x16 Gen3 cards, for example, pinned memory can attain roughly 12 GB/s transfer rates. Pinned memory is allocated using the cudaHostAlloc() functions in the Runtime API. The bandwidthTest CUDA Sample shows how to use these functions as well as how to measure memory transfer performance. For regions of system memory that have already been pre-allocated, cudaHostRegister() can be used to pin the memory on-the-fly without the need to allocate a separate buffer and copy the data into it. Pinned memory should not be overused. Excessive use can reduce overall system performance because pinned memory is a scarce resource, but how much is too much is difficult to know in advance. Furthermore, the pinning of system memory is a heavyweight operation compared to most normal system memory allocations, so as with all optimizations, test the application and the systems it runs on for optimal performance parameters. 9.1.2. Asynchronous and Overlapping Transfers with Computation Data transfers between the host and the device using cudaMemcpy() are blocking transfers; that is, control is returned to the host thread only after the data transfer is complete. The cudaMemcpyAsync() function is a non-blocking variant of cudaMemcpy() in which control is returned immediately to the host thread. In contrast with cudaMemcpy(), the asynchronous transfer version requires pinned host memory (see Pinned Memory), and it contains an additional argument, a stream ID. A stream is simply a sequence of operations that are performed in order on the device. 
Operations in different streams can be interleaved and in some cases overlapped - a property that can be used to hide data transfers between the host and the device. Asynchronous transfers enable overlap of data transfers with computation in two different ways. On all CUDA-enabled devices, it is possible to overlap host computation with asynchronous data transfers and with device computations. For example, Asynchronous and Overlapping Transfers with Computation demonstrates how host computation in the routine cpuFunction() is performed while data is transferred to the device and a kernel using the device is executed.
Overlapping computation and data transfers
cudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, 0);
kernel<<<grid, block>>>(a_d);
cpuFunction();
The last argument to the cudaMemcpyAsync() function is the stream ID, which in this case uses the default stream, stream 0. The kernel also uses the default stream, and it will not begin execution until the memory copy completes; therefore, no explicit synchronization is needed. Because the memory copy and the kernel both return control to the host immediately, the host function cpuFunction() overlaps their execution. In Asynchronous and Overlapping Transfers with Computation, the memory copy and kernel execution occur sequentially. On devices that are capable of concurrent copy and compute, it is possible to overlap kernel execution on the device with data transfers between the host and the device. Whether a device has this capability is indicated by the asyncEngineCount field of the cudaDeviceProp structure (or listed in the output of the deviceQuery CUDA Sample). On devices that have this capability, the overlap once again requires pinned host memory, and, in addition, the data transfer and kernel must use different, non-default streams (streams with non-zero stream IDs). Non-default streams are required for this overlap because memory copy, memory set functions, and kernel calls that use the default stream begin only after all preceding calls on the device (in any stream) have completed, and no operation on the device (in any stream) commences until they are finished. Asynchronous and Overlapping Transfers with Computation illustrates the basic technique.
Concurrent copy and execute
cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
cudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, stream1);
kernel<<<grid, block, 0, stream2>>>(otherData_d);
In this code, two streams are created and used in the data transfer and kernel executions as specified in the last arguments of the cudaMemcpyAsync call and the kernel’s execution configuration. Asynchronous and Overlapping Transfers with Computation demonstrates how to overlap kernel execution with asynchronous data transfer. This technique could be used when the data dependency is such that the data can be broken into chunks and transferred in multiple stages, launching multiple kernels to operate on each chunk as it arrives. Sequential copy and execute and Staged concurrent copy and execute demonstrate this. They produce equivalent results. The first segment shows the reference sequential implementation, which transfers and operates on an array of N floats (where N is assumed to be evenly divisible by nThreads).
Sequential copy and execute
cudaMemcpy(a_d, a_h, N*sizeof(float), dir);
kernel<<<N/nThreads, nThreads>>>(a_d);
Staged concurrent copy and execute shows how the transfer and kernel execution can be broken up into nStreams stages. 
This approach permits some overlapping of the data transfer and execution.
Staged concurrent copy and execute
size = N*sizeof(float)/nStreams;
for (i = 0; i < nStreams; i++) {
    offset = i*N/nStreams;
    cudaMemcpyAsync(a_d+offset, a_h+offset, size, dir, stream[i]);
    kernel<<<N/(nThreads*nStreams), nThreads, 0, stream[i]>>>(a_d+offset);
}
(In Staged concurrent copy and execute, it is assumed that N is evenly divisible by nThreads*nStreams.) Because execution within a stream occurs sequentially, none of the kernels will launch until the data transfers in their respective streams complete. Current GPUs can simultaneously process asynchronous data transfers and execute kernels. GPUs with a single copy engine can perform one asynchronous data transfer and execute kernels whereas GPUs with two copy engines can simultaneously perform one asynchronous data transfer from the host to the device, one asynchronous data transfer from the device to the host, and execute kernels. The number of copy engines on a GPU is given by the asyncEngineCount field of the cudaDeviceProp structure, which is also listed in the output of the deviceQuery CUDA Sample. (It should be mentioned that it is not possible to overlap a blocking transfer with an asynchronous transfer, because the blocking transfer occurs in the default stream, so it will not begin until all previous CUDA calls complete. It will not allow any other CUDA call to begin until it has completed.) A diagram depicting the timeline of execution for the two code segments is shown in Figure 1, and nStreams is equal to 4 for Staged concurrent copy and execute in the bottom half of the figure. Figure 1 Timeline comparison for copy and kernel execution (top: sequential, bottom: staged concurrent). For this example, it is assumed that the data transfer and kernel execution times are comparable. In such cases, and when the execution time (t_E) exceeds the transfer time (t_T), a rough estimate for the overall time is t_E + t_T/nStreams for the staged version versus t_E + t_T for the sequential version. If the transfer time exceeds the execution time, a rough estimate for the overall time is t_T + t_E/nStreams. 9.1.3. Zero Copy Zero copy is a feature that was added in version 2.2 of the CUDA Toolkit. It enables GPU threads to directly access host memory. For this purpose, it requires mapped pinned (non-pageable) memory. On integrated GPUs (i.e., GPUs with the integrated field of the CUDA device properties structure set to 1), mapped pinned memory is always a performance gain because it avoids superfluous copies as integrated GPU and CPU memory are physically the same. On discrete GPUs, mapped pinned memory is advantageous only in certain cases. Because the data is not cached on the GPU, mapped pinned memory should be read or written only once, and the global loads and stores that read and write the memory should be coalesced. Zero copy can be used in place of streams because kernel-originated data transfers automatically overlap kernel execution without the overhead of setting up and determining the optimal number of streams. Note Low Priority: Use zero-copy operations on integrated GPUs for CUDA Toolkit version 2.2 and later. The host code in Zero-copy host code shows how zero copy is typically set up.
Zero-copy host code
float *a_h, *a_map;
...
cudaGetDeviceProperties(&prop, 0);
if (!prop.canMapHostMemory)
    exit(0);
cudaSetDeviceFlags(cudaDeviceMapHost);
cudaHostAlloc(&a_h, nBytes, cudaHostAllocMapped);
cudaHostGetDevicePointer(&a_map, a_h, 0);
kernel<<<gridSize, blockSize>>>(a_map);
In this code, the canMapHostMemory field of the structure returned by cudaGetDeviceProperties() is used to check that the device supports mapping host memory to the device’s address space. Page-locked memory mapping is enabled by calling cudaSetDeviceFlags() with cudaDeviceMapHost. Note that cudaSetDeviceFlags() must be called prior to setting a device or making a CUDA call that requires state (that is, essentially, before a context is created). Page-locked mapped host memory is allocated using cudaHostAlloc(), and the pointer to the mapped device address space is obtained via the function cudaHostGetDevicePointer(). In the code in Zero-copy host code, kernel() can reference the mapped pinned host memory using the pointer a_map in exactly the same way as it would if a_map referred to a location in device memory. Note Mapped pinned host memory allows you to overlap CPU-GPU memory transfers with computation while avoiding the use of CUDA streams. But since any repeated access to such memory areas causes repeated CPU-GPU transfers, consider creating a second area in device memory to manually cache the previously read host memory data. 9.1.4. Unified Virtual Addressing Devices of compute capability 2.0 and later support a special addressing mode called Unified Virtual Addressing (UVA) on 64-bit Linux and Windows. With UVA, the host memory and the device memories of all installed supported devices share a single virtual address space. Prior to UVA, an application had to keep track of which pointers referred to device memory (and for which device) and which referred to host memory as a separate bit of metadata (or as hard-coded information in the program) for each pointer. Using UVA, on the other hand, the physical memory space to which a pointer points can be determined simply by inspecting the value of the pointer using cudaPointerGetAttributes(). Under UVA, pinned host memory allocated with cudaHostAlloc() will have identical host and device pointers, so it is not necessary to call cudaHostGetDevicePointer() for such allocations. Host memory allocations pinned after-the-fact via cudaHostRegister(), however, will continue to have different device pointers than their host pointers, so cudaHostGetDevicePointer() remains necessary in that case. UVA is also a necessary precondition for enabling peer-to-peer (P2P) transfer of data directly across the PCIe bus or NVLink for supported GPUs in supported configurations, bypassing host memory. See the CUDA C++ Programming Guide for further explanations and software requirements for UVA and P2P. 9.2. Device Memory Spaces CUDA devices use several memory spaces, which have different characteristics that reflect their distinct usages in CUDA applications. These memory spaces include global, local, shared, texture, and registers, as shown in Figure 2. Figure 2 Memory spaces on a CUDA device Of these different memory spaces, global memory is the most plentiful; see Features and Technical Specifications of the CUDA C++ Programming Guide for the amounts of memory available in each memory space at each compute capability level. Global, local, and texture memory have the greatest access latency, followed by constant memory, shared memory, and the register file. 
The various principal traits of the memory types are shown in Table 1.

Memory   | Location on/off chip | Cached | Access | Scope                | Lifetime
Register | On                   | n/a    | R/W    | 1 thread             | Thread
Local    | Off                  | Yes††  | R/W    | 1 thread             | Thread
Shared   | On                   | n/a    | R/W    | All threads in block | Block
Global   | Off                  | †      | R/W    | All threads + host   | Host allocation
Constant | Off                  | Yes    | R      | All threads + host   | Host allocation
Texture  | Off                  | Yes    | R      | All threads + host   | Host allocation

† Cached in L1 and L2 by default on devices of compute capability 6.0 and 7.x; cached only in L2 by default on devices of lower compute capabilities, though some allow opt-in to caching in L1 as well via compilation flags.
†† Cached in L1 and L2 by default except on devices of compute capability 5.x; devices of compute capability 5.x cache locals only in L2.

In the case of texture access, if a texture reference is bound to a linear array in global memory, then the device code can write to the underlying array. Texture references that are bound to CUDA arrays can be written to via surface-write operations by binding a surface to the same underlying CUDA array storage. Reading from a texture while writing to its underlying global memory array in the same kernel launch should be avoided because the texture caches are read-only and are not invalidated when the associated global memory is modified. 9.2.1. Coalesced Access to Global Memory A very important performance consideration in programming for CUDA-capable GPU architectures is the coalescing of global memory accesses. Global memory loads and stores by threads of a warp are coalesced by the device into as few as possible transactions. Note High Priority: Ensure global memory accesses are coalesced whenever possible. The access requirements for coalescing depend on the compute capability of the device and are documented in the CUDA C++ Programming Guide. For devices of compute capability 6.0 or higher, the requirements can be summarized quite easily: the concurrent accesses of the threads of a warp will coalesce into a number of transactions equal to the number of 32-byte transactions necessary to service all of the threads of the warp. For certain devices of compute capability 5.2, L1-caching of accesses to global memory can be optionally enabled. If L1-caching is enabled on these devices, the number of required transactions is equal to the number of required 128-byte aligned segments. Note On devices of compute capability 6.0 or higher, L1-caching is the default, however the data access unit is 32-byte regardless of whether global loads are cached in L1 or not. On devices with GDDR memory, accessing memory in a coalesced way is even more important when ECC is turned on. Scattered accesses increase ECC memory transfer overhead, especially when writing data to global memory. Coalescing concepts are illustrated in the following simple examples. These examples assume compute capability 6.0 or higher and that accesses are for 4-byte words, unless otherwise noted. 9.2.1.1. A Simple Access Pattern The first and simplest case of coalescing can be achieved by any CUDA-enabled device of compute capability 6.0 or higher: the k-th thread accesses the k-th word in a 32-byte aligned array. Not all threads need to participate. For example, if the threads of a warp access adjacent 4-byte words (e.g., adjacent float values), four coalesced 32-byte transactions will service that memory access. Such a pattern is shown in Figure 3.
Figure 3 Coalesced access This access pattern results in four 32-byte transactions, indicated by the red rectangles. If from any of the four 32-byte segments only a subset of the words are requested (e.g. if several threads had accessed the same word or if some threads did not participate in the access), the full segment is fetched anyway. Furthermore, if accesses by the threads of the warp had been permuted within or across the four segments, still only four 32-byte transactions would have been performed by a device with compute capability 6.0 or higher. 9.2.1.2. A Sequential but Misaligned Access Pattern If sequential threads in a warp access memory that is sequential but not aligned with a 32-byte segment, five 32-byte segments will be requested, as shown in Figure 4. Figure 4 Misaligned sequential addresses that fall within five 32-byte segments Memory allocated through the CUDA Runtime API, such as via cudaMalloc(), is guaranteed to be aligned to at least 256 bytes. Therefore, choosing sensible thread block sizes, such as multiples of the warp size (i.e., 32 on current GPUs), facilitates memory accesses by warps that are properly aligned. (Consider what would happen to the memory addresses accessed by the second, third, and subsequent thread blocks if the thread block size was not a multiple of warp size, for example.) 9.2.1.3. Effects of Misaligned Accesses It is easy and informative to explore the ramifications of misaligned accesses using a simple copy kernel, such as the one in A copy kernel that illustrates misaligned accesses. A copy kernel that illustrates misaligned accesses __global__ void offsetCopy(float *odata, float* idata, int offset) { int xid = blockIdx.x * blockDim.x + threadIdx.x + offset; odata[xid] = idata[xid]; } In A copy kernel that illustrates misaligned accesses, data is copied from the input array idata to the output array, both of which exist in global memory. The kernel is executed within a loop in host code that varies the parameter offset from 0 to 32 (Figure 4, for example, corresponds to one such misalignment). The effective bandwidth for the copy with various offsets on an NVIDIA Tesla V100 (compute capability 7.0) is shown in Figure 5. Figure 5 Performance of offsetCopy kernel For the NVIDIA Tesla V100, global memory accesses with no offset or with offsets that are multiples of 8 words result in four 32-byte transactions. The achieved bandwidth is approximately 790 GB/s. Otherwise, five 32-byte segments are loaded per warp, and we would expect approximately 4/5th of the memory throughput achieved with no offsets. In this particular example, the offset memory throughput achieved is, however, approximately 9/10th, because adjacent warps reuse the cache lines their neighbors fetched. So while the impact is still evident, it is not as large as we might have expected. It would have been more so if adjacent warps had not exhibited such a high degree of reuse of the over-fetched cache lines. 9.2.1.4. Strided Accesses As seen above, in the case of misaligned sequential accesses, caches help to alleviate the performance impact. It may be different with non-unit-strided accesses, however, and this is a pattern that occurs frequently when dealing with multidimensional data or matrices. For this reason, ensuring that as much as possible of the data in each cache line fetched is actually used is an important part of performance optimization of memory accesses on these devices.
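For reference, a minimal host-side harness of the kind described above, which launches a copy kernel across a range of offsets and reports effective bandwidth, might look like the following sketch. The array size, grid configuration, and event-based timing are illustrative assumptions rather than code from this guide; the same loop structure applies equally to the strideCopy kernel introduced next.

#include <cstdio>
#include <cuda_runtime.h>

__global__ void offsetCopy(float *odata, float *idata, int offset) {
  int xid = blockIdx.x * blockDim.x + threadIdx.x + offset;
  odata[xid] = idata[xid];
}

int main() {
  const int n = 4 * 1024 * 1024;   // elements copied per launch (assumed size)
  const int pad = 64;              // head-room for the largest offset tested
  float *idata, *odata;
  cudaMalloc(&idata, (n + pad) * sizeof(float));
  cudaMalloc(&odata, (n + pad) * sizeof(float));

  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);

  for (int offset = 0; offset <= 32; ++offset) {
    cudaEventRecord(start);
    offsetCopy<<<n / 256, 256>>>(odata, idata, offset);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // One read plus one write of n floats per launch.
    double gbps = 2.0 * n * sizeof(float) / ms / 1.0e6;
    printf("offset %2d: %.1f GB/s\n", offset, gbps);
  }

  cudaEventDestroy(start);
  cudaEventDestroy(stop);
  cudaFree(idata);
  cudaFree(odata);
  return 0;
}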
To illustrate the effect of strided access on effective bandwidth, see the kernel strideCopy() in A kernel to illustrate non-unit stride data copy, which copies data with a stride of stride elements between threads from idata to odata. A kernel to illustrate non-unit stride data copy __global__ void strideCopy(float *odata, float* idata, int stride) { int xid = (blockIdx.x*blockDim.x + threadIdx.x)*stride; odata[xid] = idata[xid]; } Figure 6 illustrates such a situation; in this case, threads within a warp access words in memory with a stride of 2. This action leads to a load of eight L2 cache segments per warp on the Tesla V100 (compute capability 7.0). Figure 6 Adjacent threads accessing memory with a stride of 2 A stride of 2 results in a 50% of load/store efficiency since half the elements in the transaction are not used and represent wasted bandwidth. As the stride increases, the effective bandwidth decreases until the point where 32 32-byte segments are loaded for the 32 threads in a warp, as indicated in Figure 7. Figure 7 Performance of strideCopy kernel As illustrated in Figure 7, non-unit-stride global memory accesses should be avoided whenever possible. One method for doing so utilizes shared memory, which is discussed in the next section. 9.2.2. L2 Cache Starting with CUDA 11.0, devices of compute capability 8.0 and above have the capability to influence persistence of data in the L2 cache. Because L2 cache is on-chip, it potentially provides higher bandwidth and lower latency accesses to global memory. For more details refer to the L2 Access Management section in the CUDA C++ Programming Guide. 9.2.2.1. L2 Cache Access Window When a CUDA kernel accesses a data region in the global memory repeatedly, such data accesses can be considered to be persisting. On the other hand, if the data is only accessed once, such data accesses can be considered to be streaming. A portion of the L2 cache can be set aside for persistent accesses to a data region in global memory. If this set-aside portion is not used by persistent accesses, then streaming or normal data accesses can use it. The L2 cache set-aside size for persisting accesses may be adjusted, within limits: cudaGetDeviceProperties(&prop, device_id); cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, prop.persistingL2CacheMaxSize); /* Set aside max possible size of L2 cache for persisting accesses */ Mapping of user data to L2 set-aside portion can be controlled using an access policy window on a CUDA stream or CUDA graph kernel node. The example below shows how to use the access policy window on a CUDA stream. cudaStreamAttrValue stream_attribute; // Stream level attributes data structure stream_attribute.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(ptr); // Global Memory data pointer stream_attribute.accessPolicyWindow.num_bytes = num_bytes; // Number of bytes for persisting accesses. // (Must be less than cudaDeviceProp::accessPolicyMaxWindowSize) stream_attribute.accessPolicyWindow.hitRatio = 1.0; // Hint for L2 cache hit ratio for persisting accesses in the num_bytes region stream_attribute.accessPolicyWindow.hitProp = cudaAccessPropertyPersisting; // Type of access property on cache hit stream_attribute.accessPolicyWindow.missProp = cudaAccessPropertyStreaming; // Type of access property on cache miss. 
//Set the attributes to a CUDA stream of type cudaStream_t cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &stream_attribute); The access policy window requires a value for hitRatio and num_bytes. Depending on the value of the num_bytes parameter and the size of L2 cache, one may need to tune the value of hitRatio to avoid thrashing of L2 cache lines. 9.2.2.2. Tuning the Access Window Hit-Ratio The hitRatio parameter can be used to specify the fraction of accesses that receive the hitProp property. For example, if the hitRatio value is 0.6, 60% of the memory accesses in the global memory region [ptr..ptr+num_bytes) have the persisting property and 40% of the memory accesses have the streaming property. To understand the effect of hitRatio and num_bytes, we use a sliding window micro benchmark. This microbenchmark uses a 1024 MB region in GPU global memory. First, we set aside 30 MB of the L2 cache for persisting accesses using cudaDeviceSetLimit(), as discussed above. Then, as shown in the figure below, we specify that the accesses to the first freqSize * sizeof(int) bytes of the memory region are persistent. This data will thus use the L2 set-aside portion. In our experiment, we vary the size of this persistent data region from 10 MB to 60 MB to model various scenarios where data fits in or exceeds the available L2 set-aside portion of 30 MB. Note that the NVIDIA Tesla A100 GPU has 40 MB of total L2 cache capacity. Accesses to the remaining data of the memory region (i.e., streaming data) are considered normal or streaming accesses and will thus use the remaining 10 MB of the non set-aside L2 portion (unless part of the L2 set-aside portion is unused). Figure 8 Mapping Persistent data accesses to set-aside L2 in sliding window experiment Consider the following kernel code and access window parameters, as the implementation of the sliding window experiment. __global__ void kernel(int *data_persistent, int *data_streaming, int dataSize, int freqSize) { int tid = blockIdx.x * blockDim.x + threadIdx.x; /*Each CUDA thread accesses one element in the persistent data section and one element in the streaming data section. Because the size of the persistent memory region (freqSize * sizeof(int) bytes) is much smaller than the size of the streaming memory region (dataSize * sizeof(int) bytes), data in the persistent region is accessed more frequently*/ data_persistent[tid % freqSize] = 2 * data_persistent[tid % freqSize]; data_streaming[tid % dataSize] = 2 * data_streaming[tid % dataSize]; } stream_attribute.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(data_persistent); stream_attribute.accessPolicyWindow.num_bytes = freqSize * sizeof(int); //Number of bytes for persisting accesses in range 10-60 MB stream_attribute.accessPolicyWindow.hitRatio = 1.0; //Hint for cache hit ratio. Fixed value 1.0 The performance of the above kernel is shown in the chart below. When the persistent data region fits well into the 30 MB set-aside portion of the L2 cache, a performance increase of as much as 50% is observed. However, once the size of this persistent data region exceeds the size of the L2 set-aside cache portion, approximately 10% performance drop is observed due to thrashing of L2 cache lines. 
Figure 9 The performance of the sliding-window benchmark with fixed hit-ratio of 1.0 In order to optimize the performance, when the size of the persistent data is more than the size of the set-aside L2 cache portion, we tune the num_bytes and hitRatio parameters in the access window as below. stream_attribute.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(data_persistent); stream_attribute.accessPolicyWindow.num_bytes = 20*1024*1024; //20 MB stream_attribute.accessPolicyWindow.hitRatio = (20*1024*1024)/((float)freqSize*sizeof(int)); //Such that up to 20MB of data is resident. We fix the num_bytes in the access window to 20 MB and tune the hitRatio such that a random 20 MB of the total persistent data is resident in the L2 set-aside cache portion. The remaining portion of this persistent data will be accessed using the streaming property. This helps in reducing cache thrashing. The results are shown in the chart below, where we see good performance regardless of whether the persistent data fits in the L2 set-aside or not. Figure 10 The performance of the sliding-window benchmark with tuned hit-ratio 9.2.3. Shared Memory Because it is on-chip, shared memory has much higher bandwidth and lower latency than local and global memory - provided there are no bank conflicts between the threads, as detailed in the following section. 9.2.3.1. Shared Memory and Memory Banks To achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (banks) that can be accessed simultaneously. Therefore, any memory load or store of n addresses that spans n distinct memory banks can be serviced simultaneously, yielding an effective bandwidth that is n times as high as the bandwidth of a single bank. However, if multiple addresses of a memory request map to the same memory bank, the accesses are serialized. The hardware splits a memory request that has bank conflicts into as many separate conflict-free requests as necessary, decreasing the effective bandwidth by a factor equal to the number of separate memory requests. The one exception here is when multiple threads in a warp address the same shared memory location, resulting in a broadcast. In this case, multiple broadcasts from different banks are coalesced into a single multicast from the requested shared memory locations to the threads. To minimize bank conflicts, it is important to understand how memory addresses map to memory banks and how to optimally schedule memory requests. On devices of compute capability 5.x or newer, each bank has a bandwidth of 32 bits every clock cycle, and successive 32-bit words are assigned to successive banks. The warp size is 32 threads and the number of banks is also 32, so bank conflicts can occur between any threads in the warp. See Compute Capability 5.x for further details. 9.2.3.2. Shared Memory in Matrix Multiplication (C=AB) Shared memory enables cooperation between threads in a block. When multiple threads in a block use the same data from global memory, shared memory can be used to access the data from global memory only once. Shared memory can also be used to avoid uncoalesced memory accesses by loading and storing data in a coalesced pattern from global memory and then reordering it in shared memory. Aside from memory bank conflicts, there is no penalty for non-sequential or unaligned accesses by a warp in shared memory. 
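Before working through the matrix-multiplication example, the short host-side snippet below illustrates the bank arithmetic described in the previous section; it is not code from this guide, and the tile widths are chosen only for illustration. It shows why a warp reading a column of a 32x32 float tile hits a single bank, and why padding the tile to 32x33 spreads the same accesses over all 32 banks (the same padding trick reappears in the C = AAT example later in this section).

#include <cstdio>

// Bank mapping described above: 32-bit word i of shared memory lives in bank i % 32.
constexpr int kNumBanks = 32;

// Bank holding element [row][col] of a row-major float tile with `width` columns.
constexpr int bankOf(int row, int col, int width) {
  return (row * width + col) % kNumBanks;
}

int main() {
  // A warp reading one column of a 32x32 float tile: every thread maps to the
  // same bank, a 32-way conflict.
  printf("32x32 tile, column 5: banks %d, %d, ..., %d\n",
         bankOf(0, 5, 32), bankOf(1, 5, 32), bankOf(31, 5, 32));  // 5, 5, ..., 5

  // The same column access on a 32x33 (padded) tile: each thread maps to a
  // different bank, so the request is conflict-free.
  printf("32x33 tile, column 5: banks %d, %d, ..., %d\n",
         bankOf(0, 5, 33), bankOf(1, 5, 33), bankOf(31, 5, 33));  // 5, 6, ..., 4
  return 0;
}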
The use of shared memory is illustrated via the simple example of a matrix multiplication C = AB for the case with A of dimension Mxw, B of dimension wxN, and C of dimension MxN. To keep the kernels simple, M and N are multiples of 32, since the warp size (w) is 32 for current devices. A natural decomposition of the problem is to use a block and tile size of wxw threads. Therefore, in terms of wxw tiles, A is a column matrix, B is a row matrix, and C is their outer product; see Figure 11. A grid of N/w by M/w blocks is launched, where each thread block calculates the elements of a different tile in C from a single tile of A and a single tile of B. Figure 11 Block-column matrix multiplied by block-row matrix. Block-column matrix (A) multiplied by block-row matrix (B) with resulting product matrix (C). To do this, the simpleMultiply kernel (Unoptimized matrix multiplication) calculates the output elements of a tile of matrix C. Unoptimized matrix multiplication __global__ void simpleMultiply(float *a, float* b, float *c, int N) { int row = blockIdx.y * blockDim.y + threadIdx.y; int col = blockIdx.x * blockDim.x + threadIdx.x; float sum = 0.0f; for (int i = 0; i < TILE_DIM; i++) { sum += a[row*TILE_DIM+i] * b[i*N+col]; } c[row*N+col] = sum; } In Unoptimized matrix multiplication, a, b, and c are pointers to global memory for the matrices A, B, and C, respectively; blockDim.x, blockDim.y, and TILE_DIM are all equal to w. Each thread in the wxw-thread block calculates one element in a tile of C. row and col are the row and column of the element in C being calculated by a particular thread. The for loop over i multiplies a row of A by a column of B, which is then written to C. The effective bandwidth of this kernel is 119.9 GB/s on an NVIDIA Tesla V100. To analyze performance, it is necessary to consider how warps access global memory in the for loop. Each warp of threads calculates one row of a tile of C, which depends on a single row of A and an entire tile of B as illustrated in Figure 12. Figure 12 Computing a row of a tile. Computing a row of a tile in C using one row of A and an entire tile of B. For each iteration i of the for loop, the threads in a warp read a row of the B tile, which is a sequential and coalesced access for all compute capabilities. However, for each iteration i, all threads in a warp read the same value from global memory for matrix A, as the index row*TILE_DIM+i is constant within a warp. Even though such an access requires only 1 transaction on devices of compute capability 2.0 or higher, there is wasted bandwidth in the transaction, because only one 4-byte word out of 8 words in a 32-byte cache segment is used. We can reuse this cache line in subsequent iterations of the loop, and we would eventually utilize all 8 words; however, when many warps execute on the same multiprocessor simultaneously, as is generally the case, the cache line may easily be evicted from the cache between iterations i and i+1. The performance on a device of any compute capability can be improved by reading a tile of A into shared memory as shown in Using shared memory to improve the global memory load efficiency in matrix multiplication. 
Using shared memory to improve the global memory load efficiency in matrix multiplication __global__ void coalescedMultiply(float *a, float* b, float *c, int N) { __shared__ float aTile[TILE_DIM][TILE_DIM]; int row = blockIdx.y * blockDim.y + threadIdx.y; int col = blockIdx.x * blockDim.x + threadIdx.x; float sum = 0.0f; aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x]; __syncwarp(); for (int i = 0; i < TILE_DIM; i++) { sum += aTile[threadIdx.y][i]* b[i*N+col]; } c[row*N+col] = sum; } In Using shared memory to improve the global memory load efficiency in matrix multiplication, each element in a tile of A is read from global memory only once, in a fully coalesced fashion (with no wasted bandwidth), to shared memory. Within each iteration of the for loop, a value in shared memory is broadcast to all threads in a warp. Instead of a __syncthreads()synchronization barrier call, a __syncwarp() is sufficient after reading the tile of A into shared memory because only threads within the warp that write the data into shared memory read this data. This kernel has an effective bandwidth of 144.4 GB/s on an NVIDIA Tesla V100. This illustrates the use of the shared memory as a user-managed cache when the hardware L1 cache eviction policy does not match up well with the needs of the application or when L1 cache is not used for reads from global memory. A further improvement can be made to how Using shared memory to improve the global memory load efficiency in matrix multiplication deals with matrix B. In calculating each of the rows of a tile of matrix C, the entire tile of B is read. The repeated reading of the B tile can be eliminated by reading it into shared memory once (Improvement by reading additional data into shared memory). Improvement by reading additional data into shared memory __global__ void sharedABMultiply(float *a, float* b, float *c, int N) { __shared__ float aTile[TILE_DIM][TILE_DIM], bTile[TILE_DIM][TILE_DIM]; int row = blockIdx.y * blockDim.y + threadIdx.y; int col = blockIdx.x * blockDim.x + threadIdx.x; float sum = 0.0f; aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x]; bTile[threadIdx.y][threadIdx.x] = b[threadIdx.y*N+col]; __syncthreads(); for (int i = 0; i < TILE_DIM; i++) { sum += aTile[threadIdx.y][i]* bTile[i][threadIdx.x]; } c[row*N+col] = sum; } Note that in Improvement by reading additional data into shared memory, a __syncthreads() call is required after reading the B tile because a warp reads data from shared memory that were written to shared memory by different warps. The effective bandwidth of this routine is 195.5 GB/s on an NVIDIA Tesla V100. Note that the performance improvement is not due to improved coalescing in either case, but to avoiding redundant transfers from global memory. The results of the various optimizations are summarized in Table 2. Optimization NVIDIA Tesla V100 No optimization 119.9 GB/s Coalesced using shared memory to store a tile of A 144.4 GB/s Using shared memory to eliminate redundant reads of a tile of B 195.5 GB/s Note Medium Priority: Use shared memory to avoid redundant transfers from global memory. 9.2.3.3. Shared Memory in Matrix Multiplication (C=AAT) A variant of the previous matrix multiplication can be used to illustrate how strided accesses to global memory, as well as shared memory bank conflicts, are handled. This variant simply uses the transpose of A in place of B, so C = AAT. A simple implementation for C = AAT is shown in Unoptimized handling of strided accesses to global memory. 
Unoptimized handling of strided accesses to global memory __global__ void simpleMultiply(float *a, float *c, int M) { int row = blockIdx.y * blockDim.y + threadIdx.y; int col = blockIdx.x * blockDim.x + threadIdx.x; float sum = 0.0f; for (int i = 0; i < TILE_DIM; i++) { sum += a[row*TILE_DIM+i] * a[col*TILE_DIM+i]; } c[row*M+col] = sum; } In the example above, the row-th, col-th element of C is obtained by taking the dot product of the row-th and col-th rows of A. The effective bandwidth for this kernel is 12.8 GB/s on an NVIDIA Tesla V100. These results are substantially lower than the corresponding measurements for the C = AB kernel. The difference is in how threads in a half warp access elements of A in the second term, a[col*TILE_DIM+i], for each iteration i. For a warp of threads, col represents sequential columns of the transpose of A, and therefore col*TILE_DIM represents a strided access of global memory with a stride of w, resulting in plenty of wasted bandwidth. The way to avoid strided access is to use shared memory as before, except in this case a warp reads a row of A into a column of a shared memory tile, as shown in An optimized handling of strided accesses using coalesced reads from global memory. An optimized handling of strided accesses using coalesced reads from global memory __global__ void coalescedMultiply(float *a, float *c, int M) { __shared__ float aTile[TILE_DIM][TILE_DIM], transposedTile[TILE_DIM][TILE_DIM]; int row = blockIdx.y * blockDim.y + threadIdx.y; int col = blockIdx.x * blockDim.x + threadIdx.x; float sum = 0.0f; aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x]; transposedTile[threadIdx.x][threadIdx.y] = a[(blockIdx.x*blockDim.x + threadIdx.y)*TILE_DIM + threadIdx.x]; __syncthreads(); for (int i = 0; i < TILE_DIM; i++) { sum += aTile[threadIdx.y][i]* transposedTile[i][threadIdx.x]; } c[row*M+col] = sum; } An optimized handling of strided accesses using coalesced reads from global memory uses the shared transposedTile to avoid uncoalesced accesses in the second term in the dot product and the shared aTile technique from the previous example to avoid uncoalesced accesses in the first term. The effective bandwidth of this kernel is 140.2 GB/s on an NVIDIA Tesla V100.These results are lower than those obtained by the final kernel for C = AB. The cause of the difference is shared memory bank conflicts. The reads of elements in transposedTile within the for loop are free of conflicts, because threads of each half warp read across rows of the tile, resulting in unit stride across the banks. However, bank conflicts occur when copying the tile from global memory into shared memory. To enable the loads from global memory to be coalesced, data are read from global memory sequentially. However, this requires writing to shared memory in columns, and because of the use of wxw tiles in shared memory, this results in a stride between threads of w banks - every thread of the warp hits the same bank (Recall that w is selected as 32). These many-way bank conflicts are very expensive. The simple remedy is to pad the shared memory array so that it has an extra column, as in the following line of code. __shared__ float transposedTile[TILE_DIM][TILE_DIM+1]; This padding eliminates the conflicts entirely, because now the stride between threads is w+1 banks (i.e., 33 for current devices), which, due to modulo arithmetic used to compute bank indices, is equivalent to a unit stride. 
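For completeness, here is a sketch of the previous kernel with the padding applied. Apart from the extra column in the transposedTile declaration, nothing changes; TILE_DIM is assumed to be defined as the 32-thread tile width used throughout these examples.

__global__ void coalescedMultiplyPadded(float *a, float *c, int M) {
  __shared__ float aTile[TILE_DIM][TILE_DIM];
  __shared__ float transposedTile[TILE_DIM][TILE_DIM + 1];  // extra column removes the bank conflicts
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  float sum = 0.0f;
  aTile[threadIdx.y][threadIdx.x] = a[row * TILE_DIM + threadIdx.x];
  transposedTile[threadIdx.x][threadIdx.y] =
      a[(blockIdx.x * blockDim.x + threadIdx.y) * TILE_DIM + threadIdx.x];
  __syncthreads();
  for (int i = 0; i < TILE_DIM; i++) {
    sum += aTile[threadIdx.y][i] * transposedTile[i][threadIdx.x];
  }
  c[row * M + col] = sum;
}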
After this change, the effective bandwidth is 199.4 GB/s on an NVIDIA Tesla V100, which is comparable to the results from the last C = AB kernel. The results of these optimizations are summarized in Table 3. Optimization NVIDIA Tesla V100 No optimization 12.8 GB/s Using shared memory to coalesce global reads 140.2 GB/s Removing bank conflicts 199.4 GB/s These results should be compared with those in Table 2. As can be seen from these tables, judicious use of shared memory can dramatically improve performance. The examples in this section have illustrated three reasons to use shared memory: To enable coalesced accesses to global memory, especially to avoid large strides (for general matrices, strides are much larger than 32) To eliminate (or reduce) redundant loads from global memory To avoid wasted bandwidth 9.2.3.4. Asynchronous Copy from Global Memory to Shared Memory CUDA 11.0 introduces an async-copy feature that can be used within device code to explicitly manage the asynchronous copying of data from global memory to shared memory. This feature enables CUDA kernels to overlap copying data from global to shared memory with computation. It also avoids an intermediary register file access traditionally present between the global memory read and the shared memory write. For more details refer to the memcpy_async section in the CUDA C++ Programming Guide. To understand the performance difference between synchronous copy and asynchronous copy of data from global memory to shared memory, consider the following micro benchmark CUDA kernels for demonstrating the synchronous and asynchronous approaches. Asynchronous copies are hardware accelerated for NVIDIA A100 GPU. template <typename T> __global__ void pipeline_kernel_sync(T *global, uint64_t *clock, size_t copy_count) { extern __shared__ char s[]; T *shared = reinterpret_cast<T *>(s); uint64_t clock_start = clock64(); for (size_t i = 0; i < copy_count; ++i) { shared[blockDim.x * i + threadIdx.x] = global[blockDim.x * i + threadIdx.x]; } uint64_t clock_end = clock64(); atomicAdd(reinterpret_cast<unsigned long long *>(clock), clock_end - clock_start); } template <typename T> __global__ void pipeline_kernel_async(T *global, uint64_t *clock, size_t copy_count) { extern __shared__ char s[]; T *shared = reinterpret_cast<T *>(s); uint64_t clock_start = clock64(); //pipeline pipe; for (size_t i = 0; i < copy_count; ++i) { __pipeline_memcpy_async(&shared[blockDim.x * i + threadIdx.x], &global[blockDim.x * i + threadIdx.x], sizeof(T)); } __pipeline_commit(); __pipeline_wait_prior(0); uint64_t clock_end = clock64(); atomicAdd(reinterpret_cast<unsigned long long *>(clock), clock_end - clock_start); } The synchronous version for the kernel loads an element from global memory to an intermediate register and then stores the intermediate register value to shared memory. In the asynchronous version of the kernel, instructions to load from global memory and store directly into shared memory are issued as soon as __pipeline_memcpy_async() function is called. The __pipeline_wait_prior(0) will wait until all the instructions in the pipe object have been executed. Using asynchronous copies does not use any intermediate register. Not using intermediate registers can help reduce register pressure and can increase kernel occupancy. Data copied from global memory to shared memory using asynchronous copy instructions can be cached in the L1 cache or the L1 cache can be optionally bypassed. 
If individual CUDA threads are copying elements of 16 bytes, the L1 cache can be bypassed. This difference is illustrated in Figure 13. Figure 13 Comparing Synchronous vs Asynchronous Copy from Global Memory to Shared Memory We evaluate the performance of both kernels using elements of size 4B, 8B and 16B per thread i.e., using int, int2 and int4 for the template parameter. We adjust the copy_count in the kernels such that each thread block copies from 512 bytes up to 48 MB. The performance of the kernels is shown in Figure 14. Figure 14 Comparing Performance of Synchronous vs Asynchronous Copy from Global Memory to Shared Memory From the performance chart, the following observations can be made for this experiment. Best performance with synchronous copy is achieved when the copy_count parameter is a multiple of 4 for all three element sizes. The compiler can optimize groups of 4 load and store instructions. This is evident from the saw tooth curves. Asynchronous copy achieves better performance in nearly all cases. The async-copy does not require the copy_count parameter to be a multiple of 4, to maximize performance through compiler optimizations. Overall, best performance is achieved when using asynchronous copies with an element of size 8 or 16 bytes. 9.2.4. Local Memory Local memory is so named because its scope is local to the thread, not because of its physical location. In fact, local memory is off-chip. Hence, access to local memory is as expensive as access to global memory. In other words, the term local in the name does not imply faster access. Local memory is used only to hold automatic variables. This is done by the nvcc compiler when it determines that there is insufficient register space to hold the variable. Automatic variables that are likely to be placed in local memory are large structures or arrays that would consume too much register space and arrays that the compiler determines may be indexed dynamically. Inspection of the PTX assembly code (obtained by compiling with -ptx or -keep command-line options to nvcc) reveals whether a variable has been placed in local memory during the first compilation phases. If it has, it will be declared using the .local mnemonic and accessed using the ld.local and st.local mnemonics. If it has not, subsequent compilation phases might still decide otherwise, if they find the variable consumes too much register space for the targeted architecture. There is no way to check this for a specific variable, but the compiler reports total local memory usage per kernel (lmem) when run with the--ptxas-options=-v option. 9.2.5. Texture Memory The read-only texture memory space is cached. Therefore, a texture fetch costs one device memory read only on a cache miss; otherwise, it just costs one read from the texture cache. The texture cache is optimized for 2D spatial locality, so threads of the same warp that read texture addresses that are close together will achieve best performance. Texture memory is also designed for streaming fetches with a constant latency; that is, a cache hit reduces DRAM bandwidth demand, but not fetch latency. In certain addressing situations, reading device memory through texture fetching can be an advantageous alternative to reading device memory from global or constant memory. 9.2.5.1. 
Additional Texture Capabilities If textures are fetched using tex1D(),tex2D(), or tex3D() rather than tex1Dfetch(), the hardware provides other capabilities that might be useful for some applications such as image processing, as shown in Table 4. Feature Use Caveat Filtering Fast, low-precision interpolation between texels Valid only if the texture reference returns floating-point data Normalized texture coordinates Resolution-independent coding None Addressing modes Automatic handling of boundary cases1 Can be used only with normalized texture coordinates 1 The automatic handling of boundary cases in the bottom row of Table 4 refers to how a texture coordinate is resolved when it falls outside the valid addressing range. There are two options: clamp and wrap. If x is the coordinate and N is the number of texels for a one-dimensional texture, then with clamp, x is replaced by 0 if x < 0 and by 1-1/N if 1 <x. With wrap, x is replaced by frac(x) where frac(x) = x - floor(x). Floor returns the largest integer less than or equal to x. So, in clamp mode where N = 1, an x of 1.3 is clamped to 1.0; whereas in wrap mode, it is converted to 0.3 Within a kernel call, the texture cache is not kept coherent with respect to global memory writes, so texture fetches from addresses that have been written via global stores in the same kernel call return undefined data. That is, a thread can safely read a memory location via texture if the location has been updated by a previous kernel call or memory copy, but not if it has been previously updated by the same thread or another thread within the same kernel call. 9.2.6. Constant Memory There is a total of 64 KB constant memory on a device. The constant memory space is cached. As a result, a read from constant memory costs one memory read from device memory only on a cache miss; otherwise, it just costs one read from the constant cache. Accesses to different addresses by threads within a warp are serialized, thus the cost scales linearly with the number of unique addresses read by all threads within a warp. As such, the constant cache is best when threads in the same warp accesses only a few distinct locations. If all threads of a warp access the same location, then constant memory can be as fast as a register access. 9.2.7. Registers Generally, accessing a register consumes zero extra clock cycles per instruction, but delays may occur due to register read-after-write dependencies and register memory bank conflicts. The compiler and hardware thread scheduler will schedule instructions as optimally as possible to avoid register memory bank conflicts. An application has no direct control over these bank conflicts. In particular, there is no register-related reason to pack data into vector data types such as float4 or int4 types. 9.2.7.1. Register Pressure Register pressure occurs when there are not enough registers available for a given task. Even though each multiprocessor contains thousands of 32-bit registers (see Features and Technical Specifications of the CUDA C++ Programming Guide), these are partitioned among concurrent threads. To prevent the compiler from allocating too many registers, use the -maxrregcount=N compiler command-line option or the launch bounds kernel definition qualifier (see Execution Configuration of the CUDA C++ Programming Guide) to control the maximum number of registers to allocated per thread. 9.3. Allocation Device memory allocation and de-allocation via cudaMalloc() and cudaFree() are expensive operations. 
It is recommended to use cudaMallocAsync() and cudaFreeAsync() which are stream ordered pool allocators to manage device memory. 9.4. NUMA Best Practices Some recent Linux distributions enable automatic NUMA balancing (or “AutoNUMA”) by default. In some instances, operations performed by automatic NUMA balancing may degrade the performance of applications running on NVIDIA GPUs. For optimal performance, users should manually tune the NUMA characteristics of their application. The optimal NUMA tuning will depend on the characteristics and desired hardware affinities of each application and node, but in general applications computing on NVIDIA GPUs are advised to choose a policy that disables automatic NUMA balancing. For example, on IBM Newell POWER9 nodes (where the CPUs correspond to NUMA nodes 0 and 8), use: numactl --membind=0,8 to bind memory allocations to the CPUs. 10. Execution Configuration Optimizations One of the keys to good performance is to keep the multiprocessors on the device as busy as possible. A device in which work is poorly balanced across the multiprocessors will deliver suboptimal performance. Hence, it’s important to design your application to use threads and blocks in a way that maximizes hardware utilization and to limit practices that impede the free distribution of work. A key concept in this effort is occupancy, which is explained in the following sections. Hardware utilization can also be improved in some cases by designing your application so that multiple, independent kernels can execute at the same time. Multiple kernels executing at the same time is known as concurrent kernel execution. Concurrent kernel execution is described below. Another important concept is the management of system resources allocated for a particular task. How to manage this resource utilization is discussed in the final sections of this chapter. 10.1. Occupancy Thread instructions are executed sequentially in CUDA, and, as a result, executing other warps when one warp is paused or stalled is the only way to hide latencies and keep the hardware busy. Some metric related to the number of active warps on a multiprocessor is therefore important in determining how effectively the hardware is kept busy. This metric is occupancy. Occupancy is the ratio of the number of active warps per multiprocessor to the maximum number of possible active warps. (To determine the latter number, see the deviceQuery CUDA Sample or refer to Compute Capabilities.) Another way to view occupancy is the percentage of the hardware’s ability to process warps that is actively in use. Higher occupancy does not always equate to higher performance-there is a point above which additional occupancy does not improve performance. However, low occupancy always interferes with the ability to hide memory latency, resulting in performance degradation. Per thread resources required by a CUDA kernel might limit the maximum block size in an unwanted way. In order to maintain forward compatibility to future hardware and toolkits and to ensure that at least one thread block can run on an SM, developers should include the single argument __launch_bounds__(maxThreadsPerBlock) which specifies the largest block size that the kernel will be launched with. Failure to do so could lead to “too many resources requested for launch” errors. Providing the two argument version of __launch_bounds__(maxThreadsPerBlock,minBlocksPerMultiprocessor) can improve performance in some cases. 
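As an illustration, the qualifier is attached to the kernel definition as in the sketch below; the specific bounds of 256 threads per block and 2 blocks per multiprocessor are assumptions chosen for the example, not recommendations from this guide.

#define MAX_THREADS_PER_BLOCK 256
#define MIN_BLOCKS_PER_SM 2

// Single-argument form: guarantees the kernel can be launched with up to 256 threads per block.
__global__ void __launch_bounds__(MAX_THREADS_PER_BLOCK)
scale(float *data, float s, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= s;
}

// Two-argument form: additionally asks the compiler to limit register use so that
// at least 2 blocks of 256 threads can be resident on a multiprocessor.
__global__ void __launch_bounds__(MAX_THREADS_PER_BLOCK, MIN_BLOCKS_PER_SM)
scaleWithMinBlocks(float *data, float s, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= s;
}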
The right value for minBlocksPerMultiprocessor should be determined using a detailed per kernel analysis. 10.1.1. Calculating Occupancy One of several factors that determine occupancy is register availability. Register storage enables threads to keep local variables nearby for low-latency access. However, the set of registers (known as the register file) is a limited commodity that all threads resident on a multiprocessor must share. Registers are allocated to an entire block all at once. So, if each thread block uses many registers, the number of thread blocks that can be resident on a multiprocessor is reduced, thereby lowering the occupancy of the multiprocessor. The maximum number of registers per thread can be set manually at compilation time per-file using the -maxrregcount option or per-kernel using the __launch_bounds__ qualifier (see Register Pressure). For purposes of calculating occupancy, the number of registers used by each thread is one of the key factors. For example, on devices of CUDA Compute Capability 7.0 each multiprocessor has 65,536 32-bit registers and can have a maximum of 2048 simultaneous threads resident (64 warps x 32 threads per warp). This means that in one of these devices, for a multiprocessor to have 100% occupancy, each thread can use at most 32 registers. However, this approach of determining how register count affects occupancy does not take into account the register allocation granularity. For example, on a device of compute capability 7.0, a kernel with 128-thread blocks using 37 registers per thread results in an occupancy of 75% with 12 active 128-thread blocks per multiprocessor, whereas a kernel with 320-thread blocks using the same 37 registers per thread results in an occupancy of 63% because only four 320-thread blocks can reside on a multiprocessor. Furthermore, register allocations are rounded up to the nearest 256 registers per warp. The number of registers available, the maximum number of simultaneous threads resident on each multiprocessor, and the register allocation granularity vary over different compute capabilities. Because of these nuances in register allocation and the fact that a multiprocessor’s shared memory is also partitioned between resident thread blocks, the exact relationship between register usage and occupancy can be difficult to determine. The --ptxas-options=-v option of nvcc details the number of registers used per thread for each kernel. See Hardware Multithreading of the CUDA C++ Programming Guide for the register allocation formulas for devices of various compute capabilities and Features and Technical Specifications of the CUDA C++ Programming Guide for the total number of registers available on those devices. Alternatively, NVIDIA provides an occupancy calculator as part of Nsight Compute; refer to https://docs.nvidia.com/nsight-compute/NsightCompute/index.html#occupancy-calculator. Figure 15 Using the CUDA Occupancy Calculator to project GPU multiprocessor occupancy An application can also use the Occupancy API from the CUDA Runtime, e.g. cudaOccupancyMaxActiveBlocksPerMultiprocessor, to dynamically select launch configurations based on runtime parameters. 10.2. Hiding Register Dependencies Note Medium Priority: To hide latency arising from register dependencies, maintain sufficient numbers of active threads per multiprocessor (i.e., sufficient occupancy). Register dependencies arise when an instruction uses a result stored in a register written by an instruction before it.
The latency of most arithmetic instructions is typically 4 cycles on devices of compute capability 7.0. So threads must wait approximatly 4 cycles before using an arithmetic result. However, this latency can be completely hidden by the execution of threads in other warps. See Registers for details. 10.3. Thread and Block Heuristics Note Medium Priority: The number of threads per block should be a multiple of 32 threads, because this provides optimal computing efficiency and facilitates coalescing. The dimension and size of blocks per grid and the dimension and size of threads per block are both important factors. The multidimensional aspect of these parameters allows easier mapping of multidimensional problems to CUDA and does not play a role in performance. As a result, this section discusses size but not dimension. Latency hiding and occupancy depend on the number of active warps per multiprocessor, which is implicitly determined by the execution parameters along with resource (register and shared memory) constraints. Choosing execution parameters is a matter of striking a balance between latency hiding (occupancy) and resource utilization. Choosing the execution configuration parameters should be done in tandem; however, there are certain heuristics that apply to each parameter individually. When choosing the first execution configuration parameter-the number of blocks per grid, or grid size - the primary concern is keeping the entire GPU busy. The number of blocks in a grid should be larger than the number of multiprocessors so that all multiprocessors have at least one block to execute. Furthermore, there should be multiple active blocks per multiprocessor so that blocks that aren’t waiting for a __syncthreads() can keep the hardware busy. This recommendation is subject to resource availability; therefore, it should be determined in the context of the second execution parameter - the number of threads per block, or block size - as well as shared memory usage. To scale to future devices, the number of blocks per kernel launch should be in the thousands. When choosing the block size, it is important to remember that multiple concurrent blocks can reside on a multiprocessor, so occupancy is not determined by block size alone. In particular, a larger block size does not imply a higher occupancy. As mentioned in Occupancy, higher occupancy does not always equate to better performance. For example, improving occupancy from 66 percent to 100 percent generally does not translate to a similar increase in performance. A lower occupancy kernel will have more registers available per thread than a higher occupancy kernel, which may result in less register spilling to local memory; in particular, with a high degree of exposed instruction-level parallelism (ILP) it is, in some cases, possible to fully cover latency with a low occupancy. There are many such factors involved in selecting block size, and inevitably some experimentation is required. However, a few rules of thumb should be followed: Threads per block should be a multiple of warp size to avoid wasting computation on under-populated warps and to facilitate coalescing. A minimum of 64 threads per block should be used, and only if there are multiple concurrent blocks per multiprocessor. Between 128 and 256 threads per block is a good initial range for experimentation with different block sizes. Use several smaller thread blocks rather than one large thread block per multiprocessor if latency affects performance. 
This is particularly beneficial to kernels that frequently call __syncthreads(). Note that when a thread block allocates more registers than are available on a multiprocessor, the kernel launch fails, as it will when too much shared memory or too many threads are requested. 10.4. Effects of Shared Memory Shared memory can be helpful in several situations, such as helping to coalesce or eliminate redundant access to global memory. However, it also can act as a constraint on occupancy. In many cases, the amount of shared memory required by a kernel is related to the block size that was chosen, but the mapping of threads to shared memory elements does not need to be one-to-one. For example, it may be desirable to use a 64x64 element shared memory array in a kernel, but because the maximum number of threads per block is 1024, it is not possible to launch a kernel with 64x64 threads per block. In such cases, kernels with 32x32 or 64x16 threads can be launched with each thread processing four elements of the shared memory array. The approach of using a single thread to process multiple elements of a shared memory array can be beneficial even if limits such as threads per block are not an issue. This is because some operations common to each element can be performed by the thread once, amortizing the cost over the number of shared memory elements processed by the thread. A useful technique to determine the sensitivity of performance to occupancy is through experimentation with the amount of dynamically allocated shared memory, as specified in the third parameter of the execution configuration. By simply increasing this parameter (without modifying the kernel), it is possible to effectively reduce the occupancy of the kernel and measure its effect on performance. 10.5. Concurrent Kernel Execution As described in Asynchronous and Overlapping Transfers with Computation, CUDA streams can be used to overlap kernel execution with data transfers. On devices that are capable of concurrent kernel execution, streams can also be used to execute multiple kernels simultaneously to more fully take advantage of the device’s multiprocessors. Whether a device has this capability is indicated by the concurrentKernels field of the cudaDeviceProp structure (or listed in the output of the deviceQuery CUDA Sample). Non-default streams (streams other than stream 0) are required for concurrent execution because kernel calls that use the default stream begin only after all preceding calls on the device (in any stream) have completed, and no operation on the device (in any stream) commences until they are finished. The following example illustrates the basic technique. Because kernel1 and kernel2 are executed in different, non-default streams, a capable device can execute the kernels at the same time. cudaStreamCreate(&stream1); cudaStreamCreate(&stream2); kernel1<<<grid, block, 0, stream1>>>(data_1); kernel2<<<grid, block, 0, stream2>>>(data_2); 10.6. Multiple contexts CUDA work occurs within a process space for a particular GPU known as a context. The context encapsulates kernel launches and memory allocations for that GPU as well as supporting constructs such as the page tables. The context is explicit in the CUDA Driver API but is entirely implicit in the CUDA Runtime API, which creates and manages contexts automatically. With the CUDA Driver API, a CUDA application process can potentially create more than one context for a given GPU. 
If multiple CUDA application processes access the same GPU concurrently, this almost always implies multiple contexts, since a context is tied to a particular host process unless Multi-Process Service is in use. While multiple contexts (and their associated resources such as global memory allocations) can be allocated concurrently on a given GPU, only one of these contexts can execute work at any given moment on that GPU; contexts sharing the same GPU are time-sliced. Creating additional contexts incurs memory overhead for per-context data and time overhead for context switching. Furthermore, the need for context switching can reduce utilization when work from several contexts could otherwise execute concurrently (see also Concurrent Kernel Execution). Therefore, it is best to avoid multiple contexts per GPU within the same CUDA application. To assist with this, the CUDA Driver API provides methods to access and manage a special context on each GPU called the primary context. These are the same contexts used implicitly by the CUDA Runtime when there is not already a current context for a thread. // When initializing the program/library CUcontext ctx; cuDevicePrimaryCtxRetain(&ctx, dev); // When the program/library launches work cuCtxPushCurrent(ctx); kernel<<<...>>>(...); cuCtxPopCurrent(&ctx); // When the program/library is finished with the context cuDevicePrimaryCtxRelease(dev); Note NVIDIA-SMI can be used to configure a GPU for exclusive process mode, which limits the number of contexts per GPU to one. This context can be current to as many threads as desired within the creating process, and cuDevicePrimaryCtxRetain will fail if a non-primary context that was created with the CUDA driver API already exists on the device. 11. Instruction Optimization Awareness of how instructions are executed often permits low-level optimizations that can be useful, especially in code that is run frequently (the so-called hot spot in a program). Best practices suggest that this optimization be performed after all higher-level optimizations have been completed. 11.1. Arithmetic Instructions Single-precision floats provide the best performance, and their use is highly encouraged. The throughput of individual arithmetic operations is detailed in the CUDA C++ Programming Guide. 11.1.1. Division Modulo Operations Note Low Priority: Use shift operations to avoid expensive division and modulo calculations. Integer division and modulo operations are particularly costly and should be avoided or replaced with bitwise operations whenever possible: If \(n\) is a power of 2, ( \(i/n\) ) is equivalent to ( \(i \gg {log2}(n)\) ) and ( \(i\% n\) ) is equivalent to ( \(i\&\left( {n - 1} \right)\) ). The compiler will perform these conversions if n is literal. (For further information, refer to Performance Guidelines in the CUDA C++ Programming Guide). 11.1.2. Loop Counters Signed vs. Unsigned Note Low Medium Priority: Use signed integers rather than unsigned integers as loop counters. In the C language standard, unsigned integer overflow semantics are well defined, whereas signed integer overflow causes undefined results. Therefore, the compiler can optimize more aggressively with signed arithmetic than it can with unsigned arithmetic. This is of particular note with loop counters: since it is common for loop counters to have values that are always positive, it may be tempting to declare the counters as unsigned. For slightly better performance, however, they should instead be declared as signed. 
For example, consider the following code: for (i = 0; i < n; i++) { out[i] = in[offset + stride*i]; } Here, the sub-expression stride*i could overflow a 32-bit integer, so if i is declared as unsigned, the overflow semantics prevent the compiler from using some optimizations that might otherwise have applied, such as strength reduction. If instead i is declared as signed, where the overflow semantics are undefined, the compiler has more leeway to use these optimizations. 11.1.3. Reciprocal Square Root The reciprocal square root should always be invoked explicitly as rsqrtf() for single precision and rsqrt() for double precision. The compiler optimizes 1.0f/sqrtf(x) into rsqrtf() only when this does not violate IEEE-754 semantics. 11.1.4. Other Arithmetic Instructions Note Low Priority: Avoid automatic conversion of doubles to floats. The compiler must on occasion insert conversion instructions, introducing additional execution cycles. This is the case for:
Functions operating on char or short whose operands generally need to be converted to an int
Double-precision floating-point constants (defined without any type suffix) used as input to single-precision floating-point computations
The latter case can be avoided by using single-precision floating-point constants, defined with an f suffix such as 3.141592653589793f, 1.0f, 0.5f. For single-precision code, use of the float type and the single-precision math functions are highly recommended. It should also be noted that the CUDA math library’s complementary error function, erfcf(), is particularly fast with full single-precision accuracy. 11.1.5. Exponentiation With Small Fractional Arguments For some fractional exponents, exponentiation can be accelerated significantly compared to the use of pow() by using square roots, cube roots, and their inverses. For those exponentiations where the exponent is not exactly representable as a floating-point number, such as 1/3, this can also provide much more accurate results, as use of pow() magnifies the initial representational error. The formulas in the table below are valid for x >= 0, x != -0, that is, signbit(x) == 0.

Computation | Formula
x^(1/9)     | r = rcbrt(rcbrt(x))
x^(-1/9)    | r = cbrt(rcbrt(x))
x^(1/6)     | r = rcbrt(rsqrt(x))
x^(-1/6)    | r = rcbrt(sqrt(x))
x^(1/4)     | r = rsqrt(rsqrt(x))
x^(-1/4)    | r = sqrt(rsqrt(x))
x^(1/3)     | r = cbrt(x)
x^(-1/3)    | r = rcbrt(x)
x^(1/2)     | r = sqrt(x)
x^(-1/2)    | r = rsqrt(x)
x^(2/3)     | r = cbrt(x); r = r*r
x^(-2/3)    | r = rcbrt(x); r = r*r
x^(3/4)     | r = sqrt(x); r = r*sqrt(r)
x^(-3/4)    | r = rsqrt(x); r = r*sqrt(r)
x^(7/6)     | r = x*rcbrt(rsqrt(x))
x^(-7/6)    | r = (1/x) * rcbrt(sqrt(x))
x^(5/4)     | r = x*rsqrt(rsqrt(x))
x^(-5/4)    | r = (1/x)*sqrt(rsqrt(x))
x^(4/3)     | r = x*cbrt(x)
x^(-4/3)    | r = (1/x)*rcbrt(x)
x^(3/2)     | r = x*sqrt(x)
x^(-3/2)    | r = (1/x)*rsqrt(x)

11.1.6. Math Libraries Note Medium Priority: Use the fast math library whenever speed trumps precision. Two types of runtime math operations are supported. They can be distinguished by their names: some have names with prepended underscores, whereas others do not (e.g., __functionName() versus functionName()). Functions following the __functionName() naming convention map directly to the hardware level. They are faster but provide somewhat lower accuracy (e.g., __sinf(x) and __expf(x)). Functions following functionName() naming convention are slower but have higher accuracy (e.g., sinf(x) and expf(x)). The throughput of __sinf(x), __cosf(x), and __expf(x) is much greater than that of sinf(x), cosf(x), and expf(x).
The latter (sinf(x), cosf(x), and expf(x)) become even more expensive (about an order of magnitude slower) if the magnitude of the argument x needs to be reduced. Moreover, in such cases, the argument-reduction code uses local memory, which can affect performance even more because of the high latency of local memory. More details are available in the CUDA C++ Programming Guide. Note also that whenever sine and cosine of the same argument are computed, the sincos family of instructions should be used to optimize performance:

__sincosf() for single-precision fast math (see next paragraph)

sincosf() for regular single-precision

sincos() for double precision

The -use_fast_math compiler option of nvcc coerces every functionName() call to the equivalent __functionName() call. It also disables single-precision denormal support and lowers the precision of single-precision division in general. This is an aggressive optimization that can both reduce numerical accuracy and alter special case handling. A more robust approach is to selectively introduce calls to fast intrinsic functions only if merited by performance gains and where altered behavior can be tolerated. Note this switch is effective only on single-precision floating point.

Note Medium Priority: Prefer faster, more specialized math functions over slower, more general ones when possible.

For small integer powers (e.g., x^2 or x^3), explicit multiplication is almost certainly faster than the use of general exponentiation routines such as pow(). While compiler optimization improvements continually seek to narrow this gap, explicit multiplication (or the use of an equivalent purpose-built inline function or macro) can have a significant advantage. This advantage is increased when several powers of the same base are needed (e.g., where both x^2 and x^5 are calculated in close proximity), as this aids the compiler in its common sub-expression elimination (CSE) optimization.

For exponentiation using base 2 or 10, use the functions exp2() or exp2f() and exp10() or exp10f() rather than the functions pow() or powf(). Both pow() and powf() are heavy-weight functions in terms of register pressure and instruction count due to the numerous special cases arising in general exponentiation and the difficulty of achieving good accuracy across the entire ranges of the base and the exponent. The functions exp2(), exp2f(), exp10(), and exp10f(), on the other hand, are similar to exp() and expf() in terms of performance, and can be as much as ten times faster than their pow()/powf() equivalents.

For exponentiation with an exponent of 1/3, use the cbrt() or cbrtf() function rather than the generic exponentiation functions pow() or powf(), as the former are significantly faster than the latter. Likewise, for exponentiation with an exponent of -1/3, use rcbrt() or rcbrtf().

Replace sin(π*<expr>) with sinpi(<expr>), cos(π*<expr>) with cospi(<expr>), and sincos(π*<expr>) with sincospi(<expr>). This is advantageous with regard to both accuracy and performance. As a particular example, to evaluate the sine function in degrees instead of radians, use sinpi(x/180.0). Similarly, the single-precision functions sinpif(), cospif(), and sincospif() should replace calls to sinf(), cosf(), and sincosf() when the function argument is of the form π*<expr>.
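Several of these substitutions can be collected into one short device routine; the following sketch is illustrative only (the function name fastSubstitutions is made up for this example and is not part of the CUDA math library).

__device__ float3 fastSubstitutions(float degrees, float x, float y) {
    float3 r;
    r.x = x * x * x;                 // instead of powf(x, 3.0f): explicit multiplication for a small integer power
    r.y = exp2f(y);                  // instead of powf(2.0f, y)
    r.z = sinpif(degrees / 180.0f);  // sine in degrees, instead of sinf(degrees * 3.14159265f / 180.0f)
    return r;
}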
(The performance advantage sinpi() has over sin() is due to simplified argument reduction; the accuracy advantage is because sinpi() multiplies by π only implicitly, effectively using an infinitely precise mathematical π rather than a single- or double-precision approximation thereof.) 11.1.7. Precision-related Compiler Flags By default, the nvcc compiler generates IEEE-compliant code, but it also provides options to generate code that somewhat less accurate but faster: -ftz=true (denormalized numbers are flushed to zero) -prec-div=false (less precise division) -prec-sqrt=false (less precise square root) Another, more aggressive, option is -use_fast_math, which coerces every functionName() call to the equivalent __functionName() call. This makes the code run faster at the cost of diminished precision and accuracy. See Math Libraries. 11.2. Memory Instructions Note High Priority: Minimize the use of global memory. Prefer shared memory access where possible. Memory instructions include any instruction that reads from or writes to shared, local, or global memory. When accessing uncached local or global memory, there are hundreds of clock cycles of memory latency. As an example, the assignment operator in the following sample code has a high throughput, but, crucially, there is a latency of hundreds of clock cycles to read data from global memory: __shared__ float shared[32]; __device__ float device[32]; shared[threadIdx.x] = device[threadIdx.x]; Much of this global memory latency can be hidden by the thread scheduler if there are sufficient independent arithmetic instructions that can be issued while waiting for the global memory access to complete. However, it is best to avoid accessing global memory whenever possible. 12. Control Flow 12.1. Branching and Divergence Note High Priority: Avoid different execution paths within the same warp. Flow control instructions (if, switch, do, for, while) can significantly affect the instruction throughput by causing threads of the same warp to diverge; that is, to follow different execution paths. If this happens, the different execution paths must be executed separately; this increases the total number of instructions executed for this warp. To obtain best performance in cases where the control flow depends on the thread ID, the controlling condition should be written so as to minimize the number of divergent warps. This is possible because the distribution of the warps across the block is deterministic as mentioned in SIMT Architecture of the CUDA C++ Programming Guide. A trivial example is when the controlling condition depends only on (threadIdx / WSIZE) where WSIZE is the warp size. In this case, no warp diverges because the controlling condition is perfectly aligned with the warps. For branches including just a few instructions, warp divergence generally results in marginal performance losses. For example, the compiler may use predication to avoid an actual branch. Instead, all instructions are scheduled, but a per-thread condition code or predicate controls which threads execute the instructions. Threads with a false predicate do not write results, and also do not evaluate addresses or read operands. Starting with the Volta architecture, Independent Thread Scheduling allows a warp to remain diverged outside of the data-dependent conditional block. An explicit __syncwarp() can be used to guarantee that the warp has reconverged for subsequent instructions. 12.2. 
Branch Predication Note Low Priority: Make it easy for the compiler to use branch predication in lieu of loops or control statements. Sometimes, the compiler may unroll loops or optimize out if or switch statements by using branch predication instead. In these cases, no warp can ever diverge. The programmer can also control loop unrolling using #pragma unroll For more information on this pragma, refer to the CUDA C++ Programming Guide. When using branch predication, none of the instructions whose execution depends on the controlling condition is skipped. Instead, each such instruction is associated with a per-thread condition code or predicate that is set to true or false according to the controlling condition. Although each of these instructions is scheduled for execution, only the instructions with a true predicate are actually executed. Instructions with a false predicate do not write results, and they also do not evaluate addresses or read operands. The compiler replaces a branch instruction with predicated instructions only if the number of instructions controlled by the branch condition is less than or equal to a certain threshold. 13. Deploying CUDA Applications Having completed the GPU acceleration of one or more components of the application it is possible to compare the outcome with the original expectation. Recall that the initial assess step allowed the developer to determine an upper bound for the potential speedup attainable by accelerating given hotspots. Before tackling other hotspots to improve the total speedup, the developer should consider taking the partially parallelized implementation and carry it through to production. This is important for a number of reasons; for example, it allows the user to profit from their investment as early as possible (the speedup may be partial but is still valuable), and it minimizes risk for the developer and the user by providing an evolutionary rather than revolutionary set of changes to the application. 14. Understanding the Programming Environment With each generation of NVIDIA processors, new features are added to the GPU that CUDA can leverage. Consequently, it’s important to understand the characteristics of the architecture. Programmers should be aware of two version numbers. The first is the compute capability, and the second is the version number of the CUDA Runtime and CUDA Driver APIs. 14.1. CUDA Compute Capability The compute capability describes the features of the hardware and reflects the set of instructions supported by the device as well as other specifications, such as the maximum number of threads per block and the number of registers per multiprocessor. Higher compute capability versions are supersets of lower (that is, earlier) versions, so they are backward compatible. The compute capability of the GPU in the device can be queried programmatically as illustrated in the deviceQuery CUDA Sample. The output for that program is shown in Figure 16. This information is obtained by calling cudaGetDeviceProperties() and accessing the information in the structure it returns. Figure 16 Sample CUDA configuration data reported by deviceQuery The major and minor revision numbers of the compute capability are shown on the seventh line of Figure 16. Device 0 of this system has compute capability 7.0. More details about the compute capabilities of various GPUs are in CUDA-Enabled GPUs and Compute Capabilities of the CUDA C++ Programming Guide. 
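A minimal sketch of such a query (error handling omitted for brevity) is shown below; the handful of fields printed is only a small selection of what the cudaDeviceProp structure exposes.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s\n", dev, prop.name);
        printf("  Compute capability:  %d.%d\n", prop.major, prop.minor);
        printf("  Multiprocessors:     %d\n", prop.multiProcessorCount);
        printf("  Global memory:       %zu MiB\n", prop.totalGlobalMem >> 20);
        printf("  Registers per block: %d\n", prop.regsPerBlock);
    }
    return 0;
}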
In particular, developers should note the number of multiprocessors on the device, the number of registers and the amount of memory available, and any special capabilities of the device. 14.2. Additional Hardware Data Certain hardware features are not described by the compute capability. For example, the ability to overlap kernel execution with asynchronous data transfers between the host and the device is available on most but not all GPUs irrespective of the compute capability. In such cases, call cudaGetDeviceProperties() to determine whether the device is capable of a certain feature. For example, the asyncEngineCount field of the device property structure indicates whether overlapping kernel execution and data transfers is possible (and, if so, how many concurrent transfers are possible); likewise, the canMapHostMemory field indicates whether zero-copy data transfers can be performed. 14.3. Which Compute Capability Target To target specific versions of NVIDIA hardware and CUDA software, use the -arch, -code, and -gencode options of nvcc. Code that uses the warp shuffle operation, for example, must be compiled with -arch=sm_30 (or higher compute capability). See Building for Maximum Compatibility for further discussion of the flags used for building code for multiple generations of CUDA-capable device simultaneously. 14.4. CUDA Runtime The host runtime component of the CUDA software environment can be used only by host functions. It provides functions to handle the following: Device management Context management Memory management Code module management Execution control Texture reference management Interoperability with OpenGL and Direct3D As compared to the lower-level CUDA Driver API, the CUDA Runtime greatly eases device management by providing implicit initialization, context management, and device code module management. The C++ host code generated by nvcc utilizes the CUDA Runtime, so applications that link to this code will depend on the CUDA Runtime; similarly, any code that uses the cuBLAS, cuFFT, and other CUDA Toolkit libraries will also depend on the CUDA Runtime, which is used internally by these libraries. The functions that make up the CUDA Runtime API are explained in the CUDA Toolkit Reference Manual. The CUDA Runtime handles kernel loading and setting up kernel parameters and launch configuration before the kernel is launched. The implicit driver version checking, code initialization, CUDA context management, CUDA module management (cubin to function mapping), kernel configuration, and parameter passing are all performed by the CUDA Runtime. It comprises two principal parts: A C-style function interface (cuda_runtime_api.h). C++-style convenience wrappers (cuda_runtime.h) built on top of the C-style functions. For more information on the Runtime API, refer to CUDA Runtime of the CUDA C++ Programming Guide. 15. CUDA Compatibility Developer’s Guide CUDA Toolkit is released on a monthly release cadence to deliver new features, performance improvements, and critical bug fixes. CUDA compatibility allows users to update the latest CUDA Toolkit software (including the compiler, libraries, and tools) without requiring update to the entire driver stack. The CUDA software environment consists of three parts: CUDA Toolkit (libraries, CUDA runtime and developer tools) - SDK for developers to build CUDA applications. CUDA driver - User-mode driver component used to run CUDA applications (e.g. libcuda.so on Linux systems). 
NVIDIA GPU device driver - Kernel-mode driver component for NVIDIA GPUs.

On Linux systems, the CUDA driver and kernel mode components are delivered together in the NVIDIA display driver package. This is shown in Figure 17.

Figure 17 Components of CUDA

The CUDA compiler (nvcc) provides a way to handle CUDA and non-CUDA code (by splitting and steering compilation) and, along with the CUDA runtime, is part of the CUDA compiler toolchain. The CUDA Runtime API provides developers with a high-level C++ interface for simplified management of devices, kernel executions, and so on, while the CUDA Driver API provides a low-level programming interface for applications to target NVIDIA hardware. Built on top of these technologies are CUDA libraries, some of which are included in the CUDA Toolkit, while others such as cuDNN may be released independently of the CUDA Toolkit.

15.1. CUDA Toolkit Versioning

Starting with CUDA 11, the toolkit versions are based on an industry-standard semantic versioning scheme: X.Y.Z, where:

X stands for the major version - APIs have changed and binary compatibility is broken.

Y stands for the minor version - Introduction of new APIs, deprecation of old APIs, and source compatibility might be broken but binary compatibility is maintained.

Z stands for the release/patch version - new updates and patches will increment this.

Each component in the toolkit is recommended to be semantically versioned. From CUDA 11.3 NVRTC is also semantically versioned. We will note some of them later on in the document. The versions of the components in the toolkit are available in this table. Compatibility of the CUDA platform is thus intended to address a few scenarios:

NVIDIA driver upgrades to systems with GPUs running in production for enterprises or datacenters can be complex and may need advance planning. Delays in rolling out new NVIDIA drivers could mean that users of such systems may not have access to new features available in CUDA releases. Not requiring driver updates for new CUDA releases can mean that new versions of the software can be made available faster to users.

Many software libraries and applications built on top of CUDA (e.g. math libraries or deep learning frameworks) do not have a direct dependency on the CUDA runtime, compiler or driver. In such cases, users or developers can still benefit from not having to upgrade the entire CUDA Toolkit or driver to use these libraries or frameworks.

Upgrading dependencies is error-prone and time consuming, and in some corner cases, can even change the semantics of a program. Constantly recompiling with the latest CUDA Toolkit means forcing upgrades on the end-customers of an application product. Package managers facilitate this process but unexpected issues can still arise and if a bug is found, it necessitates a repeat of the above upgrade process.

CUDA supports several compatibility choices:

First introduced in CUDA 10, the CUDA Forward Compatible Upgrade is designed to allow users to get access to new CUDA features and run applications built with new CUDA releases on systems with older installations of the NVIDIA datacenter driver.

First introduced in CUDA 11.1, CUDA Enhanced Compatibility provides two benefits: By leveraging semantic versioning across components in the CUDA Toolkit, an application can be built for one CUDA minor release (for example 11.1) and work across all future minor releases within the major family (i.e. 11.x).
The CUDA runtime has relaxed the minimum driver version check and thus no longer requires a driver upgrade when moving to a new minor release. The CUDA driver ensures backward Binary Compatibility is maintained for compiled CUDA applications. Applications compiled with CUDA toolkit versions as old as 3.2 will run on newer drivers. 15.2. Source Compatibility We define source compatibility as a set of guarantees provided by the library, where a well-formed application built against a specific version of the library (using the SDK) will continue to build and run without errors when a newer version of the SDK is installed. Both the CUDA driver and the CUDA runtime are not source compatible across the different SDK releases. APIs can be deprecated and removed. Therefore, an application that compiled successfully on an older version of the toolkit may require changes in order to compile against a newer version of the toolkit. Developers are notified through deprecation and documentation mechanisms of any current or upcoming changes. This does not mean that application binaries compiled using an older toolkit will not be supported anymore. Application binaries rely on CUDA Driver API interface and even though the CUDA Driver API itself may also have changed across toolkit versions, CUDA guarantees Binary Compatibility of the CUDA Driver API interface. 15.3. Binary Compatibility We define binary compatibility as a set of guarantees provided by the library, where an application targeting the said library will continue to work when dynamically linked against a different version of the library. The CUDA Driver API has a versioned C-style ABI, which guarantees that applications that were running against an older driver (for example CUDA 3.2) will still run and function correctly against a modern driver (for example one shipped with CUDA 11.0). This means that even though an application source might need to be changed if it has to be recompiled against a newer CUDA Toolkit in order to use the newer features, replacing the driver components installed in a system with a newer version will always support existing applications and its functions. The CUDA Driver API thus is binary-compatible (the OS loader can pick up a newer version and the application continues to work) but not source-compatible (rebuilding your application against a newer SDK might require source changes). Figure 18 CUDA Toolkit and Minimum Driver Versions Before we proceed further on this topic, it’s important for developers to understand the concept of Minimum Driver Version and how that may affect them. Each version of the CUDA Toolkit (and runtime) requires a minimum version of the NVIDIA driver. Applications compiled against a CUDA Toolkit version will only run on systems with the specified minimum driver version for that toolkit version. Prior to CUDA 11.0, the minimum driver version for a toolkit was the same as the driver shipped with that version of the CUDA Toolkit. So, when an application is built with CUDA 11.0, it can only run on a system with an R450 or later driver. If such an application is run on a system with the R418 driver installed, CUDA initialization will return an error as can be seen in the example below. In this example, the deviceQuery sample is compiled with CUDA 11.1 and is run on a system with R418. In this scenario, CUDA initialization returns an error due to the minimum driver requirement. 
ubuntu@:~/samples/1_Utilities/deviceQuery $ make /usr/local/cuda-11.1/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -gencode arch=compute_86,code=compute_86 -o deviceQuery.o -c deviceQuery.cpp /usr/local/cuda-11.1/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -gencode arch=compute_86,code=compute_86 -o deviceQuery deviceQuery.o $ nvidia-smi +-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.165.02 Driver Version: 418.165.02 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 | | N/A 42C P0 28W / 70W | 0MiB / 15079MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ $ samples/bin/x86_64/linux/release/deviceQuery samples/bin/x86_64/linux/release/deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking) cudaGetDeviceCount returned 3 -> initialization error Result = FAIL Refer to the CUDA Toolkit Release Notes for details for the minimum driver version and the version of the driver shipped with the toolkit. 15.3.1. CUDA Binary (cubin) Compatibility A slightly related but important topic is one of application binary compatibility across GPU architectures in CUDA. CUDA C++ provides a simple path for users familiar with the C++ programming language to easily write programs for execution by the device. Kernels can be written using the CUDA instruction set architecture, called PTX, which is described in the PTX reference manual. It is however usually more effective to use a high-level programming language such as C++. In both cases, kernels must be compiled into binary code by nvcc (called cubins) to execute on the device. The cubins are architecture-specific. Binary compatibility for cubins is guaranteed from one compute capability minor revision to the next one, but not from one compute capability minor revision to the previous one or across major compute capability revisions. In other words, a cubin object generated for compute capability X.y will only execute on devices of compute capability X.z where z≥y. To execute code on devices of specific compute capability, an application must load binary or PTX code that is compatible with this compute capability. 
For portability, that is, to be able to execute code on future GPU architectures with higher compute capability (for which no binary code can be generated yet), an application must load PTX code that will be just-in-time compiled by the NVIDIA driver for these future devices. More information on cubins, PTX and application compatibility can be found in the CUDA C++ Programming Guide. 15.4. CUDA Compatibility Across Minor Releases By leveraging the semantic versioning, starting with CUDA 11, components in the CUDA Toolkit will remain binary compatible across the minor versions of the toolkit. In order to maintain binary compatibility across minor versions, the CUDA runtime no longer bumps up the minimum driver version required for every minor release - this only happens when a major release is shipped. One of the main reasons a new toolchain requires a new minimum driver is to handle the JIT compilation of PTX code and the JIT linking of binary code. In this section, we will review the usage patterns that may require new user workflows when taking advantage of the compatibility features of the CUDA platform. 15.4.1. Existing CUDA Applications within Minor Versions of CUDA $ nvidia-smi +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 | | N/A 39C P8 9W / 70W | 0MiB / 15109MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ When our CUDA 11.1 application (i.e. cudart 11.1 is statically linked) is run on the system, we see that it runs successfully even when the driver reports a 11.0 version - that is, without requiring the driver or other toolkit components to be updated on the system. $ samples/bin/x86_64/linux/release/deviceQuery samples/bin/x86_64/linux/release/deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking) Detected 1 CUDA Capable device(s) Device 0: "Tesla T4" CUDA Driver Version / Runtime Version 11.0 / 11.1 CUDA Capability Major/Minor version number: 7.5 ...<snip>... deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.0, CUDA Runtime Version = 11.1, NumDevs = 1 Result = PASS By using new CUDA versions, users can benefit from new CUDA programming model APIs, compiler optimizations and math library features. The following sections discuss some caveats and considerations. 15.4.1.1. Handling New CUDA Features and Driver APIs A subset of CUDA APIs don’t need a new driver and they can all be used without any driver dependencies. For example, cuMemMap APIs or any of APIs introduced prior to CUDA 11.0, such as cudaDeviceSynchronize, do not require a driver upgrade. 
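A first step in deciding whether such guards are needed at all is to check, at startup, whether the application is running against a driver older than the runtime it was built with; under minor-version compatibility this is an expected configuration, as in the deviceQuery output above. The following is a minimal sketch, not code from this guide.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    // Highest CUDA version supported by the installed driver (e.g., 11000 for 11.0).
    cudaDriverGetVersion(&driverVersion);
    // Version of the CUDA runtime this application is using (e.g., 11010 for 11.1).
    cudaRuntimeGetVersion(&runtimeVersion);
    printf("Driver supports CUDA %d.%d, runtime is %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    if (runtimeVersion > driverVersion) {
        printf("Running in minor-version compatibility mode; "
               "features newer than the driver must be guarded.\n");
    }
    return 0;
}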
To use other CUDA APIs introduced in a minor release (that require a new driver), one would have to implement fallbacks or fail gracefully. This situation is not different from what is available today where developers use macros to compile out features based on CUDA versions. Users should refer to the CUDA headers and documentation for new CUDA APIs introduced in a release. When working with a feature exposed in a minor version of the toolkit, the feature might not be available at runtime if the application is running against an older CUDA driver. Users wishing to take advantage of such a feature should query its availability with a dynamic check in the code:

static int hostRegisterFeatureSupported = 0;
static int hostRegisterIsDeviceAddress = 0;

static cudaError_t cuFooFunction(int *ptr, size_t size) {
    int *dptr = NULL;
    if (hostRegisterFeatureSupported) {
        cudaHostRegister(ptr, size, cudaHostRegisterDefault);
        if (hostRegisterIsDeviceAddress) {
            // The registered host pointer can be used directly on the device.
            dptr = ptr;
        } else {
            cudaHostGetDevicePointer((void **)&dptr, ptr, 0);
        }
    } else {
        // Fall back to explicit allocation and copies, e.g.:
        // cudaMalloc(&dptr, size);
        // cudaMemcpy(dptr, ptr, size, cudaMemcpyHostToDevice);
    }
    gemm<<<1, 1>>>(dptr);
    cudaDeviceSynchronize();
    return cudaGetLastError();
}

int main() {
    // rest of code here
    cudaDeviceGetAttribute(
        &hostRegisterFeatureSupported, cudaDevAttrHostRegisterSupported, 0);
    cudaDeviceGetAttribute(
        &hostRegisterIsDeviceAddress, cudaDevAttrCanUseHostPointerForRegisteredMem, 0);
    cuFooFunction(/* malloced pointer */, /* size in bytes */);
}

Alternatively, the application’s interface might not work at all without a new CUDA driver, and then it’s best to return an error right away:

#define MIN_VERSION 11010

cudaError_t foo() {
    int version = 0;
    cudaDriverGetVersion(&version);
    if (version < MIN_VERSION) {
        return cudaErrorInsufficientDriver;
    }
    // proceed as normal
    return cudaSuccess;
}

A new error code is added to indicate that the functionality is missing from the driver you are running against: cudaErrorCallRequiresNewerDriver.
We can see this usage in the following example: char* compilePTXToNVElf() { nvPTXCompilerHandle compiler = NULL; nvPTXCompileResult status; size_t elfSize, infoSize, errorSize; char *elf, *infoLog, *errorLog; int minorVer, majorVer; const char* compile_options[] = { "--gpu-name=sm_80", "--device-debug" }; nvPTXCompilerGetVersion(&majorVer, &minorVer); nvPTXCompilerCreate(&compiler, (size_t)strlen(ptxCode), ptxCode); status = nvPTXCompilerCompile(compiler, 2, compile_options); if (status != NVPTXCOMPILE_SUCCESS) { nvPTXCompilerGetErrorLogSize(compiler, (void*)&errorSize); if (errorSize != 0) { errorLog = (char*)malloc(errorSize+1); nvPTXCompilerGetErrorLog(compiler, (void*)errorLog); printf("Error log: %s\n", errorLog); free(errorLog); } exit(1); } nvPTXCompilerGetCompiledProgramSize(compiler, &elfSize)); elf = (char*)malloc(elfSize); nvPTXCompilerGetCompiledProgram(compiler, (void*)elf); nvPTXCompilerGetInfoLogSize(compiler, (void*)&infoSize); if (infoSize != 0) { infoLog = (char*)malloc(infoSize+1); nvPTXCompilerGetInfoLog(compiler, (void*)infoLog); printf("Info log: %s\n", infoLog); free(infoLog); } nvPTXCompilerDestroy(&compiler); return elf; } 15.4.1.3. Dynamic Code Generation NVRTC is a runtime compilation library for CUDA C++. It accepts CUDA C++ source code in character string form and creates handles that can be used to obtain the PTX. The PTX string generated by NVRTC can be loaded by cuModuleLoadData and cuModuleLoadDataEx. Dealing with relocatable objects is not yet supported, therefore the cuLink* set of APIs in the CUDA driver will not work with enhanced compatibility. An upgraded driver matching the CUDA runtime version is currently required for those APIs. As mentioned in the PTX section, the compilation of PTX to device code lives along with the CUDA driver, hence the generated PTX might be newer than what is supported by the driver on the deployment system. When using NVRTC, it is recommended that the resulting PTX code is first transformed to the final device code via the steps outlined by the PTX user workflow. This ensures your code is compatible. Alternatively, NVRTC can generate cubins directly starting with CUDA 11.1. Applications using the new API can load the final device code directly using driver APIs cuModuleLoadData and cuModuleLoadDataEx. NVRTC used to support only virtual architectures through the option -arch, since it was only emitting PTX. It will now support actual architectures as well to emit SASS. The interface is augmented to retrieve either the PTX or cubin if an actual architecture is specified. 
The example below shows how an existing example can be adapted to use the new features, guarded by the USE_CUBIN macro in this case: #include <nvrtc.h> #include <cuda.h> #include <iostream> void NVRTC_SAFE_CALL(nvrtcResult result) { if (result != NVRTC_SUCCESS) { std::cerr << "\nnvrtc error: " << nvrtcGetErrorString(result) << '\n'; std::exit(1); } } void CUDA_SAFE_CALL(CUresult result) { if (result != CUDA_SUCCESS) { const char *msg; cuGetErrorName(result, &msg); std::cerr << "\ncuda error: " << msg << '\n'; std::exit(1); } } const char *hello = " \n\ extern \"C\" __global__ void hello() { \n\ printf(\"hello world\\n\"); \n\ } \n"; int main() { nvrtcProgram prog; NVRTC_SAFE_CALL(nvrtcCreateProgram(&prog, hello, "hello.cu", 0, NULL, NULL)); #ifdef USE_CUBIN const char *opts[] = {"-arch=sm_70"}; #else const char *opts[] = {"-arch=compute_70"}; #endif nvrtcResult compileResult = nvrtcCompileProgram(prog, 1, opts); size_t logSize; NVRTC_SAFE_CALL(nvrtcGetProgramLogSize(prog, &logSize)); char *log = new char[logSize]; NVRTC_SAFE_CALL(nvrtcGetProgramLog(prog, log)); std::cout << log << '\n'; delete[] log; if (compileResult != NVRTC_SUCCESS) exit(1); size_t codeSize; #ifdef USE_CUBIN NVRTC_SAFE_CALL(nvrtcGetCUBINSize(prog, &codeSize)); char *code = new char[codeSize]; NVRTC_SAFE_CALL(nvrtcGetCUBIN(prog, code)); #else NVRTC_SAFE_CALL(nvrtcGetPTXSize(prog, &codeSize)); char *code = new char[codeSize]; NVRTC_SAFE_CALL(nvrtcGetPTX(prog, code)); #endif NVRTC_SAFE_CALL(nvrtcDestroyProgram(&prog)); CUdevice cuDevice; CUcontext context; CUmodule module; CUfunction kernel; CUDA_SAFE_CALL(cuInit(0)); CUDA_SAFE_CALL(cuDeviceGet(&cuDevice, 0)); CUDA_SAFE_CALL(cuCtxCreate(&context, 0, cuDevice)); CUDA_SAFE_CALL(cuModuleLoadDataEx(&module, code, 0, 0, 0)); CUDA_SAFE_CALL(cuModuleGetFunction(&kernel, module, "hello")); CUDA_SAFE_CALL(cuLaunchKernel(kernel, 1, 1, 1, 1, 1, 1, 0, NULL, NULL, 0)); CUDA_SAFE_CALL(cuCtxSynchronize()); CUDA_SAFE_CALL(cuModuleUnload(module)); CUDA_SAFE_CALL(cuCtxDestroy(context)); delete[] code; } 15.4.1.4. Recommendations for building a minor-version compatible library We recommend that the CUDA runtime be statically linked to minimize dependencies. Verify that your library doesn’t leak dependencies, breakages, namespaces, etc. outside your established ABI contract. Follow semantic versioning for your library’s soname. Having a semantically versioned ABI means the interfaces need to be maintained and versioned. The library should follow semantic rules and increment the version number when a change is made that affects this ABI contract. Missing dependencies is also a binary compatibility break, hence you should provide fallbacks or guards for functionality that depends on those interfaces. Increment major versions when there are ABI breaking changes such as API deprecation and modifications. New APIs can be added in minor versions. Conditionally use features to remain compatible against older drivers. If no new features are used (or if they are used conditionally with fallbacks provided) you’ll be able to remain compatible. Don’t expose ABI structures that can change. A pointer to a structure with a size embedded is a better solution. When linking with dynamic libraries from the toolkit, the library must be equal to or newer than what is needed by any one of the components involved in the linking of your application. 
For example, if you link against the CUDA 11.1 dynamic runtime, and use functionality from 11.1, as well as a separate shared library that was linked against the CUDA 11.2 dynamic runtime that requires 11.2 functionality, the final link step must include a CUDA 11.2 or newer dynamic runtime. 15.4.1.5. Recommendations for taking advantage of minor version compatibility in your application Certain functionality might not be available so you should query where applicable. This is common for building applications that are GPU architecture, platform and compiler agnostic. However we now add “the underlying driver” to that mix. As with the previous section on library building recommendations, if using the CUDA runtime, we recommend linking to the CUDA runtime statically when building your application. When using the driver APIs directly, we recommend using the new driver entry point access API (cuGetProcAddress) documented here: CUDA Driver API :: CUDA Toolkit Documentation. When using a shared or static library, follow the release notes of said library to determine if the library supports minor version compatibility. 16. Preparing for Deployment 16.1. Testing for CUDA Availability When deploying a CUDA application, it is often desirable to ensure that the application will continue to function properly even if the target machine does not have a CUDA-capable GPU and/or a sufficient version of the NVIDIA Driver installed. (Developers targeting a single machine with known configuration may choose to skip this section.) Detecting a CUDA-Capable GPU When an application will be deployed to target machines of arbitrary/unknown configuration, the application should explicitly test for the existence of a CUDA-capable GPU in order to take appropriate action when no such device is available. The cudaGetDeviceCount() function can be used to query for the number of available devices. Like all CUDA Runtime API functions, this function will fail gracefully and return cudaErrorNoDevice to the application if there is no CUDA-capable GPU or cudaErrorInsufficientDriver if there is not an appropriate version of the NVIDIA Driver installed. If cudaGetDeviceCount() reports an error, the application should fall back to an alternative code path. A system with multiple GPUs may contain GPUs of different hardware versions and capabilities. When using multiple GPUs from the same application, it is recommended to use GPUs of the same type, rather than mixing hardware generations. The cudaChooseDevice() function can be used to select the device that most closely matches a desired set of features. Detecting Hardware and Software Configuration When an application depends on the availability of certain hardware or software capabilities to enable certain functionality, the CUDA API can be queried for details about the configuration of the available device and for the installed software versions. The cudaGetDeviceProperties() function reports various features of the available devices, including the CUDA Compute Capability of the device (see also the Compute Capabilities section of the CUDA C++ Programming Guide). See Version Management for details on how to query the available CUDA software API versions. 16.2. Error Handling All CUDA Runtime API calls return an error code of type cudaError_t; the return value will be equal to cudaSuccess if no errors have occurred. 
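A common pattern for acting on these error codes is a small checking macro wrapped around every call; the macro name CHECK_CUDA below is only a local convention chosen for this sketch, not an API provided by the toolkit.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define CHECK_CUDA(call)                                                \
    do {                                                                \
        cudaError_t err_ = (call);                                      \
        if (err_ != cudaSuccess) {                                      \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                 \
                    cudaGetErrorString(err_), __FILE__, __LINE__);      \
            exit(EXIT_FAILURE);                                         \
        }                                                               \
    } while (0)

int main() {
    float *d_buf = nullptr;
    CHECK_CUDA(cudaMalloc(&d_buf, 1024 * sizeof(float)));
    CHECK_CUDA(cudaMemset(d_buf, 0, 1024 * sizeof(float)));
    CHECK_CUDA(cudaFree(d_buf));
    return 0;
}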
(The exceptions to this are kernel launches, which return void, and cudaGetErrorString(), which returns a character string describing the cudaError_t code that was passed into it.) The CUDA Toolkit libraries (cuBLAS, cuFFT, etc.) likewise return their own sets of error codes. Since some CUDA API calls and all kernel launches are asynchronous with respect to the host code, errors may be reported to the host asynchronously as well; often this occurs the next time the host and device synchronize with each other, such as during a call to cudaMemcpy() or to cudaDeviceSynchronize(). Always check the error return values on all CUDA API functions, even for functions that are not expected to fail, as this will allow the application to detect and recover from errors as soon as possible should they occur. To check for errors occurring during kernel launches using the <<<...>>> syntax, which does not return any error code, the return code of cudaGetLastError() should be checked immediately after the kernel launch. Applications that do not check for CUDA API errors could at times run to completion without having noticed that the data calculated by the GPU is incomplete, invalid, or uninitialized. Note The CUDA Toolkit Samples provide several helper functions for error checking with the various CUDA APIs; these helper functions are located in the samples/common/inc/helper_cuda.h file in the CUDA Toolkit. 16.3. Building for Maximum Compatibility Each generation of CUDA-capable device has an associated compute capability version that indicates the feature set supported by the device (see CUDA Compute Capability). One or more compute capability versions can be specified to the nvcc compiler while building a file; compiling for the native compute capability for the target GPU(s) of the application is important to ensure that application kernels achieve the best possible performance and are able to use the features that are available on a given generation of GPU. When an application is built for multiple compute capabilities simultaneously (using several instances of the -gencode flag to nvcc), the binaries for the specified compute capabilities are combined into the executable, and the CUDA Driver selects the most appropriate binary at runtime according to the compute capability of the present device. If an appropriate native binary (cubin) is not available, but the intermediate PTX code (which targets an abstract virtual instruction set and is used for forward-compatibility) is available, then the kernel will be compiled Just In Time (JIT) (see Compiler JIT Cache Management Tools) from the PTX to the native cubin for the device. If the PTX is also not available, then the kernel launch will fail. 
Windows nvcc.exe -ccbin "C:\vs2008\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT" -gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75 --compile -o "Release\mykernel.cu.obj" "mykernel.cu" Mac/Linux /usr/local/cuda/bin/nvcc -gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75 -O2 -o mykernel.o -c mykernel.cu Alternatively, the nvcc command-line option -arch=sm_XX can be used as a shorthand equivalent to the following more explicit -gencode= command-line options described above: -gencode=arch=compute_XX,code=sm_XX -gencode=arch=compute_XX,code=compute_XX However, while the -arch=sm_XX command-line option does result in inclusion of a PTX back-end target by default (due to the code=compute_XX target it implies), it can only specify a single target cubin architecture at a time, and it is not possible to use multiple -arch= options on the same nvcc command line, which is why the examples above use -gencode= explicitly. 16.4. Distributing the CUDA Runtime and Libraries CUDA applications are built against the CUDA Runtime library, which handles device, memory, and kernel management. Unlike the CUDA Driver, the CUDA Runtime guarantees neither forward nor backward binary compatibility across versions. It is therefore best to redistribute the CUDA Runtime library with the application when using dynamic linking or else to statically link against the CUDA Runtime. This will ensure that the executable will be able to run even if the user does not have the same CUDA Toolkit installed that the application was built against. Note When statically linking to the CUDA Runtime, multiple versions of the runtime can peacably coexist in the same application process simultaneously; for example, if an application uses one version of the CUDA Runtime, and a plugin to that application is statically linked to a different version, that is perfectly acceptable, as long as the installed NVIDIA Driver is sufficient for both. Statically-linked CUDA Runtime The easiest option is to statically link against the CUDA Runtime. This is the default if using nvcc to link in CUDA 5.5 and later. Static linking makes the executable slightly larger, but it ensures that the correct version of runtime library functions are included in the application binary without requiring separate redistribution of the CUDA Runtime library. Dynamically-linked CUDA Runtime If static linking against the CUDA Runtime is impractical for some reason, then a dynamically-linked version of the CUDA Runtime library is also available. (This was the default and only option provided in CUDA versions 5.0 and earlier.) To use dynamic linking with the CUDA Runtime when using the nvcc from CUDA 5.5 or later to link the application, add the --cudart=shared flag to the link command line; otherwise the statically-linked CUDA Runtime library is used by default. After the application is dynamically linked against the CUDA Runtime, this version of the runtime library should be bundled with the application. It can be copied into the same directory as the application executable or into a subdirectory of that installation path. 
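When the dynamically-linked runtime is redistributed in this way, a small start-up check can confirm that the copy of the runtime actually loaded matches the version the application was built against. The sketch below is an illustration for this discussion, not a step required by the toolkit; it compares the build-time CUDART_VERSION macro with the value reported by cudaRuntimeGetVersion().

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int loaded = 0;
    // Version of the cudart library actually loaded at run time.
    cudaRuntimeGetVersion(&loaded);
    printf("Built against CUDA runtime %d, loaded runtime %d\n", CUDART_VERSION, loaded);
    if (loaded != CUDART_VERSION) {
        printf("Warning: a different libcudart was picked up at load time "
               "(check the rpath or DLL search path).\n");
    }
    return 0;
}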
Other CUDA Libraries Although the CUDA Runtime provides the option of static linking, some libraries included in the CUDA Toolkit are available only in dynamically-linked form. As with the dynamically-linked version of the CUDA Runtime library, these libraries should be bundled with the application executable when distributing that application. 16.4.1. CUDA Toolkit Library Redistribution The CUDA Toolkit’s End-User License Agreement (EULA) allows for redistribution of many of the CUDA libraries under certain terms and conditions. This allows applications that depend on these libraries to redistribute the exact versions of the libraries against which they were built and tested, thereby avoiding any trouble for end users who might have a different version of the CUDA Toolkit (or perhaps none at all) installed on their machines. Please refer to the EULA for details. Note This does not apply to the NVIDIA Driver; the end user must still download and install an NVIDIA Driver appropriate to their GPU(s) and operating system. 16.4.1.1. Which Files to Redistribute When redistributing the dynamically-linked versions of one or more CUDA libraries, it is important to identify the exact files that need to be redistributed. The following examples use the cuBLAS library from CUDA Toolkit 5.5 as an illustration: Linux In a shared library on Linux, there is a string field called the SONAME that indicates the binary compatibility level of the library. The SONAME of the library against which the application was built must match the filename of the library that is redistributed with the application. For example, in the standard CUDA Toolkit installation, the files libcublas.so and libcublas.so.5.5 are both symlinks pointing to a specific build of cuBLAS, which is named like libcublas.so.5.5.x, where x is the build number (e.g., libcublas.so.5.5.17). However, the SONAME of this library is given as “libcublas.so.5.5”: $ objdump -p /usr/local/cuda/lib64/libcublas.so | grep SONAME SONAME libcublas.so.5.5 Because of this, even if -lcublas (with no version number specified) is used when linking the application, the SONAME found at link time implies that “libcublas.so.5.5” is the name of the file that the dynamic loader will look for when loading the application and therefore must be the name of the file (or a symlink to the same) that is redistributed with the application. The ldd tool is useful for identifying the exact filenames of the libraries that the application expects to find at runtime as well as the path, if any, of the copy of that library that the dynamic loader would select when loading the application given the current library search path: $ ldd a.out | grep libcublas libcublas.so.5.5 => /usr/local/cuda/lib64/libcublas.so.5.5 Mac In a shared library on Mac OS X, there is a field called the install name that indicates the expected installation path and filename the library; the CUDA libraries also use this filename to indicate binary compatibility. The value of this field is propagated into an application built against the library and is used to locate the library of the correct version at runtime. For example, if the install name of the cuBLAS library is given as @rpath/libcublas.5.5.dylib, then the library is version 5.5 and the copy of this library redistributed with the application must be named libcublas.5.5.dylib, even though only -lcublas (with no version number specified) is used at link time. 
Furthermore, this file should be installed into the @rpath of the application; see Where to Install Redistributed CUDA Libraries. To view a library’s install name, use the otool -L command: $ otool -L a.out a.out: @rpath/libcublas.5.5.dylib (...) Windows The binary compatibility version of the CUDA libraries on Windows is indicated as part of the filename. For example, a 64-bit application linked to cuBLAS 5.5 will look for cublas64_55.dll at runtime, so this is the file that should be redistributed with that application, even though cublas.lib is the file that the application is linked against. For 32-bit applications, the file would be cublas32_55.dll. To verify the exact DLL filename that the application expects to find at runtime, use the dumpbin tool from the Visual Studio command prompt: $ dumpbin /IMPORTS a.exe Microsoft (R) COFF/PE Dumper Version 10.00.40219.01 Copyright (C) Microsoft Corporation. All rights reserved. Dump of file a.exe File Type: EXECUTABLE IMAGE Section contains the following imports: ... cublas64_55.dll ... 16.4.1.2. Where to Install Redistributed CUDA Libraries Once the correct library files are identified for redistribution, they must be configured for installation into a location where the application will be able to find them. On Windows, if the CUDA Runtime or other dynamically-linked CUDA Toolkit library is placed in the same directory as the executable, Windows will locate it automatically. On Linux and Mac, the -rpath linker option should be used to instruct the executable to search its local path for these libraries before searching the system paths: Linux/Mac nvcc -I $(CUDA_HOME)/include -Xlinker "-rpath '$ORIGIN'" --cudart=shared -o myprogram myprogram.cu Windows nvcc.exe -ccbin "C:\vs2008\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT" --cudart=shared -o "Release\myprogram.exe" "myprogram.cu" Note It may be necessary to adjust the value of -ccbin to reflect the location of your Visual Studio installation. To specify an alternate path where the libraries will be distributed, use linker options similar to those below: Linux/Mac nvcc -I $(CUDA_HOME)/include -Xlinker "-rpath '$ORIGIN/lib'" --cudart=shared -o myprogram myprogram.cu Windows nvcc.exe -ccbin "C:\vs2008\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT /DELAY" --cudart=shared -o "Release\myprogram.exe" "myprogram.cu" For Linux and Mac, the -rpath option is used as before. For Windows, the /DELAY option is used; this requires that the application call SetDllDirectory() before the first call to any CUDA API function in order to specify the directory containing the CUDA DLLs. Note For Windows 8, SetDefaultDLLDirectories() and AddDllDirectory() should be used instead of SetDllDirectory(). Please see the MSDN documentation for these routines for more information. 17. Deployment Infrastructure Tools 17.1. Nvidia-SMI The NVIDIA System Management Interface (nvidia-smi) is a command line utility that aids in the management and monitoring of NVIDIA GPU devices. This utility allows administrators to query GPU device state and, with the appropriate privileges, permits administrators to modify GPU device state. nvidia-smi is targeted at Tesla and certain Quadro GPUs, though limited support is also available on other NVIDIA GPUs. nvidia-smi ships with NVIDIA GPU display drivers on Linux, and with 64-bit Windows Server 2008 R2 and Windows 7. nvidia-smi can output queried information as XML or as human-readable plain text either to standard output or to a file. 
See the nvidia-smi documenation for details. Please note that new versions of nvidia-smi are not guaranteed to be backward-compatible with previous versions. 17.1.1. Queryable state Both correctable single-bit and detectable double-bit errors are reported. Error counts are provided for both the current boot cycle and the lifetime of the GPU. Current utilization rates are reported for both the compute resources of the GPU and the memory interface. The list of active processes running on the GPU is reported, along with the corresponding process name/ID and allocated GPU memory. Max and current clock rates are reported for several important clock domains, as well as the current GPU performance state (pstate). The current GPU core temperature is reported, along with fan speeds for products with active cooling. The current board power draw and power limits are reported for products that report these measurements. Various dynamic and static information is reported, including board serial numbers, PCI device IDs, VBIOS/Inforom version numbers and product names. 17.1.2. Modifiable state Enable and disable ECC reporting. Clear single-bit and double-bit ECC error counts. Indicate whether compute processes can run on the GPU and whether they run exclusively or concurrently with other compute processes. Indicate whether the NVIDIA driver stays loaded when no applications are connected to the GPU. It is best to enable this option in most circumstances. Reinitialize the GPU hardware and software state via a secondary bus reset. 17.2. NVML The NVIDIA Management Library (NVML) is a C-based interface that provides direct access to the queries and commands exposed via nvidia-smi intended as a platform for building 3rd-party system management applications. The NVML API is shipped with the CUDA Toolkit (since version 8.0) and is also available standalone on the NVIDIA developer website as part of the GPU Deployment Kit through a single header file accompanied by PDF documentation, stub libraries, and sample applications; see https://developer.nvidia.com/gpu-deployment-kit. Each new version of NVML is backward-compatible. An additional set of Perl and Python bindings are provided for the NVML API. These bindings expose the same features as the C-based interface and also provide backwards compatibility. The Perl bindings are provided via CPAN and the Python bindings via PyPI. All of these products (nvidia-smi, NVML, and the NVML language bindings) are updated with each new CUDA release and provide roughly the same functionality. See https://developer.nvidia.com/nvidia-management-library-nvml for additional information. 17.3. Cluster Management Tools Managing your GPU cluster will help achieve maximum GPU utilization and help you and your users extract the best possible performance. Many of the industry’s most popular cluster management tools support CUDA GPUs via NVML. For a listing of some of these tools, see https://developer.nvidia.com/cluster-management. 17.4. Compiler JIT Cache Management Tools Any PTX device code loaded by an application at runtime is compiled further to binary code by the device driver. This is called just-in-time compilation (JIT). Just-in-time compilation increases application load time but allows applications to benefit from latest compiler improvements. It is also the only way for applications to run on devices that did not exist at the time the application was compiled. When JIT compilation of PTX device code is used, the NVIDIA driver caches the resulting binary code on disk. 
Some aspects of this behavior such as cache location and maximum cache size can be controlled via the use of environment variables; see Just in Time Compilation of the CUDA C++ Programming Guide. 17.5. CUDA_VISIBLE_DEVICES It is possible to rearrange the collection of installed CUDA devices that will be visible to and enumerated by a CUDA application prior to the start of that application by way of the CUDA_VISIBLE_DEVICES environment variable. Devices to be made visible to the application should be included as a comma-separated list in terms of the system-wide list of enumerable devices. For example, to use only devices 0 and 2 from the system-wide list of devices, set CUDA_VISIBLE_DEVICES=0,2 before launching the application. The application will then enumerate these devices as device 0 and device 1, respectively. 18. Recommendations and Best Practices This chapter contains a summary of the recommendations for optimization that are explained in this document. 18.1. Overall Performance Optimization Strategies Performance optimization revolves around three basic strategies: Maximizing parallel execution Optimizing memory usage to achieve maximum memory bandwidth Optimizing instruction usage to achieve maximum instruction throughput Maximizing parallel execution starts with structuring the algorithm in a way that exposes as much parallelism as possible. Once the parallelism of the algorithm has been exposed, it needs to be mapped to the hardware as efficiently as possible. This is done by carefully choosing the execution configuration of each kernel launch. The application should also maximize parallel execution at a higher level by explicitly exposing concurrent execution on the device through streams, as well as maximizing concurrent execution between the host and the device. Optimizing memory usage starts with minimizing data transfers between the host and the device because those transfers have much lower bandwidth than internal device data transfers. Kernel access to global memory also should be minimized by maximizing the use of shared memory on the device. Sometimes, the best optimization might even be to avoid any data transfer in the first place by simply recomputing the data whenever it is needed. The effective bandwidth can vary by an order of magnitude depending on the access pattern for each type of memory. The next step in optimizing memory usage is therefore to organize memory accesses according to the optimal memory access patterns. This optimization is especially important for global memory accesses, because latency of access costs hundreds of clock cycles. Shared memory accesses, in counterpoint, are usually worth optimizing only when there exists a high degree of bank conflicts. As for optimizing instruction usage, the use of arithmetic instructions that have low throughput should be avoided. This suggests trading precision for speed when it does not affect the end result, such as using intrinsics instead of regular functions or single precision instead of double precision. Finally, particular attention must be paid to control flow instructions due to the SIMT (single instruction multiple thread) nature of the device. 19. nvcc Compiler Switches 19.1. nvcc The NVIDIA nvcc compiler driver converts .cu files into C++ for the host system and CUDA assembly or binary instructions for the device. 
It supports a number of command-line parameters, of which the following are especially useful for optimization and related best practices:

- -maxrregcount=N specifies the maximum number of registers kernels can use at a per-file level. See Register Pressure. (See also the __launch_bounds__ qualifier discussed in Execution Configuration of the CUDA C++ Programming Guide to control the number of registers used on a per-kernel basis.)
- --ptxas-options=-v or -Xptxas=-v lists per-kernel register, shared, and constant memory usage.
- -ftz=true (denormalized numbers are flushed to zero)
- -prec-div=false (less precise division)
- -prec-sqrt=false (less precise square root)
- The -use_fast_math compiler option of nvcc coerces every functionName() call to the equivalent __functionName() call. This makes the code run faster at the cost of diminished precision and accuracy. See Math Libraries.
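These switches also apply when CUDA code is built indirectly. As a quick, hedged illustration only (the extension name and the source file "my_kernels.cu" are hypothetical; the flags are the ones discussed above), here is how the optimization-related nvcc options might be forwarded when JIT-building a PyTorch CUDA extension:

# Hypothetical sketch: forwarding the nvcc switches discussed above through
# PyTorch's cpp_extension JIT build. "my_kernels.cu" is a placeholder file name.
from torch.utils.cpp_extension import load

my_ext = load(
    name="my_kernels",
    sources=["my_kernels.cu"],
    extra_cuda_cflags=[
        "-use_fast_math",       # faster, less precise math functions
        "--ptxas-options=-v",   # print per-kernel register/shared/constant memory usage
        "-maxrregcount=64",     # cap registers per thread for this translation unit
    ],
    verbose=True,
)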
The AI CUDA Engineer: Agentic CUDA Kernel Discovery, Optimization and Composition

Note: Updated on February 21, 2025.

At Sakana AI, we believe the path to develop much stronger AI systems is to automate the development of AI using AI. We aim to develop AI systems that can create even more capable and efficient AI systems. In the past year, we introduced an AI system that can automate the creation of new AI foundation models, at a fraction of the cost. We showed that LLMs can invent more efficient methods to train LLMs. Recently, we proposed the first comprehensive agentic framework for fully automating the entire AI research process in The AI Scientist. This led us to the question: If AI can be used to conduct AI research, can we use AI to research ways to make AI run faster?

Introduction

Just like the human brain, modern AI systems also rely heavily on parallel processing, enabled by hardware accelerators such as GPUs. But unlike the human brain, which evolved (biologically and culturally) to operate efficiently under resource constraints, recent advances in AI foundation models have led to large-scale deployment and ever-growing inference time and energy demand, leading to exponentially increasing resource requirements to train and deploy AI models.

We believe that fundamentally, modern AI systems can and should be as efficient as the human brain, and that the best path to achieve this efficiency is to use AI to make AI more efficient! Inspired by our earlier work on The AI Scientist, we are proud to announce The AI CUDA Engineer, the first comprehensive agentic framework for fully automatic CUDA kernel discovery and optimization.

CUDA is a low-level software layer that gives direct access to the NVIDIA GPU's hardware instruction set for parallel computation. CUDA kernels are functions written in the CUDA language that run on GPUs. By writing instructions directly at the CUDA kernel level, we can achieve much higher performance for AI algorithms. However, working with CUDA requires quite a bit of GPU knowledge, and in practice, most machine learning algorithms are written in a higher-level abstraction layer such as PyTorch or JAX.

The AI CUDA Engineer is an agentic framework that leverages frontier LLMs with the goal of automating the conversion of standard PyTorch code into highly optimized CUDA kernels. Through the use of evolutionary optimization, leveraging concepts from evolutionary computation such as 'crossover' operations and an 'innovation archive' to discover promising 'stepping stone' kernels, our proposed framework not only automates the process of converting PyTorch modules to CUDA kernels, but also produces highly optimized kernels that often run significantly faster. We believe this technology can enable speedups that will accelerate both the training and running (inference) of foundation models like LLMs or other generative AI models, eventually making AI models run much faster on NVIDIA hardware.

The AI CUDA Engineer is able to generate CUDA kernels with speedups of 10-100x over common PyTorch operations. Our framework is also able to produce highly optimized CUDA kernels that are much faster than existing CUDA kernels that are already commonly used in production (up to 5x speedups).

Stages 1 and 2 (Conversion and Translation): The AI CUDA Engineer first translates PyTorch code into functioning CUDA kernels. We already observe initial runtime improvements without explicitly targeting them.
Stage 3 (Evolutionary Optimization): Inspired by biological evolution, our framework utilizes evolutionary optimization ('survival of the fittest') to ensure only the best CUDA kernels are produced. Furthermore, we introduce a novel kernel crossover prompting strategy to combine multiple optimized kernels in a complementary fashion.

Stage 4 (Innovation Archive): Just as cultural evolution shaped human intelligence with know-how passed down from our ancestors through millennia of civilization, The AI CUDA Engineer also takes advantage of what it learned from past innovations and discoveries it made, building an Innovation Archive from the ancestry of known high-performing CUDA kernels, which uses previous stepping stones to achieve further translation and performance gains.

Kernel Runtime Speedups Discovered by the AI CUDA Engineer

The AI CUDA Engineer robustly discovered CUDA kernels used for common machine learning operations, with speedups as high as 10-100x over native and compiled kernels in PyTorch. Our approach is also able to convert entire machine learning architectures into optimized CUDA kernels. Here we highlight a couple of significant speedup discoveries made completely autonomously: our approach finds more efficient CUDA kernels for operations ranging from fundamental primitives such as matrix multiplication to common deep learning operations, and as of this writing, our discovered CUDA kernels achieved state-of-the-art performance on KernelBench.

Technical Report and Dataset Release

We believe that this is just the beginning of the great optimization of AI! We're excited to release our new paper, The AI CUDA Engineer: Agentic CUDA Kernel Discovery and Optimization. In our report:

- We introduce an end-to-end agentic workflow capable of translating PyTorch code to working CUDA kernels, optimizing CUDA runtime performance, and automatically fusing multiple kernels.
- We construct various techniques for enhancing the consistency and performance of the pipeline, including LLM ensembling, an iterative profiling feedback loop, local kernel code-editing, and crossover kernel optimization.
- We show that The AI CUDA Engineer robustly translates more than 230 out of 250 considered torch operations and achieves strong runtime performance improvements for the majority of kernels. Furthermore, our approach is capable of efficiently fusing various kernel operations and can outperform several existing accelerated operations.
- We release a dataset of over 17,000 verified kernels covering a wide range of PyTorch operations.

We highlight some notable examples of discovered CUDA kernels that achieved significant speedups on key computation operations in AI models.

Highlighted AI CUDA Engineer-Discovered Kernels

Leveraging our novel LLM-driven evolutionary kernel optimization procedure, we robustly obtain speedups for a diverse range of tasks. More specifically, we outperform PyTorch native runtimes on 81% of the 229 considered tasks. Furthermore, 20% of all discovered CUDA kernels are at least twice as fast as their PyTorch implementations. Below we show a subset of kernels. They highlight the diversity of operations for which the AI CUDA Engineer can successfully be deployed.
This includes normalization methods, loss functions, special matrix multiplications, and even entire neural network architectures.

The AI CUDA Engineer Archive: A Dataset of 17,000+ Verified CUDA Kernels

A text-embedding visualization of the AI CUDA Engineer Archive shows that the discovered kernels group into tasks (e.g. MatMul, Pooling, Convolution) and implementation strategies (unrolling, fusing, vectorization). The Archive is openly accessible and can be used for downstream fine-tuning of LLMs.

Along with this paper, we release The AI CUDA Engineer Archive, a dataset consisting of more than 30,000 CUDA kernels generated by The AI CUDA Engineer. It is released under the CC-By-4.0 license and is accessible via HuggingFace. The dataset includes a torch reference implementation, torch, NCU and Clang-tidy profiling data, multiple kernels per task, error messages, and speedup scores against torch native and compile runtimes.

Summary statistics of the AI CUDA Engineer Archive: more than 30,000 kernels and more than 17,000 correct, verified implementations. Approximately 50% of all kernels improve over the torch native runtime.

We envision that this dataset can enable post-training of open-source models to become better at generating CUDA kernels. This includes offline Reinforcement Learning, preference optimization, and standard supervised fine-tuning.

Explore 17,000+ Kernels in The AI CUDA Engineer Archive

We also published an interactive website for inspecting more than 17,000 verified kernels and their profiles, including torch, NCU and Clang-Tidy data. You can access our interactive website here. The website allows you to explore various high-performing kernels across 230 tasks. It comes with a custom leaderboard that can be used to inspect related kernels across experiments and LLMs. Furthermore, you can visualize the kernel, retrieve related kernels, download code to verify the implementation and speedup, as well as view the obtained profiling data. Finally, you can take an in-depth look at the optimization experiment.

Detailed view of an optimized kernel, including profiling data, downloading of evaluation scripts, related kernels, and discovery experiment details.

Limitations and Bloopers

Combining evolutionary optimization with LLMs is powerful but can also find ways to trick the verification sandbox. We are fortunate to have readers, like @main_horse, test our CUDA kernels and identify that the system had found a way to "cheat". For example, the system had found a memory exploit in the evaluation code which, in a number of cases, allowed it to avoid checking for correctness. We have since made the evaluation harness more robust to eliminate this loophole and have updated our results. Furthermore, we found that the system could also discover other novel exploits in the benchmark's tasks. We are in the process of revising our paper and updating results, with further improvements to the evaluation and runtime profiling harness, to reflect and discuss the effects and mitigation of LLM reward hacking for CUDA kernel optimization.

In addition, we observed limitations in frontier LLMs' ability to effectively utilize TensorCore WMMA capabilities. While LLMs could generate basic CUDA code, they often struggled to implement the specialized matrix multiplication acceleration features offered by modern GPU architectures. This suggests a potential gap in the training data or the models' understanding of advanced hardware-specific optimizations.
As frontier LLMs, especially those with advanced coding reasoning capabilities become more capable, we expect code-optimization systems, such as ours, will continue to face these challenges. We envision a future where it is the role of human engineers to work with code optimization systems as tools, to produce the best and most reliable results. Future Implications of The AI CUDA Engineer The AI revolution is just getting started, and we are just at the very beginning of the transformation cycle. It is our view that today’s LLMs are our generation’s “Mainframe Computers”. We are still in the very early stages of AI, and it is inevitable, due to market competition and global innovation (especially from those innovating with resource constraints), that this technology will become a million times more efficient. Currently, our AI systems consume immense resources, and if the technology continues to scale without thought for efficiency and energy consumption, the result will not be sustainable. There is no fundamental reason why our AI systems can’t be as efficient (or even more efficient) than human intelligence. We believe that the best path to achieve this greater efficiency is to use AI to make AI more efficient. This is the direction that Sakana AI is pursuing, and this project is an important step towards making AI a million times faster. Just like the evolution of early clunky mainframe computers to modern computing, how we use AI today will look very different in a few years, compared to today’s ‘clunky’, inefficient LLMs. Sakana AI Want to make the AI that improves AI? Please see our Careers page for more information.
Optimizing with Composable Kernel

2025-01-27

The AMD ROCm Composable Kernel (CK) library provides a programming model for writing performance-critical kernels for machine learning workloads. It generates a general-purpose kernel during the compilation phase through a C++ template, enabling developers to achieve operation fusions on different data precisions.

This article gives a high-level overview of the CK General Matrix Multiplication (GEMM) kernel based on the design example of 03_gemm_bias_relu. It also outlines the steps to construct the kernel and run it. Moreover, the article provides a detailed implementation of running SmoothQuant quantized INT8 models on AMD Instinct MI300X accelerators using CK.

High-level overview: a CK GEMM instance

GEMM is a fundamental block in linear algebra, machine learning, and deep neural networks. It is defined as the operation \(E = \alpha \times (A \times B) + \beta \times D\), with A and B as matrix inputs, \(\alpha\) and \(\beta\) as scalar inputs, and D as a pre-existing matrix. Take the commonly used linear transformation in a fully connected layer as an example: these terms correspond to the input activation (A), weight (B), bias (D), and output (E), respectively.

The example employs the DeviceGemmMultipleD_Xdl_CShuffle struct from the CK library as the fundamental instance to explore the compute capability of AMD Instinct accelerators for the computation of GEMM. The implementation of the instance contains two phases:

- Template parameter definition
- Instantiating and running the templated kernel

Template parameter definition

The template parameters of the instance are grouped into four parameter types:

- Parameters for determining matrix data precision
- Parameters for determining matrix data layout
- Parameters for determining extra operations on matrix elements
- Performance-oriented tunable parameters

Figure: The template parameters of the selected GEMM kernel are classified into four groups. These template parameter groups should be defined properly before running the instance.

Matrix data precision

A, B, D, and E are defined as half-precision floating-point datatypes. The multiply-add results of matrices A and B are added to a pre-existing matrix D (half-precision), and the final GEMM results are also half-precision floating-point values.

using ADataType        = F16;
using BDataType        = F16;
using AccDataType      = F32;
using CShuffleDataType = F16;
using DDataType        = F16;
using EDataType        = F16;

ADataType and BDataType denote the data precision of the A and B input matrices. AccDataType determines the data precision used for representing the multiply-add results of A and B elements. These results are stored for later use in a CShuffle module in the local data share (LDS), a low-latency, high-bandwidth, explicitly addressed memory used for synchronization within a workgroup. CShuffleDataType denotes the data precision of CShuffle in LDS. DDataType denotes the data precision of the pre-existing D matrix stored in GPU global memory, while EDataType denotes the data precision of the final output. The CK kernel supports a fusion strategy so that CShuffle can be added with a single pre-existing matrix in the same GPU kernel for better performance.
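To make the role of each of these types concrete, the computation the instance performs can be sketched in a few lines of plain PyTorch (an illustration only, not CK code): inputs and output are stored in FP16 while accumulation happens in FP32, mirroring AccDataType above.

# Plain PyTorch reference (illustration only) of E = alpha*(A@B) + beta*D
# with F16 storage and F32 accumulation, mirroring the type roles above.
import torch

M, N, K = 256, 512, 128
alpha, beta = 1.0, 1.0
A = torch.randn(M, K, dtype=torch.half)   # ADataType = F16
B = torch.randn(K, N, dtype=torch.half)   # BDataType = F16
D = torch.randn(M, N, dtype=torch.half)   # DDataType = F16

acc = A.float() @ B.float()                    # AccDataType = F32
E = (alpha * acc + beta * D.float()).half()    # EDataType = F16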
Matrix data layout# using ALayout = Row; using BLayout = Col; using DLayout = Row; using ELayout = Row; Following the convention of various linear algebra libraries, CK assumes that the input matrix A is an M x K matrix, meaning the matrix has M rows and K columns. Similarly, matrix B is assumed to be K x N, meaning it has K rows and N columns. In computing, row-major order and column-major order are commonly used ways to store matrices in linear storage. After understanding the matrix storage pattern, the underlying optimized memory access manner can be applied to achieve better performance depending on the storage ordering of these matrices. Matrix element operation# using AElementOp = PassThrough; using BElementOp = PassThrough; using CDEElementOp = AddRelu; CK supports the pre-processing of the matrix before calculating GEMM, that is, C = AElementOp(A) * BElementOp(B). It similarly supports the post-processing of GEMM results the same way, that is, E = CDEElementOp(C, D). AElementOp and BElementOp determine the operation applied to matrix A and B separately before GEMM, which is achieved by binding the operation with a C++ struct function. The above PassThrough denotes no operations are performed on the target matrix. CDEELementOp determines the operations applied to CShuffle output and matrix D. The following binding struct AddRelu shows an example of adding the CShuffle output and matrix D, and ReLU (Rectified Linear Unit) operations to the addition result. It then passes the results to matrix E. struct AddRelu { __host__ __device__ void operator()(ck::half_t& e, const ck::half_t& c, const ck::half_t& d) const { const ck::half_t x = c + d; e = x > 0 ? x : 0; } }; Tunable parameters# The CK instance includes a series of tunable template parameters to control the parallel granularity of the workload to achieve load balancing on different hardware platforms. These parameters include Block Size, M/N/K Per Block, M/N per XDL, AK1, BK1, etc. Block Size determines the number of threads in the thread block. M/N/K Per Block determines the size of tile that each thread block is responsible for calculating. M/N Per XDL refers to M/N size for Instinct accelerator Matrix Fused Multiply Add (MFMA) instructions operating on a per-wavefront basis. A/B K1 is related to the data type. It can be any value ranging from 1 to K Per Block. To achieve the optimal load/store performance, 128bit per load is suggested. In addition, the A/B loading parameters must be changed accordingly to match the A/B K1 value; otherwise, it will result in compilation errors. Conditions for achieving computational load balancing on different hardware platforms can vary. Instantiating and running the templated kernel# After determining the template parameters, we instantiate the kernel with actual arguments. Do one of the following: Use GetDeviceBuffer from CK’s custom struct DeviceMem to pass the element values of the matrices that need to be calculated. Allocate device buffer via hipMalloc. Ensure the device buffer size can fit the matrix size. Pass matrix elements through the data_ptr method in the Tensor object if the matrix to be calculated is of Tensor type. The row and column, and stride information of input matrices are also passed to the instance. For batched GEMM, you must pass in additional batch count and batch stride values. The extra operations for pre and post-processing are also passed with an actual argument; for example, α and β for GEMM scaling operations. 
Afterward, the instantiated kernel is launched by the invoker, as illustrated in Figure 3.

Figure 3: Templated kernel launching consists of kernel instantiation, making arguments by passing in actual application parameters, creating an invoker, and running the instance through the invoker.

Developing fused INT8 kernels for SmoothQuant models

SmoothQuant (SQ) is a quantization algorithm that enables INT8 quantization of both weights and activations for all the matrix multiplications in LLMs. The required GPU kernel functionalities used to accelerate the inference of SQ models on Instinct accelerators are shown in Table 1.

Table 1. Functionality descriptions and corresponding wrappers:

- \(E = \alpha \times (A \times B) + \beta \times D\), where A, B, D, E are INT8 2-D tensors: E = Linear_ABDE_I8(A, B, D, \(\alpha\), \(\beta\))
- \(E = ReLU(\alpha \times (A \times B) + \beta \times D)\), where A, B, D, E are INT8 2-D tensors: E = Linear_ReLU_ABDE_I8(A, B, D, \(\alpha\), \(\beta\))
- \(E = \alpha \times (A \times B) + \beta \times D\), where A, B are INT8 2-D tensors and D, E are FP32 2-D tensors: E = Linear_AB_I8_DE_F32(A, B, D, \(\alpha\), \(\beta\))
- \(E = \alpha \times (A \times B)\), where A, B, E are INT8 3-D tensors: E = BMM_ABE_I8(A, B, \(\alpha\))
- \(E = \alpha \times (A \times B)\), where A, B are INT8 3-D tensors and E is an FP32 3-D tensor: E = BMM_AB_I8_E_F32(A, B, \(\alpha\))

Operation flow analysis

The following section analyzes the operation flow of Linear_ReLU_ABDE_I8. The rest of the wrappers in Table 1 can be analyzed similarly.

The first operation in the process is to perform the multiplication of input matrices A and B. The resulting matrix C is then scaled with \(\alpha\) to obtain T1. At the same time, the process performs a scaling operation on D elements to obtain T2. Afterward, the process performs matrix addition between T1 and T2, element activation calculation using ReLU, and element rounding sequentially. The operations to generate E1, E2, and E (the intermediate addition result, the ReLU output, and the final rounded output) are encapsulated and completed by a user-defined template function in CK (given in the next sub-section). This template function is integrated into the fundamental instance directly during the compilation phase so that all these steps can be fused in a single GPU kernel.

Figure: Operation flow.

The CK library contains many fundamental instances that implement different functions. First, familiarize yourself with the names of various CK instances and determine whether they meet the target functional requirements. Second, consider whether the format of input data meets your actual calculation needs. For SQ models, the 8-bit integer data format (INT8) is applied for matrix calculations. Third, consider the platform for implementing CK instances. The instances suffixed with xdl only run on AMD Instinct accelerators after being compiled and cannot run on Radeon-series GPUs. This is due to the underlying device-specific instruction sets for implementing these basic instances. Here, we use DeviceBatchedGemmMultiD_Xdl as the fundamental instance to implement the functionalities in Table 1.

Figure: Use the DeviceBatchedGemmMultiD_Xdl instance as a root.

The DeviceBatchedGemmMultiD_Xdl instance realizes the batched GEMM BMM_ABE_I8 and BMM_AB_I8_E_F32 kernels directly by using the proper input and output data precision types. Based on the two batched GEMM kernels, the GEMM kernels Linear_ABDE_I8 and Linear_AB_I8_DE_F32 can be implemented by expanding their input 2-D tensors to 3-D tensors.
Then, the 3-D output tensors produced by the root instance are squeezed back to 2-D output tensors before returning back. For example, unsqueeze A (M, K) to A (1, M, K) before assigning it into the root instance and squeeze E (1, M, N) to (M, N) after the calculations of the root instance return back. Linear_ReLU_ABDE_I8 is implemented by adding a ReLU operation on the result output of Linear_ABDE_I8. Developing the complete function# The inference of SQ quantized models relies on using PyTorch and Transformer libraries, and a tensor type is used to represent matrices and vectors in torch, the C++ data types in CK need to be replaced with the torch::tensor type. The data types of the input and output matrices should be a tensor type. In GEMM, the A and B inputs are two-dimensional matrices, and the required input matrices of the selected fundamental CK instance are three-dimensional matrices. Therefore, we must convert the input 2-D tensors to 3-D tensors, by using tensor’s unsqueeze() method before passing these matrices to the instance. For batched GEMM in the preceding table, ignore this step. // Function input and output torch::Tensor linear_relu_abde_i8( torch::Tensor A_, torch::Tensor B_, torch::Tensor D_, float alpha, float beta) { // Convert torch::Tensor A_ (M, K) to torch::Tensor A (1, M, K) auto A = A_.unsqueeze(0); // Convert torch::Tensor B_ (K, N) to torch::Tensor A (1, K, N) auto B = B_.unsqueeze(0); ... As shown in the following code block, we obtain M, N, and K values using input tensor size values. This stride size information is used to reshape the input vector D and allocate the storage space of tensor E. Stride reflects the exact size of continuous elements in memory, which are passed as important parameters to the fundamental instance for GPU kernel use. // Return the batch count from the size of dimension 0 int batch_count = A.size(0); // Return the M, N, K from the size of dimension 1 & 2 int M = A.size(1); int N = B.size(1); int K = A.size(2); // Initialize the stride size for A, B, D and E int stride_A = K; int stride_B = K; int stride_D0 = N; int stride_E = N; // Initialize the stride size for batched A, B, D and E long long int batch_stride_A = M * K; long long int batch_stride_B = K * N; long long int batch_stride_D0 = M * N; long long int batch_stride_E = M * N; // Convert the tensor of 2-D to 3-D auto D = D_.view({1,-1}).repeat({M, 1}); // Allocate memory for E auto E = torch::empty({batch_count, M, N}, torch::dtype(torch::kInt8).device(A.device())); In the following code block, ADataType, BDataType and D0DataType are used to denote the data precision of the input tensors A, B and D, respectively. EDataType is used to denote the data precision of output tensor E. These parameters are specified to I8 data format (8-bit integer data format) to meet the kernel’s design requirements. AccDataType determines the data precision used to represent the multiply-add results of A and B elements. Generally, a larger range data type is applied to store the multiply-add results of A and B to avoid result overflow; I32 is applied in this case. The CShuffleDataType I32 data type indicates that the multiply-add results continue to be stored in LDS as an I32 data format. All of this is implemented through the following code block. 
// Data precision using ADataType = I8; using BDataType = I8; using AccDataType = I32; using CShuffleDataType = I32; using D0DataType = I8; using DsDataType = ck::Tuple<D0DataType>; using EDataType = I8; Following the convention of various linear algebra libraries, row-major and column-major orders are used to denote the ways of storing matrices in linear storage. The advantage of specifying matrix B as column major is that all the relevant matrix elements are stored continuously in GPU global memory when a row in A is multiplied by a column in B, which can help GPU achieve data consistency access to improve access performance. // Specify tensor order using ALayout = RowMajor; using BLayout = ColumnMajor; using D0Layout = RowMajor; using DsLayout = ck::Tuple<D0Layout>; using ELayout = RowMajor; In CK, PassThrough is a struct denoting if an operation is applied to the tensor it binds to. To fuse the operations between E1, E2, and E introduced in section Operation flow analysis, we define a custom C++ struct, ScaleScaleAddRelu, and bind it to CDEELementOp. It determines the operations that will be applied to CShuffle (A×B results), tensor D, α, and β. // No operations bound to the elements of A and B using AElementOp = PassThrough; using BElementOp = PassThrough; // Operations bound to the elements of C, D and E using CDEElementOp = ScaleScaleAddRelu; In the binding struct, operator() performs an addition operation between CShuffle and matrix D, a ReLU operation on the addition results, and a rounding operation on the output elements. It then returns the results to E. struct ScaleScaleAddRelu { template <> __host__ __device__ constexpr void operator()<I8, I32, I8>(I8& e, const I32& c, const I8& d) const { // Scale AxB result with alpha const F32 c_scale = ck::type_convert<F32>(c) * alpha; // Scale D with beta const F32 d_scale = ck::type_convert<F32>(d) * beta; // Perform addition operation F32 temp = c_scale + d_scale; // Perform RELU operation temp = temp > 0 ? temp : 0; // Perform rounding operation temp = temp > 127 ? 127 : temp; // Return to E e = ck::type_convert<I8>(temp); } F32 alpha; F32 beta; }; The original input tensors need to be padded to meet GPU tile-based parallelism. static constexpr auto GemmDefault = ck::tensor_operation::device::GemmSpecialization::MNKPadding; The template parameters of the target fundamental instance are initialized with the above parameters and includes default tunable parameters. For specific tuning methods, see Tunable parameters. 
using DeviceOpInstance = ck::tensor_operation::device::DeviceBatchedGemmMultiD_Xdl< // Tensor layout ALayout, BLayout, DsLayout, ELayout, // Tensor data type ADataType, BDataType, AccDataType, CShuffleDataType, DsDataType, EDataType, // Tensor operation AElementOp, BElementOp, CDEElementOp, // Padding strategy GemmDefault, // Tunable parameters tunable parameters>; Return the address of the first element of tensors: auto A_ref = A.data_ptr<ADataType>(); auto B_ref = B.data_ptr<BDataType>(); auto D0_ref = D.data_ptr<D0DataType>(); auto E_ref = E.data_ptr<EDataType>(); The fundamental instance is then initialized and run with actual arguments: auto device_op = DeviceOpInstance{}; auto invoker = device_op.MakeInvoker(); auto argument = device_op.MakeArgument( A_ref, B_ref, {D0_ref}, E_ref, M, N, K, batch_count, stride_A, stride_B, {stride_D0}, stride_E, batch_stride_A, batch_stride_B, {batch_stride_D0}, batch_stride_E, AElementOp{}, BElementOp{}, CDEElementOp{alpha, beta}); invoker.Run(argument, StreamConfig{nullptr, 0}); The output of the fundamental instance is a calculated batched matrix E (batch, M, N). Before the return, it needs to be converted to a 2-D matrix if a normal GEMM result is required. // Convert (1, M, N) to (M, N) return E.squeeze(0); Binding to Python# Since these functions are written in C++ and torch::Tensor, you can use pybind11 to bind the functions and import them as Python modules. For the example, the necessary binding code for exposing the functions in the table spans but a few lines. #include <torch/extension.h> PYBIND11_MODULE(TORCH_EXTENSION_NAME, m){ m.def("linear_ab_i8_de_f32", &linear_ab_i8_de_f32); m.def("linear_relu_abde_i8", &linear_relu_abde_i8); m.def("linear_abde_i8", &linear_abde_i8); m.def("bmm_abe_i8", &bmm_abe_i8); m.def("bmm_ab_i8_e_f32", &bmm_ab_i8_e_f32); } Build the C++ extension by writing a setup.py script that uses setuptools to compile the C++ code. A reference implementation of the setup.py script is as follows. import os from setuptools import setup, find_packages from torch.utils import cpp_extension from torch.utils.cpp_extension import BuildExtension os.environ["CC"] = "hipcc" os.environ["CXX"] = "hipcc" sources = [ 'torch_int/kernels/linear.cpp', 'torch_int/kernels/bmm.cpp', 'torch_int/kernels/pybind.cpp', ] include_dirs = ['torch_int/kernels/include'] extra_link_args = ['libutility.a'] extra_compile_args = ['-O3','-DNDEBUG', '-std=c++17', '--offload-arch=gfx942', '-DCK_ENABLE_INT8', '-D__HIP_PLATFORM_AMD__=1'] setup( name='torch_int', ext_modules=[ cpp_extension.CUDAExtension( name='torch_int.rocm', sources=sources, include_dirs=include_dirs, extra_link_args=extra_link_args, extra_compile_args=extra_compile_args ), ], cmdclass={ 'build_ext': BuildExtension.with_options(use_ninja=False) }, packages=find_packages( exclude=['notebook', 'scripts', 'tests']), ) Run python setup.py install to build and install the extension. It should look something like Figure 6: Compilation and installation of the INT8 kernels.# INT8 model inference and performance# The implementation architecture of running SmoothQuant models on MI300X GPUs is illustrated in Figure 7, where (a) shows the decoder layer composition components of the target model, (b) shows the major implementation class for the decoder layer components, and (c) denotes the underlying GPU kernels implemented by CK instance. 
Figure 7: The implementation architecture of running SmoothQuant models on AMD MI300X accelerators.

For the target SQ quantized model, each decoder layer contains three major components: attention calculation, layer normalization, and linear transformation in fully connected layers. The corresponding implementation classes for these components are:

- Int8OPTAttention
- W8A8B8O8LinearReLU
- W8A8BF32OF32Linear

These classes' underlying implementation logic harnesses the functions in Table 1. Note that for this example, the LayerNormQ module is implemented by the torch native module.

Testing environment: The hardware platform used for testing is equipped with 256 AMD EPYC 9534 64-Core Processor cores, 8 AMD Instinct MI300X accelerators, and 1.5 TB of memory. The testing was done in a publicly available Docker image from Docker Hub: rocm/pytorch:rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2

The tested models are the OPT-1.3B, 2.7B, 6.7B and 13B FP16 models; the corresponding SmoothQuant INT8 OPT models were obtained from Hugging Face. Note that since the default values were used for the tunable parameters of the fundamental instance, the performance of the INT8 kernel is suboptimal.

Figure 8 shows the performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X accelerator. The GPU memory footprints of SmoothQuant-quantized models are significantly reduced. It also indicates the per-sample inference latency is significantly reduced for all SmoothQuant-quantized OPT models (illustrated in (b)). Notably, the performance of the CK instance-based INT8 kernel steadily improves with an increase in model size.

Figure 8: Performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X accelerator.

For accuracy comparisons between the original FP16 and INT8 models, the evaluation is done using the first 1,000 samples from the LAMBADA dataset's validation set. We employ the same Last Token Prediction Accuracy method introduced in SmoothQuant Real-INT8 Inference for PyTorch as our evaluation metric. The comparison results are shown in Table 2.

Table 2. Accuracy of the Hugging Face FP16 models vs. the SmoothQuant-quantized INT8 models:

Model      FP16 accuracy    INT8 accuracy
opt-1.3B   0.72             0.70
opt-2.7B   0.76             0.75
opt-6.7B   0.80             0.79
opt-13B    0.79             0.77

Conclusion

CK provides a rich set of template parameters for generating flexible accelerated computing kernels for different application scenarios. CK supports multiple instruction sets of AMD Instinct GPUs, operator fusion, and different data precisions. Its composability helps users quickly construct operator performance verification. With CK, you can build more effective AI applications with higher flexibility and better performance on different AMD accelerator platforms.
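For reference, the Last Token Prediction Accuracy metric used in the comparison above can be sketched in a few lines of Python. This is a simplified illustration under assumed interfaces (a Hugging Face-style causal LM that returns .logits), not the exact evaluation code behind Table 2.

# Simplified sketch (assumed interfaces, not the exact SmoothQuant evaluation code)
# of last-token prediction accuracy over a list of tokenized samples.
import torch

@torch.no_grad()
def last_token_accuracy(model, token_id_lists):
    correct = 0
    for ids in token_id_lists:                        # each: list of token ids
        inp = torch.tensor(ids[:-1], device="cuda").unsqueeze(0)
        logits = model(inp).logits                    # (1, seq_len, vocab)
        pred = logits[0, -1].argmax().item()          # prediction for the final token
        correct += int(pred == ids[-1])
    return correct / len(token_id_lists)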
Unleashing the Power of Triton: Mastering GPU Kernel Optimization in Python

Accelerating AI/ML Model Training with Custom Operators – Part 2

According to Greek mythology, Triton, a god of the sea, would calm or stir the sea waters by using his conch shell to control its tides and waves. In one story, in particular, Triton is depicted as having used his powers to guide the Argonauts through particularly dangerous sea waters. In this post, we similarly call upon Triton for navigation through complex journeys, although this time we refer to the Triton language and compiler for writing Deep Learning (DL) kernels and to our journeys through the world of AI/ML development.

This is a sequel to a previous post on the topic of accelerating AI/ML applications with custom operators, in which we demonstrated the potential for performance optimization by developing custom CUDA kernels. One of our intentions was to emphasize the accessibility of custom kernel development and the opportunities it provides even for non-expert CUDA developers. However, there are challenges to CUDA development that may prove insurmountable for some. For one, while many modern-day AI/ML developers are well-versed in Python, they may not feel comfortable developing in C++. Furthermore, tuning a CUDA kernel to take full advantage of the GPU's capabilities requires an intimate understanding of the underlying HW architecture and could take a non-trivial amount of work. This is particularly true if you want your kernel to run optimally on a variety of GPU architectures. Much of the complexity results from CUDA's "thread-based" development model, in which the developer is responsible for designing and optimizing all elements of the GPU kernel threads, including all details related to the use of GPU memory, thread concurrency, TensorCore scheduling, and much more.

The Power of Triton

The Triton library aims to democratize and simplify GPU kernel development in two primary ways. First, it provides an API for building custom operators in Python (rather than C++). Second, it enables kernel development at the block level (rather than the thread level), thereby abstracting away and automating all issues related to optimizing performance within CUDA thread blocks. Rather than taking the laborious steps of programming the details of the thread invocation, including the intricacies related to memory management, scheduling of on-chip acceleration engines, thread synchronization, etc., kernel developers can rely on Triton to do it all for them. One important byproduct of the high-level API abstraction of Triton's programming model is that it reduces the burden of needing to tune the kernel for multiple different GPU types and architectures.

Of course, as is usually the case when up-leveling an API, the Triton programming model does have its disadvantages. Some kernels might benefit from the thread-level control enabled by CUDA (e.g., they might benefit from the conditional execution flow discussed in our previous post). Other kernels might require very specialized and delicate treatment to reach peak performance and may suffer from the automated result of the Triton compiler. But even in cases such as these, where the development of a CUDA kernel may ultimately be required, the ability to quickly and easily create a temporary Triton kernel could greatly facilitate development and boost productivity.
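To give a feel for the block-level programming model described above, here is a minimal Triton kernel in the style of the official vector-addition tutorial. It is a generic sketch (not part of the original article) and assumes a CUDA-capable GPU is available:

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # each program (block) handles one BLOCK_SIZE-wide slice of the vectors
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements              # guard the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device='cuda')
y = torch.randn(4096, device='cuda')
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)       # one program per 1024-element block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)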
For more on the motivations behind Triton and on the details of its programming model, see the Triton announcement, the official Triton documentation, and the original Triton white-paper.

Disclaimers

Similar to our previous post (https://chaimrand.medium.com/accelerating-ai-ml-model-training-with-custom-operators-163ef2a04b12), our intention is to provide a simple demonstration of the opportunity offered by Triton. Please do not view this post as a replacement for the official Triton documentation or its associated tutorials. We will use the same face-detection model as in our previous post as a basis for our demonstration and perform our experiments in the same Google Cloud environment – a g2-standard-16 VM (with a single L4 GPU) with a dedicated deep learning VM image and PyTorch 2.4.0. As before, we make no effort to optimize our examples and/or verify their robustness, durability, or accuracy. It should be noted that although we will perform our experiments on a PyTorch model and on an NVIDIA GPU, Triton kernel development is supported by additional frameworks and underlying hardware.

Triton as a Component of Torch Compilation

In previous posts (e.g., here) we demonstrated the use of PyTorch compilation and its potential impact on runtime performance. The default compiler used by torch.compile (https://pytorch.org/docs/stable/generated/torch.compile.html) is TorchInductor, which relies heavily on Triton kernels for its GPU acceleration. Thus, it seems only appropriate that we begin our Triton exploration by assessing the automatic Triton-backed optimization afforded by torch.compile. The code block below includes the same forward pass of the face detection model we introduced in our previous post along with the compiled GIOU loss function. For the sake of brevity, we have omitted some of the supporting code. Please refer to our previous post for the full implementation.

def loss_with_padding(pred, targets):
    mask = (targets[..., 3] > 0).to(pred.dtype)
    total_boxes = mask.sum()
    loss = generalized_box_iou(targets, pred)
    masked_loss = loss * mask
    loss_sum = masked_loss.sum()
    return loss_sum / torch.clamp(total_boxes, 1)

device = torch.device("cuda:0")
model = torch.compile(Net()).to(device).train()
loss_fn = torch.compile(loss_with_padding)

# forward portion of training loop wrapped with profiler object
with torch.profiler.profile(
        schedule=torch.profiler.schedule(wait=5, warmup=5, active=10, repeat=1)
) as prof:
    for step, data in enumerate(train_loader):
        with torch.profiler.record_function('copy data'):
            images, boxes = data_to_device(data, device)
            torch.cuda.synchronize(device)

        with torch.profiler.record_function('forward'):
            with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
                outputs = model(images)
            torch.cuda.synchronize(device)

        with torch.profiler.record_function('calc loss'):
            loss = loss_fn(outputs, boxes)
            torch.cuda.synchronize(device)
        prof.step()
        if step > 30:
            break

# filter and print profiler results
event_list = prof.key_averages()
for i in range(len(event_list) - 1, -1, -1):
    if event_list[i].key not in ['forward', 'calc loss', 'copy data']:
        del event_list[i]
print(event_list.table())

The performance results (averaged over multiple runs) are captured below:

-------------  ------------  ------------
Name           CPU total     CPU time avg
-------------  ------------  ------------
copy data      56.868ms      5.687ms
forward        1.329s        132.878ms
calc loss      8.282ms       828.159us
-------------  ------------  ------------

Recall that the average time of the original loss function (on padded input) was 1.844ms.
Thus the performance boost resulting from torch compilation is greater than 2X(!!).

The Triton kernels automatically generated by torch.compile can actually be viewed by setting the TORCH_LOGS environment variable, as explained in this PyTorch tutorial. In fact, some have proposed the use of these kernels as a starting point for Triton development (e.g., see here). However, in our experience these kernels can be somewhat difficult to decipher. In the next section we will attempt to further improve on the results of PyTorch compilation by implementing a GIOU Triton kernel.

Creating a Custom Triton Kernel

A great place to start your Triton development journey is with the official Triton tutorials. The tutorials are introduced in incremental order of complexity, with each one expanding on one or more of Triton's unique features. Our GIOU Triton kernel most closely resembles the most basic vector addition example. As in our CUDA implementation, we assign a block to each sample in the input batch, and program it to operate on all of the bounding boxes in the sample. Note the use of tl.load and tl.store for reading and writing data from and to memory, as well as the block program's use of vectorized arithmetic.

import triton
import triton.language as tl

@triton.jit
def giou_kernel(preds_ptr,
                targets_ptr,
                output_ptr,
                valid_ptr,
                BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    box_id = tl.arange(0, BLOCK_SIZE)
    box_offsets = pid * BLOCK_SIZE + box_id

    preds_left = tl.load(preds_ptr + 0 + 4 * box_offsets)
    preds_top = tl.load(preds_ptr + 1 + 4 * box_offsets)
    preds_right = tl.load(preds_ptr + 2 + 4 * box_offsets)
    preds_bottom = tl.load(preds_ptr + 3 + 4 * box_offsets)

    gt_left = tl.load(targets_ptr + 0 + 4 * box_offsets)
    gt_top = tl.load(targets_ptr + 1 + 4 * box_offsets)
    gt_right = tl.load(targets_ptr + 2 + 4 * box_offsets)
    gt_bottom = tl.load(targets_ptr + 3 + 4 * box_offsets)

    epsilon = 1e-5

    # Compute the area of each box
    area1 = (preds_right - preds_left) * (preds_bottom - preds_top)
    area2 = (gt_right - gt_left) * (gt_bottom - gt_top)

    # Compute the intersection
    left = tl.maximum(preds_left, gt_left)
    top = tl.maximum(preds_top, gt_top)
    right = tl.minimum(preds_right, gt_right)
    bottom = tl.minimum(preds_bottom, gt_bottom)

    inter_w = tl.maximum(right - left, 0)
    inter_h = tl.maximum(bottom - top, 0)
    inter_area = inter_w * inter_h

    union_area = area1 + area2 - inter_area
    iou_val = inter_area / tl.maximum(union_area, epsilon)

    # Compute the smallest enclosing box
    enclose_left = tl.minimum(preds_left, gt_left)
    enclose_top = tl.minimum(preds_top, gt_top)
    enclose_right = tl.maximum(preds_right, gt_right)
    enclose_bottom = tl.maximum(preds_bottom, gt_bottom)

    enclose_w = tl.maximum(enclose_right - enclose_left, 0)
    enclose_h = tl.maximum(enclose_bottom - enclose_top, 0)
    enclose_area = enclose_w * enclose_h

    # Compute GIOU
    delta_area = (enclose_area - union_area)
    enclose_area = tl.maximum(enclose_area, epsilon)
    giou = iou_val - delta_area / enclose_area

    # Store results
    tl.store(output_ptr + (box_offsets), tl.where(gt_bottom > 0, giou, 0))
    tl.store(valid_ptr + (box_offsets), gt_bottom > 0)

def loss_with_triton(pred, targets):
    batch_size = pred.shape[0]
    n_boxes = pred.shape[1]

    # convert to float32 (remove to keep original dtypes)
    pred = pred.to(torch.float32)
    targets = targets.to(torch.float32)

    # allocate output tensors
    output = torch.empty_strided(pred.shape[0:2],
                                 stride=(n_boxes, 1),
                                 dtype=pred.dtype,
                                 device=pred.device)
    valid = torch.empty_strided(pred.shape[0:2],
                                stride=(n_boxes, 1),
                                dtype=torch.bool,
                                device=pred.device)

    # call Triton kernel
    giou_kernel[(batch_size,)](pred, targets, output, valid,
                               BLOCK_SIZE=n_boxes)

    total_valid = valid.sum()
    loss_sum = output.sum()
    return loss_sum / total_valid.clamp(1)

The results of running with our Triton kernel are captured below. While somewhat worse than in our previous experiment, this could be a result of additional optimizations performed by torch.compile.

-------------  ------------  ------------
Name           CPU total     CPU time avg
-------------  ------------  ------------
copy data      57.089ms      5.709ms
forward        1.338s        133.771ms
calc loss      8.908ms       890.772us
-------------  ------------  ------------

Following the recommendation of PyTorch's documentation on the use of Triton kernels, we further assess the performance of our kernel, this time in combination with PyTorch compilation. The results (averaged over multiple runs) are slightly better than the auto-compiled loss of our first experiment.

-------------  ------------  ------------
Name           CPU total     CPU time avg
-------------  ------------  ------------
copy data      57.008ms      5.701ms
forward        1.330s        132.951ms
calc loss      7.189ms       718.869us
-------------  ------------  ------------

When developing our custom GIOU CUDA kernel, we noted the overhead of converting the input tensors to float32, and the need to enhance our kernel to support various input types in order to avoid this conversion. In the case of our Triton kernel this can be accomplished quite easily by simply removing the conversion operations. The custom kernel will be auto-generated (JIT-compiled) with the original types.

-------------  ------------  ------------
Name           CPU total     CPU time avg
-------------  ------------  ------------
copy data      57.034ms      5.703ms
forward        1.325s        132.456ms
calc loss      6.219ms       621.950us
-------------  ------------  ------------

Our final results are on par with the CUDA kernel results that we saw in our previous post.

Results

The following table summarizes the results of our experimentation. The results were averaged over multiple runs due to some variance that we observed. We have included the results of our custom CUDA kernel from our previous post, for reference. Keep in mind that the comparative results are likely to vary greatly based on the details of the kernel and the runtime environment.

While our first Triton kernel experiment resulted in reduced performance, compared to our custom CUDA operator, by applying compilation and removing the data type conversions, we were able to match its speed. These findings are in line with what one might expect from Triton: On the one hand, its high-level API abstraction implies a certain loss of control over the low-level flow which could result in reduced runtime performance. On the other hand, the (relative) simplicity and power of its APIs enable users to close the performance gap by implementing features with much greater ease than in CUDA. One could make a strong argument that the Triton kernel we chose to evaluate is what the documentation would refer to as "embarrassingly parallel", i.e., composed of element-wise operations, and that as such, is a terrible kernel on which to demonstrate the value of Triton. Indeed, a more complex program, requiring more sophisticated memory management, scheduling, synchronization, etc., may be required to showcase the full power of Triton.

Next Steps

Several additional steps are required to complete our task. These include tuning our custom kernel and implementing the backward function.
1. Kernel Optimization

Although Triton abstracts away a lot of the low-level kernel optimization, there remain many controls that could greatly impact runtime performance. These include the size of each block, the number of thread warps to use (as demonstrated in the softmax tutorial), and how L2 memory is accessed (see the matrix multiplication tutorial, https://triton-lang.org/main/getting-started/tutorials/03-matrix-multiplication.html, for an example of swizzling). Triton includes an autotuning feature for optimizing the choice of hyper-parameters (as demonstrated in the matrix multiplication tutorial and in the PyTorch Triton example; a sketch of what this can look like appears at the end of this article). Although we have omitted autotuning from our example, it is an essential step of Triton kernel development.

2. Backward Pass Implementation

We have limited our example to just the forward pass of the GIOU loss function. A full solution would require creating a kernel for the backward pass, as well (as demonstrated in the layer normalization tutorial). This is usually a bit more complicated than the forward pass. One may wonder why the high-level kernel development API exposed by Triton does not address this challenge by supporting automatic differentiation. As it turns out, for reasons that are beyond the scope of this post (e.g., see here), automatic differentiation of custom kernels is extremely difficult to implement. Nonetheless, this would be an absolute killer of a feature for Triton and we can only hope that this will be supported at some point in the future.

Summary

Triton is easily one of the most important and impactful AI/ML libraries of the past few years. While it is difficult to assess the amount of innovation and progress it has enabled in the field of AI, its footprints can be found everywhere – from the core implementation of PyTorch 2 and its dependencies, to the specialized attention layers within the advanced LLM models that are slowly permeating our everyday lives. Triton's popularity is owed to its innovative programming model for kernel development. Once limited to the domain of CUDA experts, custom DL primitive development is now accessible to every Python developer thanks to Triton. In this post we have only touched the surface of Triton and its capabilities. Be sure to check out Triton's online documentation and other resources to learn more.
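As promised above, here is a minimal sketch of Triton's autotuning decorator. It uses a generic element-wise scaling kernel rather than the article's GIOU kernel, and all names are illustrative; the pattern follows the official tutorials' use of triton.autotune.

# Generic sketch (illustrative names, not the article's GIOU kernel) showing how
# triton.autotune can select tile sizes and warp counts automatically.
import torch
import triton
import triton.language as tl

@triton.autotune(
    configs=[
        triton.Config({'BLOCK_SIZE': 256},  num_warps=2),
        triton.Config({'BLOCK_SIZE': 1024}, num_warps=4),
        triton.Config({'BLOCK_SIZE': 4096}, num_warps=8),
    ],
    key=['n_elements'],   # re-tune whenever the problem size changes
)
@triton.jit
def scale_kernel(x_ptr, out_ptr, alpha, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, alpha * x, mask=mask)

x = torch.randn(1 << 20, device='cuda')
out = torch.empty_like(x)
# BLOCK_SIZE comes from the selected config, so the grid is a function of it
grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
scale_kernel[grid](x, out, 2.0, x.numel())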
Scaling Intelligence Lab KernelBench: Can LLMs Write GPU Kernels? Anne Ouyang* Stanford Simon Guo* Stanford Azalia Mirhoseini Stanford A benchmark designed to evaluate the ability of LLMs to generate efficient GPU kernels for optimizing neural network performance TL;DR We introduce KernelBench, a benchmark designed to evaluate the ability of large language models (LLMs) to generate efficient GPU kernels for optimizing neural network performance. With 250 well-defined neural network tasks spanning foundational operators, simple fusion patterns, and full ML architectures, the benchmark tasks LLMs to replace PyTorch implementations with custom kernels that are correct and performant. KernelBench highlights the potential for agentic optimization for computer systems with dense feedback signal, where systems iteratively refine kernel designs using profiling tools and tight feedback loops to achieve near-peak hardware utilization. As models scale, well-optimized kernels have far-reaching implications, from reducing the massive energy demands of AI systems to enabling fair and efficient comparisons of novel architectures. By providing aspirational tasks and focusing on agentic approaches, KernelBench envisions a future where LLMs can autonomously drive innovation in GPU programming and ML system optimization. Kernels are the kernel of deep learning. …but writing kernels sucks. Consider a machine learning researcher with a promising new attention mechanism that could improve LLM efficiency by 30%. To actually test out this idea, they need to: In an ideal world, you could: This future (hopefully) isn’t science fiction: we think it’s possible. To measure progress, we’re introducing KernelBench, a dataset of 250 well-defined neural network operations with reference implementations given in Pytorch. KernelBench measures the ability of LLMs to write custom GPU kernels that implement and accelerate these operations. Beyond the 250 core tasks in KernelBench, we also introduce 20 aspirational tasks from HuggingFace models to benchmark the ability of LLM systems in not only just writing GPU kernels but also working on integrating GPU code optimizations in a software library setting. Why Are Kernels Important? As models grow larger and become more embedded into our daily lives, having fine-grained control over hardware resources to extract the most performance out of GPUs directly translates to significant energy and cost reductions. For example, ChatGPT alone is estimated to consume over half a million kilowatt-hours daily — roughly equivalent to the power usage of 180,000 U.S. households. At this scale, a 5% speedup isn’t just a number on a benchmark, it’s real energy and money saved. Beyond savings, optimized GPU kernels also allow machine learning researchers to fairly evaluate and compare new model architectures, and efficiency often means unlocking new capabilities that push the field of AI forward. Big O is not all you need. In algorithm classes we are taught to view Big O as the gold standard for measuring the efficiency of algorithms. In ML research, new model architectures may have better theoretical complexity, implying they should outperform traditional architectures in speed or efficiency, but when it comes down to real-world performance, these newer models can struggle to keep up with established architectures. (Meme credit to Michael Zhang) Why doesn’t Big O analysis match actual performance? Established architectures benefit from years of optimization in their underlying kernels. 
These kernels are tailored to run efficiently on specific hardware, exploiting all the features of the hardware to maximize performance. On the flip side, newer models often lack this level of optimization and lack adequate hardware utilization, which can result in disappointing performance despite their appealing theoretical claims. Optimized GPU kernels are important for designing ML architectures. The lack of well-written GPU kernels makes it difficult to do apples-to-apples model architecture comparisons given a fixed compute budget and a fixed hardware platform, so we cannot effectively determine the effectiveness of an architecture. Consider the following example scenarios: Can we use LLMs to generate correct and performant GPU kernels? Unlike many other coding tasks, writing efficient GPU kernels is challenging due to the need for parallelization scheme design, memory management, and hardware-specific optimizations. GPU programming isn’t just about writing syntactically correct code; it requires a deep understanding of GPU architecture to ensure that code is both correct (produces the right output) and performant (fully utilizes the GPU’s capabilities). These factors make GPU programming a rich problem for LLMs, as it involves a bigger optimization search space beyond basic syntax or logic generation. Recent work on inference scaling laws shows that when you have automatic verifiers, throwing more compute at generation can dramatically improve success rates. Our lab’s recent work (Large Language Monkeys) showed that with coding tasks, going from 1 to 250 samples boosted the solve rate from 15.9% to 56% on SWE-Bench Lite with DeepSeek-Coder-V2-Instruct. GPU programming is a task with strong verification mechanisms and clear feedback signals. For correctness, ground truth is determined by running the generated code on random inputs and comparing the outputs with those of the baseline to check if they match. Performance is measured as the wallclock time and comparing speedup over the reference baseline. GPU programming is also great for agentic and RL approaches, as the system can iteratively refine kernel designs with reliable, measurable outcomes. Profiling tools like NVIDIA’s Nsight Compute (NCU) provide in-depth feedback on performance bottlenecks, memory usage, and thread utilization, which gives the agent a lot of data to adjust optimizations and improve efficiency. Together, these qualities create a structured environment and tight feedback loop where an agent has verifiable correctness and performance metrics to iterate toward increasingly optimized and correct kernel code. KernelBench We introduce KernelBench, a collection of 250 PyTorch neural network operations that we think systems should be able to automatically write optimized kernels for. KernelBench provides the reference implementation in PyTorch, and the task for an LLM is to replace torch layers with custom implementations. We currently only focus on the forward pass as a first step. The core tasks in KernelBench are divided into three levels, with an additional level 4 of 20 aspirational tasks: We only provide baseline evaluations for the first three levels. 
Level 4 is currently a far-reaching aim and we do not provide baseline evaluations for this level; however, we believe that this level could ultimately play a significant role in advancing the capabilities of LLMs to interact with complex, real-world codebases, where they not only assist with code generation but can also drive architectural improvements and optimizations in widely-used frameworks. The tasks in KernelBench are a mix of tasks written manually, generated by an LLM or script, and collected from GitHub. All tasks are manually cleaned up and verified. Each problem has a class named Model to denote which torch-based architecture we want optimized. The torch reference implementations are cleaned up to be self-contained in one file, with the modules containing only the init and forward functions (and helper functions called in the init and forward functions). In addition to the torch module, we also provide functions get_inputs() and get_init_inputs() for generating random parameters for the forward pass and the initialization, respectively. The shapes of random inputs for testing are manually chosen. We also modify the architecture manually to eliminate operations such as dropout to make the results deterministic (within a generous tolerance threshold). While the tasks (architecture implementations) are given in PyTorch, KernelBench is language agnostic, allowing the solutions to use any libraries and DSLs (including Triton, ThunderKittens, CUTLASS, …) such that different levels of abstraction for GPU programming can be explored. It is also fully flexible for the LLMs to determine the optimizations to apply (e.g. making decisions such as kernel fusion). Here’s a simple example of vector addition to illustrate our task format and a CUDA-based solution:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, a, b):
        return a + b

def get_inputs():
    # randomly generate input tensors based on the model architecture
    a = torch.randn(1, 128).cuda()
    b = torch.randn(1, 128).cuda()
    return [a, b]

def get_init_inputs():
    # randomly generate tensors required for initialization based on the model architecture
    return []

Here’s an example of a CUDA-based solution using custom CUDA C++ operators in torch via load_inline(). This entire file is LLM-generated. The custom CUDA code is supplied as a string and JIT compiled. In this example, the torch addition expression for two vectors is swapped out with the custom elementwise_add_kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.cpp_extension import load_inline

# Define the custom CUDA kernel for element-wise addition
elementwise_add_source = """
#include <torch/extension.h>
#include <cuda_runtime.h>

__global__ void elementwise_add_kernel(const float* a, const float* b, float* out, int size) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < size) {
        out[idx] = a[idx] + b[idx];
    }
}

torch::Tensor elementwise_add_cuda(torch::Tensor a, torch::Tensor b) {
    auto size = a.numel();
    auto out = torch::zeros_like(a);

    const int block_size = 256;
    const int num_blocks = (size + block_size - 1) / block_size;

    elementwise_add_kernel<<<num_blocks, block_size>>>(a.data_ptr<float>(), b.data_ptr<float>(), out.data_ptr<float>(), size);

    return out;
}
"""

elementwise_add_cpp_source = "torch::Tensor elementwise_add_cuda(torch::Tensor a, torch::Tensor b);"

# Compile the inline CUDA code for element-wise addition
elementwise_add = load_inline(
    name='elementwise_add',
    cpp_sources=elementwise_add_cpp_source,
    cuda_sources=elementwise_add_source,
    functions=['elementwise_add_cuda'],
    verbose=True,
    extra_cflags=[''],
    extra_ldflags=['']
)

class ModelNew(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.elementwise_add = elementwise_add

    def forward(self, a, b):
        return self.elementwise_add.elementwise_add_cuda(a, b)

Evaluation

When evaluating GPU kernels, we focus on three criteria, with each building upon the previous: In our context, correctness specifically means that, given randomized input values for a predefined set of shapes, the optimized kernel should yield outputs that are numerically equivalent (within an acceptable margin of error, if necessary for floating-point operations) to those produced by the baseline implementation. We set our numerical-equivalence threshold to absolute and relative tolerances of 1e-02, a generous threshold enabling precision changes and alternative algorithms. A common tradeoff in GPU kernel design is specialization versus generality. Specialized kernels, tuned for particular input shapes or patterns, can often achieve significant performance gains; general-purpose kernels, by contrast, aim for broader compatibility but may sacrifice peak performance. For our purpose, since the aim of our project is to cheaply and quickly generate specialized kernels, we choose to constrain our correctness checks to specified input shapes without requiring broad generalization across all possible shapes. We generate 5 sets of random inputs with fixed shapes, and the kernel is considered to be correct if it produces numerically equivalent outputs to the unoptimized baseline for all 5 inputs (a minimal sketch of such a check is shown below). It is possible to have a stricter measurement of correctness by using more random inputs (the number of correctness trials is a customizable parameter), but we capped it at 5 due to evaluation speed.

Caption: Illustration of KernelBench design

In KernelBench, we made the decision not to provide a predefined train/validation/test set split; however, users are welcome to create their own splits based on their specific needs and goals. Our benchmark doesn’t include additional information to distinguish between training and testing examples, as the focus is on real-world challenges that demand open-ended, high-performance solutions—writing custom GPU kernels and optimizing them to their absolute limits (you’re only constrained by the speed of light!).
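To make the correctness protocol above concrete, here is a minimal sketch of such a check, assuming a reference Model, a candidate ModelNew, and the task's get_inputs() helper as defined earlier; this is an illustration under those assumptions, not the benchmark's actual evaluation harness.

```python
import torch

def check_correctness(ref_model, new_model, get_inputs, n_trials=5, tol=1e-2):
    # Run both models on several sets of random inputs with the task's fixed
    # shapes and require numerically equivalent outputs within a generous tolerance.
    ref_model, new_model = ref_model.cuda().eval(), new_model.cuda().eval()
    with torch.no_grad():
        for _ in range(n_trials):
            inputs = [x.cuda() for x in get_inputs()]
            out_ref = ref_model(*inputs)
            out_new = new_model(*inputs)
            if not torch.allclose(out_ref, out_new, atol=tol, rtol=tol):
                return False  # any mismatch on any trial fails the task
    return True

# e.g. check_correctness(Model(), ModelNew(), get_inputs)
```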
These tasks, which revolve around foundational operators for machine learning, are designed to have a meaningful impact. Improving these kernels, in any way, can potentially lead to substantial real-world benefits. Initial Evaluation Baseline Greedy Evaluation We evaluate the 250 problems from Levels 1 to 3 from KernelBench on various frontier models, with greedy decoding parameters (temperature = 0). Compilation and Correctness Across the 3 levels of the problems, most models do generate compilable CUDA code. However, maintaining correctness (same output as torch reference code) becomes increasingly challenging as the reference torch code gets more complex (simple operators in level 1 to fused operators in level 2 to whole model architecture in level 3). Comparing across various models, we note while some models do well on Level 1 tasks, correctness quickly drops off for Level 2 and 3 tasks. Larger models of the same family also seem to get more correct solutions. It is also particularly interesting that the o1 model does significantly better than gpt-4o on correctness for more challenging Level 2 and Level 3 problems, highlighting scaling inference time compute might have played a role here. Caption: Percent of Correct Samples across 3 Levels of problems across models Pass@k Beyond greedy decoding, we are also interested in pass@k, having at least 1 correct (and successfully compiled) solution given k attempts, as introduced in the HumanEval paper. We sample models with high decoding temperature (deepseek-coder with temp=1.6, and Llama 3.1 70b-Instruct with temp=0.8) for more diverse samples, compute pass@1,3,5,10 with N=100 samples. Pass@k is defined as \(\text{pass@$k$} := \mathop{\mathbb{E}}_{\text{problems}} \left[ 1 - \frac{\binom{n - c}{k}}{\binom{n}{k}} \right]\) where $n$ is the total number of samples and $c$ is the number of correct samples. Caption: Pass@k performance for Deepseek-coder and Llama 3.1 70B Instruct As we increase k, correctness improves, suggesting it might be easier to solve such tasks with more parallel samples (as introduced in the Large Language Monkeys paper). However, we see a stark difference between deepseek and llama 3.1 70b performance, highlighting the importance of base model capability even when conducting inference time scaling. Tradeoff between Correctness and Performance We only analyze correctness in the section above. However, in the case of kernel engineering, we care deeply about performance. Looking at the generated kernels, we found there is a tradeoff between correctness and performance, two objectives that are often at odds with each other. Code with more optimization could give better performance gain, but could also risk making more errors and hence likely to fail correctness. Optimizing for performant code while guaranteeing correctness creates a new direction for code generation, while most existing benchmarks and methodologies focus on passing correctness; we are excited to keep exploring that. Performance: Percentiles of Speedups When evaluating performance, we prioritize correctness, as incorrect but fast code is not useful. Therefore, speedups are calculated using only the correct samples. To present a comprehensive view of performance, we report speedups in percentiles. The count of correct samples for each model is indicated in parentheses after the model name in the table below. In addition to the baseline PyTorch implementation, we also compare speedups against torch.compile() using its default mode. 
The speedup is defined as

\(\frac{t_{\text{baseline}}}{t_{\text{generated}}}\)

Caption: Percentile of Speedups vs. Baseline for both Torch and Torch Compile across 3 levels

Among the samples that are correct, we see that most generated kernels exhibit relatively slow speedups over the torch and torch.compile baselines, but a few are notably faster as outliers! This piqued our interest and led us to the following investigations.

“Kernelsseum” –– A Per Problem Leaderboard

To better understand the LLM-generated kernels, we also present a leaderboard to inspect the kernels generated by greedy evaluation on KernelBench. This shows the top 5 LLM-generated kernels per problem, and some problems might lack any correct solutions. Note the performance result is hardware-dependent and currently evaluated on the Nvidia L40S GPU. You can click on entries to see the generated code for each kernel. Right now, the leaderboard only features solutions generated through greedy evaluation. In the future, we aim to make it an open submission leaderboard to allow contributions from the broader community.

Interesting Kernels

Diagonal Matrix Multiplication

Problem 13 in level 1 involves multiplying a matrix by another diagonal matrix: torch.diag(A) @ B. torch.diag() takes in a vector of the diagonal elements of a matrix and returns a 2-D square tensor with the elements of input as the diagonal. The result is a matrix-matrix multiplication. Mathematically, multiplying a matrix by a diagonal matrix is equivalent to scaling each row (or column, if the diagonal matrix is on the right side) of the original matrix by the corresponding diagonal element. As a result, the diagonal matrix doesn’t need to be explicitly constructed, reducing both memory usage and computational overhead. This is the problem that gets the >12x speedup over torch and torch.compile() in level 1 for multiple models; one example of these generated CUDA kernels is below:

__global__ void diag_matmul_kernel(
    const float* diag,
    const float* mat,
    float* out,
    const int N,
    const int M) {
    const int row = blockIdx.y * blockDim.y + threadIdx.y;
    const int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < M) {
        out[row * M + col] = diag[row] * mat[row * M + col];
    }
}

Kernel Fusion

Problem 14 in level 2 performs a matrix multiplication, division, summation, and then scaling:

x = torch.matmul(x, self.weight.T)  # Gemm
x = x / 2  # Divide
x = torch.sum(x, dim=1, keepdim=True)  # Sum
x = x * self.scaling_factor  # Scaling

There’s a solution generated by claude-3.5-sonnet that has an approximately 3x speedup over both torch and torch compile:

// Fused kernel for matmul + divide + sum + scale
__global__ void fused_ops_kernel(
    const float* input,
    const float* weight,
    float* output,
    const float scaling_factor,
    const int batch_size,
    const int input_size,
    const int hidden_size
) {
    // Each thread handles one element in the batch
    const int batch_idx = blockIdx.x * blockDim.x + threadIdx.x;

    if (batch_idx < batch_size) {
        float sum = 0.0f;

        // Compute matmul and divide for this batch element
        for(int h = 0; h < hidden_size; h++) {
            float elem = 0.0f;
            for(int i = 0; i < input_size; i++) {
                elem += input[batch_idx * input_size + i] * weight[h * input_size + i];
            }
            // Divide by 2 as we go
            sum += (elem / 2.0f);
        }

        // Scale and store final result
        output[batch_idx] = sum * scaling_factor;
    }
}

The solution fuses all four operations into a single GPU kernel, eliminating the overhead of writing intermediate results to memory and reading them back for subsequent operations.
Additionally, combining the matrix multiplication with the dimension-wise summation reduces the size of the final output, minimizing memory bandwidth usage. Next steps and public involvement In the Scaling Intelligence Lab, we plan to continue extending this work to enable LLMs to write efficient GPU kernels. The initial results show significant room for improvement, and we are optimistic about the potential for significant advancements in future iterations. There is a lot of interest in the community in GPU programming and LLMs for GPU code generation. In particular, Project Popcorn of the GPU Mode Discord aims to build “an LLM that can actually write good GPU code”. There is also interest in running GPU programming competitions for humans as a way to collect high quality training tokens for the LLM. We look forward to seeing how KernelBench can contribute to these initiatives. Our vision for the longer term future is to simplify the generation of high-performance kernels that seamlessly adapt to diverse hardware architectures, enabling developers to achieve optimal performance with minimal effort. By accelerating the iteration cycles for machine learning model architecture design, we aim to empower researchers and practitioners to explore, prototype, and optimize ideas faster than ever. In addition, the ability to generate kernels quickly is very important for adapting to new hardware architectures, which is often a barrier for adoptions of new computing platforms. We think KernelBench and related techniques could enable faster development cycles for new hardware by lowering the amount of human engineering effort to write new kernels for architecture. Let’s make writing high-performance kernels far more accessible and convenient! FAQ Why not a compiler? The current development cycle—from efficient implementations to generalizations to compiler integration—is lengthy. Efficient compilers often lag behind new GPU architectures by over two years: approximately one year for CUDA experts to develop optimized implementations and another year to generalize these optimizations into compilers. Traditional compilers excel in generating provably-correct, robust, and general-purpose solutions, making them indispensable for a wide range of applications. However, developing compilers remains a labor-intensive and time-consuming process. Many design patterns and optimizations are reusable across GPU kernels –– fundamental principles such as overlapping, fusion, efficient memory access, and maximizing occupancy. Our approach seeks to complement traditional compilers by focusing on a different objective. Rather than striving for general-purpose, provably-correct compiler solutions, we aim to distill human intuition directly into specialized, high-performance code with correctness tested empirically. This enables the generation of code highly optimized for specific input shapes and computational patterns, a level of specialization that would otherwise require extensive pattern-matching rules and manual engineering in traditional compilers. Acknowledgements We would like to thank Aaryan Singhal, AJ Root, Allen Nie, Anjiang Wei, Benjamin Spector, Bilal Khan, Bradley Brown, Dylan Patel, Genghan Zhang, Hieu Pham, Hugh Leather, John Yang, Jon Saad-Falcon, Jordan Juravsky, Mark Saroufim, Michael Zhang, Ryan Ehrlich, Sahan Paliskara, Sahil Jain, Shicheng (George) Liu, Simran Arora, Suhas Kotha, Vikram Sharma Mailthody, and Yangjun Ruan for insightful discussions and constructive feedback in shaping this work. 
We would also like to thank SWEBench for its inspiration and reference, which greatly contributed to the development of this work.

Citing

@misc{ouyang2024kernelbench,
  title={KernelBench: Can LLMs Write GPU Kernels?},
  author={Anne Ouyang and Simon Guo and Azalia Mirhoseini},
  year={2024},
  url={https://scalingintelligence.stanford.edu/blogs/kernelbench/},
}

Materials
ROCm blogs

GEMM Kernel Optimization For AMD GPUs

Matrix multiplication underlies critical computational pathways in AI, with General Matrix Multiplication (GEMM) operations serving as performance-critical kernels in neural network architectures. From fully connected layers to convolutions and transformer attention mechanisms, GEMMs consume substantial computational and memory resources in large language models (LLMs). This blog explores GEMM optimization techniques for AMD GPUs, demonstrating methodologies to significantly enhance computational efficiency and performance scaling.

ROCm software tools for GEMM Tuning

To assist AMD GPU developers in efficiently discovering the best GEMM solutions, the ROCm software suite offers multiple tools designed to tune GEMM operation performance. Developers can select the appropriate tool based on their specific use case, as illustrated in the diagram below. Let’s dive into the various GEMM tuning tools available for AMD GPU developers to use.

GEMM Tuning Techniques on AMD Instinct GPUs

Technique 1: Optimizing Performance with Pre-Tuned GEMM Operations

AMD provides an optimized ROCm Docker for an out-of-the-box experience, including a pre-tuned GEMM, in the vLLM Docker. This rocm/vllm Docker has already integrated the pre-tuned GEMM solution with BLAS libraries, supporting most GEMM shapes for LLM inference. We highly recommend that GPU developers try the AMD optimized Docker first. To get started, follow the detailed steps below:

1). Pull the optimized Docker image from the ROCm/vLLM Docker Hub website.

docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6

2). Run the LLM performance benchmark using the vLLM benchmarking tool. Since the pre-tuned GEMM configuration files (.csv) are integrated into the optimized Docker, the vLLM benchmarking tool automatically utilizes the pre-tuned GEMM for optimal performance. We use the vLLM latency benchmarking tool as the example; detailed info about the vLLM benchmarking tool can be found in the vLLM benchmark documentation.

python /app/vllm/benchmarks/benchmark_latency.py \
    --model ${model_path} \
    --trust-remote-code \
    --num-iters-warmup 3 \
    --num-iters 5 \
    --dtype float16 \
    --input-len {in_len} \
    --output-len {out_len} \
    --batch-size ${bs} \
    --tensor-parallel-size ${tp_nums} \
    --num-scheduler-steps 10

Technique 2: Optimizing Performance with PyTorch TunableOp (Framework Level GEMM Tuning)

PyTorch TunableOp provides a GEMM tuning wrapper for both rocBLAS and hipBLASLt. Instead of relying on default GEMMs, TunableOp automatically searches for the optimal solution by querying the underlying BLAS library for all available solutions for a given GEMM, benchmarking each one, and selecting the fastest. The chosen solution is then stored on disk for use in subsequent runs. For applications leveraging popular frameworks like PyTorch and vLLM, users can leverage PyTorch TunableOp online tuning. This process allows tuning to occur seamlessly while running training or inference workloads, requiring only a few environment setting adjustments. Detailed information about these environment variables can be found in the PyTorch TunableOp documentation. To optimize performance with tuned GEMM operations at the framework level, follow the steps below:

1). Configure the related settings to enable PyTorch TunableOp:

export PYTORCH_TUNABLEOP_ENABLED=1
export PYTORCH_TUNABLEOP_TUNING=1
export PYTORCH_TUNABLEOP_VERBOSE=1
export PYTORCH_TUNABLEOP_FILENAME=/dockerx/tunableop-config.csv
2). GEMM tuning results will be saved to the above tunableop-config.csv file. The GEMM tuning described in the CSV file will be integrated into the specific workload associated with your application.

3). Now, turn off tuning before running your application:

export PYTORCH_TUNABLEOP_ENABLED=1
export PYTORCH_TUNABLEOP_TUNING=0
export PYTORCH_TUNABLEOP_VERBOSE=1
export PYTORCH_TUNABLEOP_FILENAME=/dockerx/tunableop-config.csv

4). Run your application. The tuning result integration will work automatically.

With native PyTorch support for AMD ROCm, developers can seamlessly leverage the PyTorch TunableOp flow (a minimal PyTorch workload sketch that exercises this flow is included at the end of this post). In our experiments, this approach has yielded over 20% performance improvement in GEMM operations. Developers can check the details in the TunableOp blog. If developers have questions or issues about TunableOp GEMM tuning, please submit them in PyTorch issues.

Technique 3: Optimizing Performance with Tuned GEMM Operations at Ops/Library Level

AMD offers rocBLAS, the AMD library for Basic Linear Algebra Subprograms (BLAS), which internally uses Tensile to supply the high-performance implementation of GEMM. Additionally, hipBLASLt is a library that provides general matrix-matrix operations. Based on their preference, developers can choose either of the two Ops/Library-level GEMM tuning tools: the rocBLAS tuning tool (rocblas-gemm-tune) or the hipBLASLt tuning tool (hipblaslt-bench). First, use the logging scheme of either rocBLAS or hipBLASLt (depending on the library in use) to capture the required GEMM shape information. Then, apply the respective GEMM tuning tool (rocblas-gemm-tune or hipblaslt-bench) to optimize performance.

GEMM Tuning with rocblas-gemm-tune

The rocblas-gemm-tune tool works by using Tensile to heuristically search through various kernel parameters in order to find the optimal configuration that provides high GPU performance for performing GEMM operations. The detailed steps are as follows:

1). Installing/rocBLAS Setup: In the ROCm Docker image, the rocBLAS library is pre-installed, but if the rocBLAS client executables (rocblas-bench and rocblas-gemm-tune) are not pre-installed, you may need to build them from source code.

2). Generating GEMM Problem Sizes: rocBLAS provides a logging scheme to dump GEMM shape info for further performance tuning, which is enabled by rocBLAS environment settings.

- Environment variable `ROCBLAS_LAYER=4` turns on log_profile, and outputs a YAML description of each rocBLAS function called, along with its arguments and number of times it is called. This list of entries can be used directly as input to the `rocblas-gemm-tune` utility to do performance tuning.
- Use environment variable `ROCBLAS_LOG_PATH` to set the full path name for all logs, and store the grabbed GEMM shape information into a YAML file, e.g. `ROCBLAS_LOG_PATH=~/dir/rocblas_gemms.YAML`.

By using the two settings described above, developers can yield the GEMM shape information:

ROCBLAS_LAYER=4 ROCBLAS_LOG_PATH=./rocblas_gemm.YAML ./gemm-app

3). GEMM Tuning with rocblas-gemm-tune: At this stage, use the dumped YAML file to run GEMM tuning by running rocblas-gemm-tune. The sample command:

```bash
/opt/rocm/bin/rocblas-gemm-tune --YAML /home/rocblas_gemms.YAML
```

Running this will output the fastest solutions for each GEMM in the YAML file. Each solution is identified by a unique solution index.
It generates a CSV file by aggregating the output solution indices, and the CSV file looks like:

transA,transB,M,N,batch_count,K,alpha,beta,lda,ldb,ldc,input_type,output_type,comput_type,solution_index
N, N, 320,588,1,4096,1,0,320,6144,320,f32_r,f32_r,f32_r,3788
N, N, 512,3096,1,512,1,0,512,512,512,f16_r,f16_r,f16_r,4566

4). Integration: Now that we have a list of faster solutions for all the GEMM problems, users can integrate this into the application to pick these faster implementations in rocBLAS by setting an environment variable. Use the example command below:

export ROCBLAS_TENSILE_GEMM_OVERRIDE_PATH=csv_file_path

If developers have questions or issues about rocblas-gemm-tune, please submit them in rocBLAS issues.

GEMM Tuning with hipblaslt-bench

hipblaslt-bench is another GEMM tuning tool within the hipBLASLt library and can be used to search for the best-performing GEMM kernel for a given set of GEMM problems. To use hipBLASLt, follow the steps below:

1). Installing hipBLASLt: In the ROCm Docker image, the hipBLASLt library is pre-installed; however, the hipBLASLt client executables, such as hipblaslt-bench, may not be included by default and you may need to build these executables from source.

2). Generating GEMM Problem Sizes: Similar to rocBLAS, hipBLASLt can also dump the required GEMM problem/shape sizes via its own logging scheme. Detailed info about the hipBLASLt logging scheme can be found in logging-heuristics. Use the sample command below to generate the GEMM problem size file:

HIPBLASLT_LOG_MASK=32 HIPBLASLT_LOG_FILE=log_file_name.log ./application_bin

To organize the output logs further, you can get unique calls with call counts using a shell command like the one below:

cat log_file_name.log | sort | uniq -c > unique_log_file.log

3). GEMM Tuning with hipblaslt-bench: Set the environment variable HIPBLASLT_TUNING_FILE=<file_name> to tune and store the tuning result of the best solution indices for the GEMM problems. The <file_name> points to the tuning file. GEMM tuning will be completed by launching hipblaslt-bench, whose input parameters can be set according to the log file from step 2. A sample command, saving the file under a user-defined name in the current working directory:

export HIPBLASLT_TUNING_FILE=tuning.txt
/opt/rocm/bin/hipblaslt-bench --api_method c -m 28672 -n 8192 -k 8192 --lda 8192 --ldb 8192 --ldc 28672 --ldd 28672 --stride_a 0 --stride_b 0 --stride_c 0 --stride_d 0 --alpha 1.000000 --beta 0.000000 --transA T --transB N --batch_count 1 --scaleA 1 --scaleB 1 --a_type f8_r --b_type bf8_r --c_type bf16_r --d_type bf16_r --scale_type f32_r --bias_type f32_r --compute_type f32_r --initialization trig_float -i 100 -j 100 --flush --rotating 512 --algo_method all

4). Integration: Unset the tuning file name once tuning is complete:

unset HIPBLASLT_TUNING_FILE

Override the hipBLASLt library with the tuned file info:

export HIPBLASLT_TUNING_OVERRIDE_FILE=tuning.txt

Now we can replace the default GEMM kernel with the tuned GEMM kernel. If developers have questions or issues about hipblaslt-bench GEMM tuning, please submit them in hipBLASLt issues.

Summary

Given the pivotal role of GEMM operations in AI workloads, particularly for LLM applications, AMD offers a suite of powerful tuning tools, including rocblas-gemm-tune, hipblaslt-bench, and PyTorch TunableOp. These tools provide GPU developers with the flexibility to optimize GEMM performance, allowing precise fine-tuning for maximum efficiency on AMD GPUs.
By leveraging these resources, developers can enhance workload performance, ensuring optimal execution and superior results in AI-driven tasks.

Additional Resources

- Optimized Docker hub: https://hub.docker.com/r/rocm/vllm/tags
- Optimized Docker image: rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
- The optimized Docker blog: https://www.amd.com/en/developer/resources/technical-articles/how-to-use-prebuilt-amd-rocm-vllm-docker-image-with-amd-instinct-mi300x-accelerators.html
- PyTorch TunableOp: https://pytorch.org/docs/stable/cuda.tunable.html
- Improve performance: Accelerating models on ROCm using PyTorch TunableOp — ROCm Blogs
- rocBLAS: https://rocm.docs.amd.com/projects/rocBLAS/en/latest/index.html
- hipBLASLt: https://rocm.docs.amd.com/projects/hipBLASLt/en/latest/
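As a small illustration of the TunableOp flow from Technique 2, here is a minimal, hypothetical PyTorch workload sketch; the GEMM shape, dtype, and file path are arbitrary placeholders, and only the environment variables documented above are relied on.

```python
# Minimal sketch: run a GEMM workload with PyTorch TunableOp tuning enabled.
# The environment variables are set before torch is imported so they take effect.
import os
os.environ["PYTORCH_TUNABLEOP_ENABLED"] = "1"
os.environ["PYTORCH_TUNABLEOP_TUNING"] = "1"
os.environ["PYTORCH_TUNABLEOP_FILENAME"] = "/dockerx/tunableop-config.csv"

import torch

# Arbitrary example GEMM shape; each distinct shape encountered is tuned on
# first use and the selected solution is written to the CSV for later runs.
a = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
b = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")

for _ in range(10):
    c = a @ b
torch.cuda.synchronize()
```

On subsequent runs, PYTORCH_TUNABLEOP_TUNING can be set to 0 so the stored solutions are reused without re-tuning, as described in steps 3 and 4 above.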
Reasoning about performance from first principles

When trying to optimize performance on a computer, we can simplify the hardware into two things [1]: a rate at which it can perform operations, and a rate at which it can move bytes. For the RTX 3090 this is around 35.5 TFLOPs with a max global memory bandwidth of 936 GB/s. We can idealize the program running on the computer as loading K bytes and performing N operations on them, giving us the arithmetic intensity N/K. To understand the upper bound on the number of operations that can be performed per second, we can use the equation below.

$$P = \min\left(\frac{\text{flop}}{s}, \frac{N}{K} \times \frac{\text{bytes}}{s}\right)$$

By multiplying the memory bandwidth by the arithmetic intensity we adjust for the fact that each loaded byte results in N/K operations. The resulting number P is the minimum of the memory bandwidth (adjusted for arithmetic intensity) and the theoretical FLOPS/s. This tells us the upper bound for performance as well as whether the program will be compute-bound or memory-bound. This beautiful abstraction is true for any machine based on the Von Neumann architecture, not just GPUs.

The plot above shows a blue line, which is AI * Mem-Bandwidth, as well as a horizontal red line at peak FLOPS/s. The point at which the blue line crosses the red line shows what the arithmetic intensity of a program would need to be to fully saturate the 3090's compute units. We need to perform ~38 operations per loaded byte to get to this point! Let's look at arithmetic intensity for vector addition and matrix multiplication, and quantify their AI.

FP32 Vector Addition (4 bytes per element)

1) Load \(4N\) bytes for Vector A.
2) Load \(4N\) bytes for Vector B.
3) Perform ~\(N\) FP32 add operations to add Vector A & B.
4) Store \(4N\) bytes.

Arithmetic Intensity (AI) is \(\frac{N}{12N}\) ops/byte, or about 0.083. Vector addition is heavily memory bound, since the arithmetic intensity is so low.

FP32 Matrix Multiplication (all matrix dimensions `N` for simplicity)

1) Load \(4N^2\) bytes for Matrix A.
2) Load \(4N^2\) bytes for Matrix B.
3) Perform \(2N^3\) operations (there are \(N^2\) outputs, each output requires a dot-product of vectors with size \(N\), and each dot-product requires ~\(2N\) additions & multiplies).
4) Store \(4N^2\) bytes.

Arithmetic Intensity (AI) is \(\frac{2N^3}{12N^2} \approx 0.167N\) ops/byte. Matrix multiplication is interesting because its AI is a linear function of the sizes of the input matrices. As a result, the program is memory bound at smaller sizes but becomes compute bound at larger ones.

This analysis gives us the ‘light-speed’ for a given program. In reality, we won't load at theoretical bandwidth, we won't compute at theoretical max FLOPs, we may perform more than the ideal number of operations specified during program execution, there will be wind up/wind down effects, etc. By comparing an actual kernel’s effective ops/s to the idealized value we can invest our efforts in tackling the right bottleneck, and better understand how far we are from theoretical limits.
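To make this bound concrete, here is a small numeric sketch of the roofline calculation above, using the RTX 3090 numbers quoted in this section; the figures are the advertised peaks, not measured values.

```python
# Roofline bound P = min(peak FLOP/s, AI * memory bandwidth), using the
# RTX 3090 numbers from the text (35.5 TFLOP/s, 936 GB/s).
PEAK_FLOPS = 35.5e12   # FLOP/s
PEAK_BW = 936e9        # bytes/s

def attainable(ai):
    """Upper bound on FLOP/s for a kernel with arithmetic intensity `ai` (FLOP/byte)."""
    return min(PEAK_FLOPS, ai * PEAK_BW)

print(f"ridge point: {PEAK_FLOPS / PEAK_BW:.1f} FLOP/byte")   # ~38, as noted above

# FP32 vector addition: ~N ops per 12N bytes moved
print(f"vector add: {attainable(1 / 12) / 1e12:.2f} TFLOP/s (memory bound)")

# FP32 square matmul: 2N^3 ops per 12N^2 bytes moved, so AI = N / 6
for n in (64, 1024, 4096):
    ai = 2 * n**3 / (12 * n**2)
    print(f"matmul N={n:4d}: AI={ai:6.1f}, bound={attainable(ai) / 1e12:5.2f} TFLOP/s")
```

For small matrices the bound is set by bandwidth, while beyond roughly N ≈ 228 (AI ≈ 38) the matmul becomes compute bound, matching the crossover point described above.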
Memory Optimizations

The optimizations below are some of the most important for getting good performance. If the compute units aren’t getting a high throughput stream of bytes to crunch on, the fact that the GPU has an absurd number of compute units won’t matter. [2]

Coalesced and Aligned Global Memory Access [3]

When accessing global memory, always do your best to have each thread access sequential addresses where the array is aligned, ideally to 128B (cudaMalloc will by default align to 256B). When an RTX 3090 performs a cached GMEM load, it will pull in cache lines of size 128 bytes. If a warp (32 threads) accesses 32-bit addresses, the entire load can be processed in a single transaction. Deviating from the sequential nature of the access, or misaligning memory in GMEM during kernel launch, will reduce effective GMEM utilization. As we learned in the intro post, unaligned access can cause either a) excessive pre-charging of row-buffers due to having to pull in multiple DRAM rows, or b) excessive memory controller overhead from having to pull multiple cache lines within the same DRAM row. Both lead to unnecessary overhead. If strided access is necessary, use shared memory as an intermediary to allow for coalescing.

Use Vectorized Memory Access Instructions [4]

Lines issuing memory loads in CUDA typically compile to a 32-bit (one word) load using the LD.E/ST.E instructions in SASS. If you know the thread will require multiple sequential words of memory, and have a piece of data that is aligned in memory to that multiple, you can issue vectorized instructions to load multiple words in a single transaction. Loading in this way reduces instruction overhead and can be combined with coalescing to improve memory throughput/latency. Vectorized loads are accomplished by using vector data types (float4, float2, int4, int2, etc.) and typecasting. An example of a kernel that uses vectorized loading of two consecutive 32-bit integers is shown below.

__global__ void device_copy_vector2_kernel(int* d_in, int* d_out, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < N / 2) {
        reinterpret_cast<int2*>(d_out)[idx] = reinterpret_cast<int2*>(d_in)[idx];
    }

    // Only one thread processes the final element, if N is odd
    if (idx == 0 && N % 2 == 1) {
        d_out[N - 1] = d_in[N - 1];
    }
}

Avoid Shared Memory Bank Conflicts

Shared memory is divided into 32 banks of SRAM cells, with a controller for each bank that can serve one 32-bit word per clock cycle. When multiple threads access different addresses in the same bank simultaneously, these accesses will serialize (accesses to the same word are broadcast and do not conflict). The easiest way to avoid shared memory bank conflicts is to access sequential shared memory addresses with each thread within a warp (similar to coalesced GMEM access). If assigning each thread to a separate bank isn’t feasible, memory padding can be used to introduce an offset that eliminates the conflict. The padding comes at the cost of higher shared memory utilization per block, which may impact occupancy.

Keep Re-Used Data in the Fastest Memory Stores

Many operations involve data re-use. When performing a matrix multiplication, for example, we load 2N^2 elements but perform 2N^3 operations, meaning each element has N operations associated with it. It would be extremely wasteful to go back to global memory for each of the N operations. When performing operations with the same data multiple times, consider keeping it in shared memory or thread registers. Simon Boehm's post on optimizing GEMM makes heavy use of this method and I would highly recommend giving it a thorough read. NVIDIA GPUs also have constant and texture memory, which to be totally honest I have not used. But from what I understand, these stores are read-only and can provide efficient access if many threads will need to access the same memory address many times [7]. When initially thinking about kernel architecture, spend a fair amount of time understanding the graph of memory dependencies for each output element that your kernel produces.
Visualizing this graph can provide a clearer picture of what bytes should be put where.

Avoid Register Spilling [8]

GPUs have a maximum number of registers that can be allocated to each thread. When a thread violates this limit (255 for RTX 3090), the data will spill to ‘local memory’, which is actually just a section of global memory set aside for that thread! If we are lucky, L2 cache will intercept the reads/writes and we won’t have to pay the full latency associated with GMEM. But if we are not careful, we can inadvertently perform tons of slow GMEM accesses and delude ourselves into thinking we are using fast thread registers. When compiling a CUDA kernel, you can add ‘-Xptxas -v’ to your nvcc command to see a printout of register use per thread and make sure you aren’t close to any limits.

Compute Optimizations

Getting bytes to compute units efficiently is important, and so is making sure the compute units themselves are adequately saturated with arithmetic instructions that operate on the incoming bytes. [2]

Maximize Number of Active Warps (Occupancy) [9]

I mentioned in the previous post that it's up to us to make sure all of the compute units are executing useful operations during the course of the kernel execution. This can be done by thinking carefully about memory resources and thread count in each block, since these are the fundamental limiters to how many warps can be active at the same time. On an RTX 3090, each SM can support at most 1536 active threads (48 warps), along with a fixed budget of registers and shared memory.

Notice that while the number of max active threads is 1536, there are only 128 CUDA cores on an SM. What's happening here is the warp scheduler tries to make as many warps ‘active’ as possible by assigning them the registers and shared memory they request. When warps are stalled due to memory latency, the scheduler can swap active threads in and out of CUDA cores to make sure the hardware is fully utilized. This is what occupancy measures: how many of the maximum possible active warps are able to be added to the pool of warps ready to run on a core? In this way the GPU hides memory latency by oversubscribing the hardware. The GPU can only oversubscribe if it's fed kernels that make available blocks with the appropriate number of threads/warps, and don’t hog all the registers/shared memory. While low occupancy will certainly hurt performance, high occupancy doesn’t guarantee compute unit saturation. According to this post on NVIDIA forums, 50% - 75% occupancy is usually acceptable. Also note that while per-SM occupancy is important, we also want to saturate all SMs with work. There are 84 SMs on an RTX 3090, so we need at least 84 blocks to make sure each has something to do.

Tangent

I haven’t tried this out myself, but I do think going forward it would be best to write kernels that are as agnostic to register/shared-mem use/block dimension as possible, in order to let an auto-tuner figure out what allocation of resources is optimal for performance. Sometimes it can be better to have fewer threads per block and more work per thread. This opens up more thread-level ILP (instruction level parallelism) and can enable more registers per thread. This is particularly true when performing block-level reductions, as fewer threads means less thread-to-thread communication. This could lead to lower occupancy but better overall performance.
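To make the resource-limit reasoning in the occupancy discussion concrete, here is a rough, illustrative occupancy estimator; the per-SM limits are assumptions for an Ampere-class SM like the RTX 3090's and should be checked against the CUDA occupancy calculator or Nsight Compute for real work.

```python
# Rough occupancy sketch: how many warps can be resident on one SM given a
# kernel's per-block resource usage. The limits below are assumed values for
# an RTX 3090-class (compute capability 8.6) SM, not queried from the device.
SM_MAX_THREADS  = 1536        # max resident threads per SM
SM_MAX_BLOCKS   = 16          # max resident blocks per SM
SM_REGISTERS    = 64 * 1024   # 32-bit registers per SM
SM_SHARED_BYTES = 100 * 1024  # shared memory per SM

def occupancy(threads_per_block, regs_per_thread, smem_per_block):
    blocks = min(
        SM_MAX_THREADS // threads_per_block,                    # thread limit
        SM_MAX_BLOCKS,                                          # block-count limit
        SM_REGISTERS // (regs_per_thread * threads_per_block),  # register limit
        SM_SHARED_BYTES // smem_per_block if smem_per_block else SM_MAX_BLOCKS,
    )
    active_warps = blocks * threads_per_block // 32
    return active_warps / (SM_MAX_THREADS // 32)

# e.g. 256 threads/block, 64 registers/thread, 48 KiB shared memory per block
print(f"estimated occupancy: {occupancy(256, 64, 48 * 1024):.0%}")  # limited by shared memory
```

Feeding in the register count reported by ‘-Xptxas -v’ makes it easy to see which resource is the occupancy limiter before reaching for a profiler.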
Use Tensor Cores & FMA Units [14] [15]

Tensor cores are designed specifically to accelerate matrix-multiply-accumulate operations on GPUs. Use them whenever an operation can be represented as an MMA. On a similar grain of thought, scalar multiply-accumulates are also hardware accelerated and can be called using the fmaf() function in CUDA. The NVCC compiler typically optimizes operations of the format ‘a = a + (b*c)’ into FMA instructions anyway, but using the function call can make this explicit. One thing to keep in mind, though, is that they don’t benefit memory-bound workloads. For example, a convolution can be performed as an implicit GEMM in order to utilize tensor cores, but the memory overhead of the transforms needed to achieve this may far outweigh the efficiency gains from tensor core utilization for low arithmetic intensity workloads. Don’t worry if this statement is confusing for now; future posts go into roofline models and their implications.

Minimize Warp Divergence [10]

When each thread in a warp of 32 threads executes, control flow overhead is minimized when each thread is performing the exact same operation at the same time. Certain types of data-dependent control flow can cause this to no longer be the case. In these situations, the threads will effectively split into multiple diverged execution paths, with each chunk executing independently of the others. Obviously this hurts hardware utilization, as we are running fewer than 32 threads per warp. Try to minimize warp divergence to the extent possible by making sure all threads in a warp will follow the same execution path.

Unroll Loops [11]

When CUDA code is compiled to PTX, loops with loop counts that are defined at compile time will get unrolled. Unrolling eliminates the overhead associated with checking loop conditionals and incrementing the loop count, and enables instruction level parallelism in the case of loops that unroll to multiple independent instructions. Let's take a look at an example with a simple for-loop. We can hint to the compiler to unroll a loop by using ‘#pragma unroll’. Appending an integer after pragma unroll tells the compiler how many iterations to unroll. By putting a 1 after pragma unroll we can effectively prevent the compiler from unrolling the loop.

Standard Loop CUDA

float temp = data[idx];
#pragma unroll 1
for (int i = 0; i < 10; ++i) {
    temp += i;
}
data[idx] = temp;

Standard Loop PTX

$L__BB0_1:
    cvt.rn.f32.s32  %f4, %r5;      // convert value in r5 to float and move to f4
    add.f32         %f5, %f5, %f4; // add value from f4 to f5 and store in f5
    add.s32         %r5, %r5, 1;   // increment loop counter in r5 by 1
    setp.ne.s32     %p1, %r5, 10;  // compare loop counter (r5) to 10 and set predicate register p1 to true if it is not equal
    @%p1 bra        $L__BB0_1;     // branch conditionally based on the value in p1
    st.global.f32   [%rd1], %f5;
    ret;

Unrolled Loop PTX (CUDA code uses #pragma unroll instead of #pragma unroll 1)

add.f32         %f2, %f1, 0f00000000;
add.f32         %f3, %f2, 0f3F800000;
add.f32         %f4, %f3, 0f40000000;
add.f32         %f5, %f4, 0f40400000;
add.f32         %f6, %f5, 0f40800000;
add.f32         %f7, %f6, 0f40A00000;
add.f32         %f8, %f7, 0f40C00000;
add.f32         %f9, %f8, 0f40E00000;
add.f32         %f10, %f9, 0f41000000;
add.f32         %f11, %f10, 0f41100000;
st.global.f32   [%rd4], %f11;
ret;

Note that in the unrolled case the compiler turned the loop into 10 distinct add.f32 instructions, and derived the constant values based on the loop count at compile time. This change eliminates loop-related overhead.
In this case there is a dependency between each instruction, but in some cases independent instructions resulting from unrolling can also allow the thread to utilize greater instruction level parallelism. The CUDA compiler is quite good at spotting loops that can be unrolled, but throw in a ‘#pragma unroll’ for loops you think could benefit from unrolling.

Use Signed-Ints for Loop Counters [12]

Unsigned integers have defined overflow behavior, as they are expected to loop back around, whereas signed int overflow is undefined behavior at runtime. Having to preserve the defined wraparound semantics reduces the compiler’s ability to optimize loop execution. As a result you may see a small perf improvement in hot loops by using signed ints for loop counters.

Use Fast Math Library (when precision isn’t critical) [13]

CUDA provides a pretty extensive math library for operations that execute on the special functions unit. Some examples of functions include sin(x), cos(x), log(x), exp(x), etc. If you don’t care as much about precision and can accept some rounding errors (given how well low-precision formats like INT8 work in practice, rounding errors are probably fine for many workloads), using the fast versions of these calls can improve performance. Examples: __sinf(x), __cosf(x) vs. sin(x), cos(x).

Maximize Instruction Level Parallelism via Dual-Issue Instruction Dispatch

According to discussion in this thread, dual-instruction dispatch on NVIDIA GPUs isn’t a huge driver of improved performance and not worth too much thought. But it is worth noting that the warp scheduler can issue up to two instructions per cycle IF there are multiple instructions with no data or control flow dependencies. Writing a kernel such that there are fewer dependencies and more diversity in the types of execution units being used (FP32/tensor core/load-store units/etc.) may enable higher instruction dispatch per cycle.

References

[1] https://people.eecs.berkeley.edu/~kubitron/cs252/handouts/papers/RooflineVyNoYellow.pdf
[2] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/contents.html
[3] https://developer.nvidia.com/blog/how-access-global-memory-efficiently-cuda-c-kernels/
[4] https://developer.nvidia.com/blog/cuda-pro-tip-increase-performance-with-vectorized-memory-access/
[5] http://homepages.math.uic.edu/~jan/mcs572f16/mcs572notes/lec35.html
[6] https://slideplayer.com/slide/12553635/
[7] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#constant-memory
[8] https://developer.download.nvidia.com/CUDA/training/register_spilling.pdf
[9] https://on-demand.gputechconf.com/gtc-express/2011/presentations/cuda_webinars_WarpsAndOccupancy.pdf
[10] https://people.maths.ox.ac.uk/gilesm/cuda/lecs/lec3-2x2.pdf
[11] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#branch-predication
[12] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#loop-counters-signed-vs-unsigned
[13] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#math-libraries
[14] https://developer.nvidia.com/blog/programming-tensor-cores-cuda-9/
[15] https://forums.developer.nvidia.com/t/fma/32965
OpenCL Kernel Memory Optimization - Local vs. Global Memory

Hi, I’m new to OpenCL and I consider using it for some graphics computation where using an OpenGL shader seems not to be natural. Before I actually do so I thought I’d try how much of a performance improvement I could get using OpenCL on my Nvidia GTX 460 over my CPU. For this reason, I implemented a simple skeleton skinning algorithm, once on the CPU, without multithreading but using the Eigen library, which provides SSE-optimized vector and matrix classes, and once in an OpenCL kernel executing on the GPU. The vertices, bone matrices etc. are generated randomly on application start. I repeat the whole skinning several times so that it executes long enough to get meaningful timing results.

First I simply tried a kernel where I have as many work-items as I have vertices, each one generating one output vertex. I quickly saw that this is not a good idea because performance was even worse than on the CPU. I figured this was in essence a problem of too many memory accesses, mainly to the bone matrices, which are an array of float16 vectors that is addressed four times in each work-item. Then I changed the algorithm so that each work-item handles multiple output vertices, one after the other, so that I have fewer work-items. In each work-group I create a copy of the bone matrices in local space, and further accesses to these matrices come from local space. The interesting part of my C++ code looks like this:

#define NUM_BONES 30
#define NUM_VERTICES 30000
#define NUM_VERTICES_PER_WORK_ITEM 100
#define NUM_ANIM_REPEAT 1000

uint64_t PerformOpenCLSkeletalAnimation(Matrix4* boneMats, Vector4* vertices, float* weights, uint32_t* indices, Vector4* resVertices)
{
    File kernelFile("/home/alemariusnexus/test/skelanim.cl");

    char opts[256];
    sprintf(opts, "-D NUM_VERTICES=%u -D NUM_REPEAT=%u -D NUM_BONES=%u -D NUM_VERTICES_PER_WORK_ITEM=%u", NUM_VERTICES, NUM_ANIM_REPEAT, NUM_BONES, NUM_VERTICES_PER_WORK_ITEM);

    cl_program prog = BuildOpenCLProgram(kernelFile, opts);
    cl_kernel kernel = clCreateKernel(prog, "skelanim", NULL);

    cl_mem boneMatBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_BONES*sizeof(Matrix4), boneMats, NULL);
    cl_mem vertexBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*sizeof(Vector4), vertices, NULL);
    cl_mem weightBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*4*sizeof(float), weights, NULL);
    cl_mem indexBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*4*sizeof(uint32_t), indices, NULL);
    cl_mem resVertexBuf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, NUM_VERTICES*sizeof(Vector4), NULL, NULL);

    uint64_t s, e;
    s = GetTickcount();

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &boneMatBuf);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &vertexBuf);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &weightBuf);
    clSetKernelArg(kernel, 3, sizeof(cl_mem), &indexBuf);
    clSetKernelArg(kernel, 4, sizeof(cl_mem), &resVertexBuf);

    size_t globalWorkSize[] = { NUM_VERTICES / NUM_VERTICES_PER_WORK_ITEM };
    size_t localWorkSize[] = { NUM_BONES };

    for (size_t i = 0 ; i < NUM_ANIM_REPEAT ; i++) {
        clEnqueueNDRangeKernel(cq, kernel, 1, NULL, globalWorkSize, localWorkSize, 0, NULL, NULL);
    }

    clEnqueueReadBuffer(cq, resVertexBuf, CL_TRUE, 0, NUM_VERTICES*sizeof(Vector4), resVertices, 0, NULL, NULL);

    e = GetTickcount();

    return e-s;
}

The associated program/kernel looks like this:

inline float4 MultiplyMatrixVector(float16 m, float4 v)
{
    return (float4) (
        dot(m.s048C, v),
        dot(m.s159D, v),
        dot(m.s26AE, v),
        dot(m.s37BF, v)
    );
}

kernel void skelanim(global const float16* boneMats, global const float4* vertices, global const float4* weights, global const uint4* indices, global float4* resVertices)
{
    int gid = get_global_id(0);
    int lid = get_local_id(0);

    local float16 lBoneMats[NUM_BONES];
    lBoneMats[lid] = boneMats[lid];

    barrier(CLK_LOCAL_MEM_FENCE);

    for (int i = 0 ; i < NUM_VERTICES_PER_WORK_ITEM ; i++) {
        int vidx = gid*NUM_VERTICES_PER_WORK_ITEM + i;

        float4 vertex = vertices[vidx];
        float4 w = weights[vidx];
        uint4 idx = indices[vidx];

        resVertices[vidx] = (MultiplyMatrixVector(lBoneMats[idx.x], vertex * w.x)
                           + MultiplyMatrixVector(lBoneMats[idx.y], vertex * w.y)
                           + MultiplyMatrixVector(lBoneMats[idx.z], vertex * w.z)
                           + MultiplyMatrixVector(lBoneMats[idx.w], vertex * w.w));
    }
}

Now, per work-item I have only one access to the global boneMats, when I create the local copy, and there are even a lot fewer work-items executing altogether. Then I have NUM_VERTICES_PER_WORK_ITEM*4 accesses to the local array afterwards. As I understand it, local memory should be way faster than global memory, so I thought this would greatly improve performance. Well, the opposite is the case: when I let lBoneMats alias the global boneMats instead, I actually get better performance than with the kernel listed above. What did I get wrong here? Thanks in advance!

I have still not found a solution for this problem. Does nobody have any idea, or anything I could try?

There is the Nvidia Visual Compute Profiler which can tell you some performance information. Looking at your launch parameters, you are launching 300 work items arranged into 10 groups of 30 work items each. On Nvidia GPUs, threads are grouped into warps - a group of 32 threads. Each multiprocessor executes hundreds of threads. Such a large number of threads is needed to hide the latency involved in accessing either global or local memory (although local memory accesses are not as costly). There are several multiprocessors, hence you need thousands of threads to adequately use a GPU. That is why your global memory version is faster, I think. I would suggest changing your launch settings so that there are more threads (keep the shared memory though, just do less work per work item). Also, use multiples of 32 threads.

a couple of things:

a) even if the gpu is slower than a cpu, not having to copy the data to/from the device could help. (i.e. go straight from opencl to opengl buffer or whatever)

b) looks like your loop is accessing memory (vidx) in pretty much the worst-case access pattern. each work-item should access adjacent values where possible. To me it looks like it would be best implemented as a one-work-item-per-output algorithm as you said you first tried. Or even use 4 work items per result (one for each of idx.xyzw). Assuming the indexing is correct (i.e. set vidx == get_global_id(0)), i would have expected that to be faster.

I think you’re also confusing what local work size is - it is just the modulo of the total work size which is allocated to a given work-unit (i.e. shares LDS and some other resources). It isn’t a separate dimension from global work size. It is only a ‘coincidence’ that your code is working and num_vertices/num_vertices_per_work_item is a multiple of num_bones. LDS is way faster than uncached global memory, but if you’re only accessing 30 ‘bones’, then it should fit into L1 cache, in which case LDS isn’t that much of a boost (it depends on the hardware, not sure what it is on nvidia).
There is the Nvidia Visual Compute Profiler which can tell you some performance information. I have tried it with CUDA 4.2, but couldn’t really see what it was trying to tell me. With CUDA 5.0 I can’t get OpenCL profiling to work at all. As I’ve read, OpenCL profiling seems to be broken in 5.0, and the driver of 4.2 does not compile on my 3.5 kernel, so I guess I have to wait until Nvidia fixes that (I’m sceptic as to whether they will at all). I would suggest changing your launch settings so that there are more threads Seems like this was the main problem. I have no idea what I do now that I haven’t done with my first implementation, but now it’s about four times faster than on my CPU+SSE. Also, use multiples of 32 threads. I don’t really know how to do that with my algorithm, as I have no way of controlling the number of vertices, but when I launch one thread for every vertex as I do now, I guess the bit of processing time wasted is not really significant anymore. even if the gpu is slower than a cpu, not having to copy the data to/from the device could help. (i.e. go straight from opencl to opengl buffer or whatever) Maybe I’ll do this in my final implementation. looks like your loop is accessing memory (vidx) in pretty much worst-case access pattern. each work-item should access adjacent values where possible. I can’t see any nice way to do this. Maybe presorting the vertices by their bone matrix indices, but that would be quite costly (although it would have to be done only once) and I don’t like the idea of changing the vertex order. To me it looks like it would be best implemented as a one-work-item-per-output algorithm as you said you first tried. Or even use 4 work items per result (one for each of idx.xyzw). Assuming the indexing is correct (i.e. set vidx == get_global_id(0)), i would have expected that to be faster. As I mentioned before, I do this now, and for whatever reason it’s faster than what I first tried. I think you’re also confusing what local work size is - it is just the modulo of the total work size which is allocated to a given work-unit (i.e. shares LDS and some other resources). It isn’t a separate dimension from global work size. It is only a ‘coincidence’ that your code is working and num_vertices/num_vertices_per_work_item is a multiple of num_bones. That’s what I thought it is: The number of work-items (threads) per work-group (thread block). I know I have to choose execution parameters so that the total number of work-items is evenly dividable by it. In my understanding, changing local work size should not affect performance, assuming shared memory is not used (otherwise the more work groups you have, the more global-to-shared memory copies have to be done, assuming every work group always copies the same amount of data) and it is still a multiple of the warp size (because otherwise the warps aren’t fully utilized). One question I still have which I couldn’t guess from Nvidias docs is: Can a single warp be made up of threads from different work groups (thread blocks)? Although there might be room for further improvement, at least I can see that the GPU is actually faster than the CPU, so I’m satisfied for now. The only thing I can’t quite guess is why the same program runs about 8 times slower than the CPU on my old GeForce 8200, even when I optimize the execution parameters. I guess that is because it’s an onboard GPU and global memory accesses are even slower than on a GPU with dedicated memory. 
The same is true when I execute the CL program on my CPU device, but it might just be too massively multithreaded for a CPU, I haven’t tested this enough yet. Anyway, thanks for your help! That’s what I thought it is: The number of work-items (threads) per work-group (thread block). I know I have to choose execution parameters so that the total number of work-items is evenly dividable by it. In my understanding, changing local work size should not affect performance, assuming shared memory is not used (otherwise the more work groups you have, the more global-to-shared memory copies have to be done, assuming every work group always copies the same amount of data) and it is still a multiple of the warp size (because otherwise the warps aren’t fully utilized). Changing local work size will affect performance outside of just using LDS for a bunch of reasons: everything in the workgroup executes in lock-step, which affects cache and branching stuff, it affects how many registers are required which affects how many workgroups can be executed concurrently, etc. BTW use a worksize multiple of 64 if you also want it to work well on AMD hardware, as that is the minimum it requires. One question I still have which I couldn’t guess from Nvidias docs is: Can a single warp be made up of threads from different work groups (thread blocks)? A warp is just a hardware implementation thing specific to nvidia. But afaik, all threads in a warp are executing the same code at the same time: so they have to be part of the same opencl workgroup for it to make any sense. i.e. i believe there is a 1:N mapping of opencl workgroup to nvidia warp. Although there might be room for further improvement, at least I can see that the GPU is actually faster than the CPU, so I’m satisfied for now. The only thing I can’t quite guess is why the same program runs about 8 times slower than the CPU on my old GeForce 8200, even when I optimize the execution parameters. I guess that is because it’s an onboard GPU and global memory accesses are even slower than on a GPU with dedicated memory. The same is true when I execute the CL program on my CPU device, but it might just be too massively multithreaded for a CPU, I haven’t tested this enough yet. Anyway, thanks for your help! Well, there is a large variation in performance between GPU cards, so you can’t expect a speed-up on every one of them. And to get that performance you need to access memory properly - i.e. coalesced. I have a follow-up question to this. In my GPU there are 384 cores and 8 compute units (streaming multiprocessors), so there are 384/8 = 48 streaming processors on each compute unit. Given that the NVidia warp size is 32, which means 32 threads execute in step, doesn’t that mean 16 SPs are not doing anything on each cycle? That doesn’t seem to make sense to me. Can someone help to clarify? Thanks, J
How to optimize tail effect? Hi Experts, I have been optimizing a kernel function for several days now, and I believe there is no more room for optimization in terms of mathematics and algorithms. It’s time to focus on CUDA programming for further optimization. According to Nsight Compute, the tail effect is the biggest bottleneck. My device is the AGX Orin, with 16 SMs. Currently, gridDim = dim3(121, 3), blockDim = 128 (4 warps). I understand that the tail effect is caused by an imbalance in the workload of the last wave, and I may need to fill the remaining blocks in the last wave. I have also referred to CUDA Pro Tip: Minimize the Tail Effect | NVIDIA Technical Blog. I would like to know if my understanding is correct, and what would be the best approach for optimization. Thanks! [Nsight Compute screenshot of the rule output, quoted below] Tail Effect: Est. Speedup: 50% A wave of thread blocks is defined as the maximum number of blocks that can be executed in parallel on the target GPU. The number of blocks in a wave depends on the number of multiprocessors and the theoretical occupancy of the kernel. This kernel launch results in 1 full waves and a partial wave of 171 thread blocks. Under the assumption of a uniform execution duration of all thread blocks, the partial wave may account for up to 50.0% of the total kernel runtime with a lower occupancy of 32.6%. Try launching a grid with no partial wave. The overall impact of this tail effect also lessens with the number of full waves executed for a grid. 3 * 121 = 363 = 192 + 171 You have two waves, one with 192 blocks, one with 171 blocks. That is quite balanced. Not sure where the 32.6% is coming from. I would doubt that you achieve a speed-up of 50% without getting more work to the GPU with one invocation (e.g. two iterative kernel launches put into one kernel launch). Thank you for your reply! This is a single kernel, so there’s no issue with iterative or consecutive launches. In fact, I often doubt whether the optimization suggestions provided by Nsight Compute can actually be achieved… This is a single kernel, so there’s no issue with iterative or consecutive launches. I was more thinking of combining with other launches to make the invocation more efficient. With two waves the amount of tail effect is higher than with more waves. Or with parallel kernel launches on other streams, etc. If you want to explore it further, the usual suggestion (I think) would be to rewrite the kernel as a grid-stride loop, and choose a grid size to exactly fill your GPU. That will reduce your kernel to a single wave. The 50% number is based on the idea that that will be the most efficient launch (with respect to occupancy and the tail effect), and “could” result in a kernel duration equivalent to a single wave of your current realization. That could be viewed as an “upper bound” or best-case outcome. So if you wish, interpret the Nsight Compute suggestions that way. The percentage speedup is “the most you could possibly get from this change, if all other conditions were perfect for the effect.” Since Nsight Compute can’t (currently) do the level of analysis needed to accurately predict the actual speedup from a complex refactoring, it gives the output in that sense. Thanks for the hints! After switching to grid-stride and reusing threads, the duration dropped to 18 microseconds, a 25% improvement. Additionally, the Tail Effect: Est. Speedup: 50% suggestion has disappeared from the optimization opportunities section.
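For reference, a common way to implement the "grid-stride loop sized to exactly fill the GPU" suggestion is to ask the occupancy API how many blocks of the kernel are resident per SM and launch exactly that many times the SM count, so there is one full wave and no partial wave. The kernel below is a generic placeholder, not the poster's kernel; it is only meant to show the launch arithmetic.

__global__ void myKernel(float *data, int n) {
    // grid-stride loop: a fixed-size grid walks over an arbitrarily large problem
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        data[i] *= 2.0f;
}

void launchSingleWave(float *data, int n) {
    int device = 0, numSMs = 0, blocksPerSM = 0;
    const int blockSize = 128;
    cudaGetDevice(&device);
    cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, device);
    // how many blocks of this kernel can be resident on one SM at this block size
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, myKernel, blockSize, 0);
    int gridSize = numSMs * blocksPerSM;   // exactly one full wave
    myKernel<<<gridSize, blockSize>>>(data, n);
}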
I’m curious whether the 50% improvement mentioned in the NCU report can really be achieved. [Nsight Compute screenshot] Hi, with just the original two waves, one should in general take special care, when using grid-stride loops, with how the work is distributed across the different SMs and SM partitions, to avoid one SM partition finishing early while others still have several more blocks to do (which is like an implicit tail effect). That is especially relevant if one SM runs more than 4 warps (i.e. one SM partition has more than 1 warp). In your case, each SM partition has one warp, so it is less relevant. I want to mention briefly that the previous results had some bugs. The final results, which used grid-stride loops, did not show significant gains. I might share the code later. I wouldn’t really expect much gain from the grid-stride refactoring by itself, for a case like this (2 waves). As curefab points out, you may not be changing the work breakdown structure very much, and even though you have eliminated the “wave effect”, you probably haven’t done much to actually make the imbalance go away. Naturally this would depend to some degree on how much work you are actually doing per element or per thread. If the work per element or per thread is “large” then the refactoring is likely to make little difference. To see something approaching a large difference in performance (like 50%) you would need a situation where the work per thread or per element was almost zero, such that the overhead of work scheduling (e.g. deposition of blocks, traversal of loops, etc.) was dominating your performance. Such characteristics are not a hallmark of good CUDA code anyway, although people sometimes wrestle with those cases as well.
Introducing Machete, a Mixed-Input GEMM Kernel Optimized for NVIDIA Hopper GPUs Oct 14, 2024 Mixed-input quantization is a technique that processes weights and activations at different precisions in neural networks. The most common implementation is w4a16 quantization (e.g., GPTQ or AWQ), which uses 4-bit quantized weights and 16-bit activations (float16 or bfloat16). This approach primarily aims to reduce GPU memory requirements for model execution. In most Large Language Model (LLM) workloads, model weights consume the majority of GPU memory. By quantizing weights from float16 to 4-bit, a remarkable ~4x reduction in memory needed for weight storage can be achieved. Additionally, smaller weights can lead to speedups when the linear layer is memory-bound (i.e., limited by weight loading), which occurs when the batch or sequence length is small, resulting in activations being much smaller than the weights. LLM inference involves a mix of both compute-bound and memory-bound iterations. On modern NVIDIA Hopper GPUs, current state-of-the-art mixed-input linear kernels struggle in compute-bound scenarios (as illustrated in Figure 1). We are excited to announce Machete, Neural Magic's latest advancement in mixed-input quantization performance. This kernel is the spiritual successor to the Marlin kernels created by Elias Frantar and integrated into vLLM by Neural Magic. While Marlin was specifically designed for Ampere generation GPUs and struggles on Hopper GPUs (namely H100), Machete was built on top of the work highlighted in CUTLASS 3.5.1 (see example #55 as our initial starting point). This allows it to efficiently target Hopper and beyond, performing well in both compute and memory-bound regimes. It optimizes the on-the-fly upconversion of weights required in mixed-input scenarios, and hides this latency by overlapping it with compute and data-movement. Machete is now available in vLLM 0.6.2+ as a backend for w4a16 and w8a16 compressed-tensors models, for GPTQ models, and more to come. With Machete, you can now serve Llama 3.1 70B on a single H100 GPU with up to 5 user requests per second while maintaining a median time to first token (TTFT) of <250ms and a median time per output token (TPOT) of <100ms (using chunked prefill on the ShareGPT dataset).
With Machete you can now also hit those same serving targets for Llama 3.1 405B using 4 H100 GPUs with up to 3 user requests per second. NOTE: Our use of the term "mixed-input" rather than "mixed-precision" is deliberate, as it more accurately describes the specific case we're addressing. The term "mixed-precision" has traditionally been used to describe a broader range of cases, namely including the case where activations and weights share the same type but are accumulated into a different type (i.e. w8a8). Optimizing Mixed-Input Linear Operations: Weight Pre-Shuffling While Neural Magic's previous blog post on Marlin covered many optimizations for mixed-input linear operations, this article focuses on an important previously undiscussed optimization used by both Marlin and Machete: weight pre-shuffling. To understand the benefits of weight pre-shuffling, we first need to examine how data is fed into the tensor cores in NVIDIA GPUs. When performing matrix-input multiplication on a GPU, the process begins by loading data for a small subproblem from global memory to an SM's local shared memory. This data is then transferred to threads and passed to the tensor cores. Each thread is responsible for loading and holding a specific piece of data in its registers. The layout of this data in registers follows a fixed, complex pattern that varies depending on the instructions used (such as mma or wgmma). In PyTorch, weights are typically stored in row-major or column-major format, which doesn't align with the intricate layout required by tensor cores. This mismatch creates a challenge: while we can load data into shared memory in row-major format, we must shuffle it when loading into registers to match the tensor core requirements. For the purposes of illustration in the following animations, we're using a fictitious GPU that has 8 threads per warp and tensor cores that operate on 8x8 chunks of the weight matrix. While simplified, this closely matches the types of layouts used by NVIDIA tensor cores, albeit scaled down. In these diagrams, we only show the weight matrix (and not activations) as loading and up-converting the weight matrix is the main challenge in mixed-input linear layers. For standard data types (16, and 32-bit), NVIDIA provides an efficient ldmatrix instruction to perform this shuffling in hardware; i.e. ensuring that the right data gets shuffled to the right thread. However, this instruction isn't available for 4-bit types. When working with 4-bit elements and using float16 or bfloat16 as the compute type, we need to load 4-bit elements to match the thread layout for a 16-bit type. Without a 4-bit ldmatrix instruction, we would naively need to resort to performing four 8-bit shared memory loads per tensor core operation. In this case the data shuffling is being handled by software using multiple shared memory loads. These additional shared memory loads are detrimental to performance, as they add latency and the use of only 8-bit loads restricts the shared-memory to registers bandwidth. To overcome this limitation, we can reorder the data ahead of time. By doing so, we can perform a single 32-bit load from shared memory instead of four 8-bit loads. This approach is much more efficient in terms of shared memory bandwidth and latency, ensuring we don't get bottlenecked waiting for shared memory. Importantly, all global memory reordering is done in advance, so it doesn't impact inference time. 
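As a rough scalar illustration of the shift-and-mask idea described above, unpacking eight 4-bit weights from one packed 32-bit word and widening them to fp16 could look like the sketch below. This is deliberately simplified: the production kernels convert several values per instruction using the interleaved tiles and bit tricks described in the following paragraphs, and the scale/zero-point dequantization shown here is a generic w4a16 convention, not necessarily Machete's exact formula.

#include <stdint.h>
#include <cuda_fp16.h>

// Unpack eight 4-bit quantized weights from one 32-bit word and upconvert to fp16.
__device__ void unpackInt4x8(uint32_t packed, half out[8], half scale, half zero_point) {
    #pragma unroll
    for (int i = 0; i < 8; ++i) {
        // shift the target nibble into the low 4 bits, then mask it out
        uint32_t q = (packed >> (4 * i)) & 0xFu;
        // dequantize: w = (q - zero_point) * scale
        out[i] = __hmul(__hsub(__uint2half_rn(q), zero_point), scale);
    }
}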
This pre-shuffling and its effect on how the data gets loaded into memory can be seen in Animation 3. We can push this optimization even further. By interleaving data for four tensor operations together (e.g., interleaving four 8x8 tiles in the visualization), we can perform 128-bit loads—the widest shared load instruction currently available on CUDA devices. After loading the weight parameters into the correct registers, they must be upconverted to 16-bit. Animation 5 demonstrates this process, highlighting how the interleaving of tiles can simplify upconversion and save instructions. By interleaving tiles in global memory, the data is arranged so that, once in registers, multiple nibbles can be efficiently extracted and up-converted in parallel. This is achieved by shifting the nibbles into the lower four bits of their destination registers using simple bit shifts and masking operations and then expanding in-place. If you're curious about these interleaved upconverts, you can find them here. What's New in Machete vs. Marlin? These types of repackaging techniques have already been used in previous mixed-input kernels (namely Marlin and AWQ), so what does Machete do differently? The motivation for developing Machete mainly stems from the poor performance of current mixed-input linear kernels on the NVIDIA Hopper architecture when it comes to larger, more compute bound matrix-multiplications, as can be seen in Figure 1. New Tensor Core Instructions (wgmma) For Marlin, the poor performance on Hopper architecture primarily stems from the use of outdated 'mma' tensor core operations. To achieve peak FLOPs on NVIDIA Hopper, the new 'wgmma' instructions must be utilized. Using only 'mma' instructions results in a loss of approximately 37% of peak compute throughput [1, 2]. Marlin's weight pre-shuffling, being hand-derived and implemented, makes it challenging to easily adapt to the new 'wgmma' layouts. Machete circumvents this issue by employing CUTLASS CUTE layout algebra to construct a description of the repacked layout for a full weight matrix using instruction layout definitions available in CUTLASS. This approach should, in theory, facilitate easier adaptation to any future instructions as well as different types (w4a8 is already in progress). A key challenge with 'wgmma' is that for a matrix multiplication C = AB, only A and C can be sourced from registers, while B must be sourced from shared memory. Since we upconvert the 4-bit bit weights to 16-bit floating point values in registers, we can avoid restoring them into shared memory by computing Y^T = W^T * X^T instead of Y = XW. This ensures the weights are ‘A’ (left side input-operand) with respect to the ‘wgmma’ instructions, allowing them to be sourced directly from registers. CUTLASS enables us to more easily compute the transpose problem by simply manipulating layouts. Tensor Memory Accelerator (TMA) The Tensor Memory Accelerator (TMA) represents a significant advancement in NVIDIA Hopper GPUs' memory handling capabilities. This new hardware feature is designed to asynchronously copy blocks of multidimensional data, known as subtensors, from global memory to shared memory. The introduction of TMA brings several important benefits to the table. Primarily, TMA reduces register pressure by offloading data movement operations, thereby freeing up CUDA cores for other computational tasks. It also simplifies address calculations by handling these complex operations in hardware. 
Furthermore, TMA’s ability to operate independently of compute operations allows for better overlap between memory transfers and computations. Machete takes advantage of this new hardware feature by leveraging CUTLASS’s existing TMA infrastructure. Warp-specialization Warp-specialization, introduced in CUTLASS 3.0, divides warps into data movement (producer) and computation (consumer) roles. This technique aims to better overlap data movement and computation, improving memory and tensor core latency hiding. Machete incorporates this approach by leveraging existing infrastructure in CUTLASS. For a more detailed explanation of warp-specialization in CUTLASS, refer to this COLFAX Research blog. Machete Performance With all of the above optimizations in place, we can see that Machete outperforms the other mixed-input linear kernels for batch size / prefill seq. len 128+. At batch sizes of 128 and above, the performance is competitive with FP16, meaning there is no longer a trade-off between prefill performance or high-batch-size performance and improved low-batch and decode performance. In Figure 6 we can see end-to-end serving performance of these kernels on a 4-bit Llama 3.1 70B on a single H100. At the higher user request rates (3+ req/s), we see a geomean speedup of 29% for input token throughput and 32% for output token throughput. In Figure 7 we can see end-to-end serving performance of these kernels on a 4-bit Llama 3.1 405B on 4 H100s. At the higher user request rates (3+ req/s), we see a geomean speedup of 42% for both input token and output token throughput. Future Work As we continue to develop and refine Machete, we have several exciting areas of focus for future improvements. These initiatives underscore our commitment to pushing the boundaries of mixed-input quantization performance. By addressing these areas, we aim to make Machete an even more powerful and flexible tool for efficient LLM inference on NVIDIA Hopper GPUs and beyond. We’re excited about the potential impact of these improvements and look forward to sharing updates as we progress. Subscribe to our blog, follow us on X, and join our bi-weekly vLLM office hours to stay tuned for more exciting AI developments. About Neural Magic Neural Magic is advancing the performance of AI inference by optimizing large language models (LLMs) for efficient and scalable deployments. As a leading contributor to the open-source vLLM project, we develop and implement key techniques like sparse architectures, mixed-precision quantization, and performance optimizations to enhance inference speed, reduce memory footprint, and maintain model accuracy. Neural Magic is also a member of NVIDIA Inception, a program designed to nurture startups, and is thankful to the CUTLASS team for their valuable work. Our goal is to empower developers to build and deploy high-performance LLMs across different hardware configurations without compromise. To learn more, visit neuralmagic.com or check out our GitHub to accelerate your AI workloads today.
Optimizing memory-bound kernel (memory dependency around 95% in NVVP) I have a piece of code that, according to Nvidia Visual Profiler, is memory bound and so far I haven’t managed to improve it further after passing some arguments as constants. If you copy/paste and compile the following code, NVVP shows that both kernels are memory bound, have around 89% occupancy even though the kernel configurations should fully saturate the device, and the SMs from 1 to 7 are around 88-90% utilization while the other ones are closer to 100%. Error checking was omitted for easier reading, but cuda-memcheck reports no errors for any array length I use. #include <iostream> __global__ void init_array(float *array, size_t len) { for(size_t idx = blockDim.x * blockIdx.x + threadIdx.x; idx < len; idx += gridDim.x * blockDim.x) array[idx] = idx; } __global__ void transform_array(float *in, float *out, const float scale_factor, size_t len) { for(size_t idx = blockDim.x * blockIdx.x + threadIdx.x; idx < len; idx += gridDim.x * blockDim.x) out[idx] = in[idx] * scale_factor; } int main(void) { float *array_in, *array_out; size_t length = 100000000; const unsigned short block_Size = 256, grid_Size = 200; const float factor = 0.5; // Allocate and initialize memory cudaMallocManaged(&array_in, length * sizeof(float)); cudaMallocManaged(&array_out, length * sizeof(float)); cudaMemset(array_in, 0, length * sizeof(float)); cudaMemset(array_out, 0, length * sizeof(float)); cudaDeviceSynchronize(); // Fill the input array init_array <<< grid_Size, block_Size >>> (array_in, length); cudaDeviceSynchronize(); // Transform input and write to output array transform_array <<< grid_Size, block_Size >>> (array_in, array_out, factor, length); cudaDeviceSynchronize(); cudaFree(array_in); cudaFree(array_out); return 0; } The first kernel just initializes the input array with some numbers using a strided loop, and second kernel saves the multiplication between the input element and some scaling factor (which I calculate with other functions, but here it is just an arbitrary value) to the output array, again using the same strided loop. Essentially doing a lot of work in global memory. How do you normally get rid of/alleviate this bottleneck? You won’t eliminate the memory bottleneck for a memory bound code. The operations you are doing here are so trivial they are going to be memory bound. There is likely very little you can do to make them run substantially faster. At this point, if you want to improve things, you are in the realm of what I call “ninja methods”. Things like tuning kernel size (e.g. number of blocks - easily doable with your grid-stride loop method) for the number of SMs in your device to minimize the tail effect, attempting to see if larger vector loads will improve things (slightly), etc. Ninja methods are referred to here: [url]http://on-demand.gputechconf.com/gtc/2012/presentations/S0514-GTC2012-GPU-Performance-Analysis.pdf[/url] These methods in my experience don’t usually provide more than a few percent improvement. At a higher level of abstraction, programmers who have multiple operations like this to do will sometimes seek to fuse operations. This means combining multiple kernel calls to do more work in a single kernel call. The objective is to do as much work as possible per load and store operation in global memory. Your two operations could be trivially fused into a single kernel, for example. Fusing to reduce kernel calls also saves the overhead of additional kernel calls - another ninja topic (usually). 
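To make the fusion suggestion concrete for the code in the question, here is a sketch of the two kernels collapsed into one. It keeps the same grid-stride structure and touches each element of both arrays exactly once, cutting the global traffic from three accesses per element (store in init_array, then load and store in transform_array) down to two.

// Fused version of init_array + transform_array (sketch).
__global__ void init_and_transform(float *in, float *out, const float scale_factor, size_t len)
{
    for(size_t idx = blockDim.x * blockIdx.x + threadIdx.x; idx < len; idx += gridDim.x * blockDim.x) {
        float v = (float)idx;        // the value init_array would have produced
        in[idx] = v;                 // keep only if the initialized input is still needed later
        out[idx] = v * scale_factor; // the work of transform_array
    }
}

If the initialized input array is not needed afterwards, the in[idx] store can be dropped as well, and the whole job becomes a single store per element.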
In any event, these trivial memory-bound kernels are “fully optimized” when the kernel runs at the rate of memory bandwidth. For example, determine the total number of loads and stores done by a kernel, in bytes, and divide by the kernel execution time, in seconds. This bytes/sec number is then compared to a proxy measurement of peak achievable bandwidth (e.g. such as the device-to-device memory bandwidth reported by the bandwidthTest sample code). When your kernel is running at that rate, it probably cannot be optimized further. You are done, excepting higher-level “meta” work like algorithm redesign or fusing of operations/kernels. Thanks for these clarifications, txbob. So I think it is just what it is. And the Titan V with that ridiculous memory bus is probably laughing at it. But while I was reading this document and trying some of the ninja techniques, like increasing the grid size to raise the occupancy (with 200 it had 89%, with 1000 it goes to 98%) and shaving some milliseconds here and there, I found by accident, clicking the wrong kernel to profile, that the array reduction we worked on some weeks ago actually has some branch divergence, doesn’t it? It is exactly the last lines: if (tid == 0) array_out[blockIdx.x] = sdata[0]; Only a few threads will execute it, so I don’t think it is all that harmful, yes? NVVP shows an increase in divergence as the grid size increases. There is probably an expression in English for this: you aim at one thing but hit another… Many, many kernels will have some divergence. Your grid-stride loops, for example, are prone to some small divergence as well. These sorts of things are usually insignificant from a performance perspective.
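As a sketch of that bookkeeping applied to the transform_array kernel from the question (error checking omitted): time the launch with CUDA events, count one 4-byte load plus one 4-byte store per element, and compare the resulting GB/s figure against the device-to-device number reported by bandwidthTest.

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
transform_array <<< grid_Size, block_Size >>> (array_in, array_out, factor, length);
cudaEventRecord(stop);
cudaEventSynchronize(stop);

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);

// one 4-byte load and one 4-byte store per element
double bytes = 2.0 * length * sizeof(float);
printf("effective bandwidth: %.1f GB/s\n", bytes / (ms * 1e-3) / 1e9);

If this number sits close to the bandwidthTest result, the kernel is, in txbob's sense, fully optimized.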
Keeping data between kernels calls Hi, I have a quite long kernel that I want to optimize. My idea is to split this single kernel into multiple smaller ones that can be better optimized. The problem I see is now that the result of one kernel is the input data of the next. So far the only solution I found to keep the data between different kernel calls is the use of global memory. If I unterstand correctly the lifetime of shared memory does not allow the use for different kernels - so that is unfortunately not an option. Is there any other way to keep the data? Thanks. My idea is to split this single kernel into multiple smaller ones that can be better optimized. Better optimized in which sense? What factors will be driving the performance improvements that are being envisioned? So far the only solution I found to keep the data between different kernel calls is the use of global memory. Keeping data resident on the GPU and minimizing data transfers between host and device is the correct approach. Hi, thanks for your fast reply. Some part of the kernel can make use of a whole warp to solve a single problem, other parts may use just a single thread for a fast calculation. This is alternating through the kernel. So I want to separate the parts that can make use of more than a warp to solve a problem from the other that needs just a single thread. Hope I could make my point clear? I would hope that the kernel launch comprises more than one warp, because that would still be very inefficient. As for the single-threaded portions: I can see how this may be necessary if there is simply no inherent parallelism available, but I am not sure what kind of situation “just a single thread for a fast calculation” refers to. Generally speaking, the kind of situation you describe, with varying degrees of parallelism available throughout a kernel, is not uncommon and may well be the highest-performing option. A thorough design process would prototype both this design approach as well as the split-kernel approach to assess which one is faster for this particular use case. The other aspect that is likely worth investigating is whether there are algorithmic changes that could be used to increase the amount of available parallelism, so as to avoid single-threaded computation. Alternatively or additionally, since single-threaded execution is ideally suited to the CPU, one might alternatively look into how these portions of the computation could be moved to the host, maybe in the form of pre-computation. A thorough search of the literature might turn up interesting ideas. How to best map work to execution resources on massively parallel computational platforms is still an area of active research. Sometimes a switch from traditional data structures can be the key to better parallelization. By now the applicability of GPUs to just about any kind of computational task has been looked into, so a literature search is highly likely to yield some useful information. The process I like to use for best results is to first spend some quality time thinking up a design and prototyping it (usually with variants) by myself. I put serious effort into this. In a second stage I then search the literature extensively, going back multiple decades, looking for anything even remotely applicable. 
Having made a serious first effort by myself usually provides me with enough insight that I can make a reasonable assessment whether some idea from the literature is (1) roughly along my own line(s) of though, (2) inferior to my own ideas (3) superior (at least in parts) to my best efforts. It is fairly rare (but happens), that I realize that an approach from group (3) is so vastly superior to my own ideas that I adopt it wholesale. Most of the time my final design is the synthesis of my original ideas and ideas from the literature: the “stand on the shoulder of giants” approach. Thanks again for your detailed answer. Yes, I agree that prototype is one way to go (already started). Just to clarify my approach since I think I didn’t explain very well what I try to do. With ‘single threaded’ I do not mean that I start a kernel with a single grid and 32 threads. What I mean is that each thread gives me a single solution. I’m also happy with the performance of these parts. In other parts of the kernel I can use 4 thread to improve performance, also giving me a single solution, and in other parts I can make use of a complete warp. If all of these parts work in a single kernel I need to configure the kernel in that way that the part that needs most of the threads determine how many solution my kernel can calculate. Sample: If I start now a kernel like this: grid <64, 1, 1> and block <128, 1, 1> I use 64 * 128 threads that gave me 256 solutions since the last part needs 32 threads for a single result. And on the other side I have most of the GPU doing nothing for part 1 and 2. My idea for splitting the kernel is now to run part 1 in its own kernel with a good occupancy and store all results in memory. The run part 2, reading the results from part 1 and continue calculation also in configuration that its kernel make good use of the gpu - then finnaly run the last part - again with its own kernel configuration. I hope to get better performance because part 1 and 2 can use their own configuration resulting in a better occupancy. I can also use the unused MC for something else in a different stream (if the used resources will allow this). But this something for a later development if splitting the kernel gives the expected results. Thanks for your help and insights. Hi TrailingStop, Variant 1: it can be done by e.g. using a single kernel and have each warp calculate 32 solutions. In your example: For the first part of the kernel, each thread of the warp calculates one solution; for the second part, you have a for loop from 0 to 3 and within the loop body 4 threads each cooperate for 8 independent calculations concurrently times 4 loop iterations; for the third part you have a for loop from 0 to 31 with only one solution calculated for each iteration. The data can be exchanged by warp shuffle or by shared memory. Variant 2: A different approach would be to have whole warps doing nothing instead of some threads of a warp. Keeping warps idle (e.g. by blocking for a _syncthreads()) is free, because the blocked warps are not scheduled and do not use up compute resources, except that the effective occupancy lowers. Your example would be a (too) extreme case. But basically it would go: Start 32x32 thread blocks. For part 1 one warp is active; for part 2 fours warps are active; for part 3 thirty-two warps are active. The data would be exchanged by shared memory. Generally: The thread indices of threads within a block can have different meanings throughout your kernel. 
You are not bound to map them 1:1 to your problem space indices. Just see the threads as numbered resources. Between each __syncwarp() (or even __syncthreads()) you can redefine how the work is distributed onto threads, and you can add any for loop within the kernel on top of it. Hi Curefab, thanks a lot for this information. I will give them a try and check which one works best in my case. Thanks!
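A minimal skeleton of Variant 2 described above, with made-up per-phase work that only serves to show the structure: each phase activates just as many warps as it has parallelism for, the idle warps wait at the barrier (costing little while blocked), and shared memory carries the intermediate results from one phase to the next.

// Sketch: one block of 32 x 32 = 1024 threads, phases with 1, 4 and 32 active warps.
__global__ void multiPhaseKernel(float *out)
{
    __shared__ float scratch[1024];          // hand-off buffer between phases
    const int tid  = threadIdx.x;            // launched with blockDim.x == 1024
    const int warp = tid / 32;

    if (warp == 0) {                         // phase 1: only 1 warp (32 threads) active
        for (int i = tid; i < 1024; i += 32) // each of the 32 threads fills 32 slots
            scratch[i] = (float)i;
    }
    __syncthreads();

    if (warp < 4) {                          // phase 2: 4 warps (128 threads) active
        for (int i = tid; i < 1024; i += 128)
            scratch[i] = scratch[i] * 2.0f;
    }
    __syncthreads();

    out[blockIdx.x * 1024 + tid] = scratch[tid] + 1.0f;   // phase 3: all 32 warps active
}

The thread-index-to-work mapping changes between the barriers, which is exactly the "threads as numbered resources" point made above.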
Why performance is worse with CUBLAS- than with kernel-function System: CPU: Intel Core i5-4570 MSVS Community 2017 v15.9.7 Platform Toolset: Visual Studio 2017 (v141) Build: Release x64 GPU: GeForce GT 640 (c.c. 3.0) CUDA Compilation tools R10.1, V10.1.105 CUDA Driver Ver.: 10.1 I am using CUDA for last couple of months. Goal of my research is to develop performance optimized 2D DCT transform kernel function. Optimization targets short processing time. Since transform is used for video processing batches of data are processed. Transform can be described with mathematical equation C = A * B * AT where A and AT are predefined matrices. All matrices are of size 32 x 32. Own kernel function was developed at first and to check potential improvement variant with CUBLAS function was developed as well. Function cublasgemmBatched()was used for this purpose. It was used twice for two multiplications from math eq. Batch size is 12960. Results were compared at the end for both variants. I expected that transform variant with CUBLAS will be faster but processing time with kernel function is almost 10x faster. How to explain this ? Any idea where to search for an answer ? Should I try with strided batched matrix multiplication cublasgemmStridedBatched() since I operate with square matrices only ? Or there is another library which should be used to outperform the kernel function ? I know I ask to many questions :) but any suggestion is welcomed. There was problem that I didn’t have warmup call. Its absence was of significant importance for CUBLAS variant since after warmup was added for both, CUBLAS and own kernel, processing time for CUBLAS variant was 1,38x shorter. In next iteration I moved to cublasgemmStridedBatched(). There CUBLAS showed even better performance, it was 1,73x faster. Using NVIDIA visual profiler I identified that low shared efficiency (50%) in my own kernel had improvement potential. After rewriting my kernel for vectorized memory access with float2 data type shared efficiency increased to about 98% and my kernel was 1,3 faster than CUBLAS. Now, I don’t know why CUBLAS exhibits such low performance. In the profiler I see that block size for CUBLAS kernel is relatively small (8x8) what decreases number of possible active warps per SM and decreases the occupancy (30%). Global store efficiency is also low, 25%. Is there an issue that CUBLAS GEMM is optimized for specific matrix sizes and my application with 32x32 matrices is not in that group ? Probably I will move to Tesla K40 GPU and see does GPU architecture makes the difference. Generally speaking, GEMM maps to dozens of different kernels, optimized for different GPU architectures, matrix sizes, transpose modes, matrix aspect ratios etc. For a given GEMM invocation a heuristic picks the most appropriate kernel(s). The heuristic may not always pick the optimal kernel, or none of the available kernels may be the perfect fit for a particular call to GEMM. As I recall, batched GEMMs in particular were introduced primarily to deal with very small matrices, as some applications need to handle tons of matrices of size 3x3, 4x4, or thereabouts. Matrices of size 32x32 may be close to the upper limit of what batched GEMMs were targetted to handle; check the documentation. With regard to the sub-optimal performance observed, consider filing an enhancement request with NVIDIA. You can file one by using the bug reporting form and prefixing the synopsis with “RFE:” to mark it as an enhancement request. 
Realistically, given the age of the Kepler architecture, it is unlikely that improvements will be made for compute capability 3.x, but equivalent issues may affect newer architectures. The primary targets for performance improvements in libraries are the latest GPU architectures (at this time: Turing and Volta) although some amount of back-porting of such improvements to older architectures may occur.
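For reference, a strided-batched call for this 32x32 DCT case might look roughly like the sketch below: column-major layout as always with cuBLAS, the handle and device pointers are assumed to be created elsewhere, and the zero stride on A broadcasts the single fixed transform matrix across the whole batch (the second multiplication by the transposed matrix would be a second, analogous call).

#include <cublas_v2.h>

// One strided-batched call for the first multiplication of the 2D DCT,
// C_i = A * B_i, over a batch of 12960 matrices of size 32x32.
// d_A holds the single 32x32 transform matrix; d_B and d_C hold the batch back to back.
cublasStatus_t dctStage1(cublasHandle_t handle, const float *d_A, const float *d_B, float *d_C)
{
    const int n = 32, batch = 12960;
    const float alpha = 1.0f, beta = 0.0f;
    return cublasSgemmStridedBatched(
        handle, CUBLAS_OP_N, CUBLAS_OP_N,
        n, n, n,
        &alpha,
        d_A, n, 0,                    // strideA = 0: reuse the same A for every batch entry
        d_B, n, (long long)n * n,     // each B_i is n*n elements apart
        &beta,
        d_C, n, (long long)n * n,
        batch);
}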
For context: this WebGPU version achieves ~17% of peak theoretical performance of M2. With CUDA (i.e. cuBLAS), you can reach ~75% of peak performance for the same matrix config (without tensor cores). Not on the same computer, CUDA doesn’t run on the integrated GPU of the Apple M2 Pro. That single lib call must have used the AMX accelerator, which is separate from the cores and shared by a group of cores. So that AMX accelerator performance may be greater than that of all CPU cores together. AFAIK, some Apple CPUs have one AMX accelerator for the big cores and another AMX accelerator for the smaller cores, but in any case there is no chance to hope that if you have obtained 1 TFLOP/s when running the program on 1 core you will get much more when running it on multiple cores, because all cores of the same type will use the same shared accelerator. nVidia tensor cores support int8, a couple of versions of FP16 (BF16 and the standard IEEE one) and FP19 which they call TensorFloat-32. I think Intel AMX only supports int8 and BF16. None of them supports FP32 let alone FP64 input numbers, which makes them completely useless for traditional GEMM stuff. The Apple version is indeed interesting. I wonder why Apple haven’t exposed it to programmers, or implemented a BLAS library on top of that thing? Using the Accelerate framework (which includes Apple's BLAS) is the only supported way for programmers to access the AMX. Reverse engineering the instruction set to access it directly is discouraged, because it's not a documented stable interface. Now that they’re using “standard” SME, it shouldn’t be a problem to write SME assembly opcodes directly, although I suspect Apple themselves are still probably sparse on the documentation. I’m not aware if there’s any way to use intrinsics or something slightly higher level than inline ASM, but lower level than the Accelerate framework. It sounded like you claimed that using only one core you already reach 1 TFLOP/s, implying that you could reach more than that by using more cores, which is false. Now you have clarified that you actually claim that it is good that when using a single core you can reach the maximum throughput of the shared matrix operation accelerator. This is correct, but there is no essential difference between this and a Zen 5 CPU that reaches this throughput by using only half of the cores, while having the other half of the cores free to do any other tasks.
(Also, that’s an M2 number, since that’s what OP was talking about. Someone will presumably post M4 benchmarks for BLAS sometime soon, if they haven’t already.) This is way way lower than what you claim the M2 Pro is capable of, and since I’m comparing it against a state-of-the-art datacenter CPU I’m curious how you got to this number? The M2 Pro core runs at a much lower frequency, which seems to be around ~3.4GHz. And I couldn’t find any information about the SVE vector widths supported nor the number of FMAs. In my desktop computer, I have a Ryzen 7 8700G CPU, which has 8 Zen 4 cores, 4.2 GHz base frequency, 65W TDP. Theoretically, when doing FP32 FMA, each CPU core can do 32 FLOP/cycle. At the base frequency, this translates into 134 GFlops per core. You are going to need all 8 cores to achieve 1 theoretical TFlops. BTW, the integrated GPU inside the same 8700G processor can theoretically do 8.2 TFlops FP32. Isn’t it that Zen 4 doesn’t have “native” support for AVX-512 but “mimics” it through 2x 256-bit FMA units? Because of this, a single AVX-512 instruction will occupy both FMA units and therefore I think that the theoretical limit for a single Zen 4 core should be half of the 134 GFLOPS number? According to uops.info, Zen 4 cores can do two 8-wide FMA instructions per cycle, or one 16-wide FMA per cycle. See VFMADD132PS (YMM, YMM, YMM) and VFMADD132PS (ZMM, ZMM, ZMM) respectively, the throughput column is labelled TP. That’s where the 32 FLOP/cycle number comes from. > doesn't have "native" support for AVX-512 but "mimics" it through 2x 256-bit FMA units That’s correct, AVX512 doesn’t deliver more FLOPs on that CPU. The throughput of 32-byte FMA and 64-byte FMA is the same, 32 FLOP/cycle for FP32 numbers. Right. This is where the discrepancy comes from. I counted FMA as a single FLOP. For example, I have a GeForce 4070 Ti Super in my desktop. The chip has 8448 execution units; nVidia calls them CUDA cores but I don’t like the name, the correct number is 66 cores where each core can do 4 wavefronts of 32 threads each. Anyway, these EUs can do one FP32 FMA each cycle, and the boost clock frequency is 2.61 GHz. Multiplying these two numbers results in 22.04928E+12 cycles*EU/second, and nVidia reports 44E+12 FLOPs peak FP32 performance of the GPU. That can lead you to some pretty counter-intuitive optimizations because it's often faster to do more compute work if it means you touch less memory in the process.
It is not specific to GPUs: this kind of optimization is pretty common on CPUs too, where latency kills you and 200 cycles spent on doing compute can actually be faster than a single cache miss trying to fetch data. This is pretty common for many SIMD algorithms actually. Memory is currently lagging behind compute on almost every type of modern hardware, and it will very likely become worse, not better. As for handcoded assembly, do you believe that it would be financially sound to hand code and maintain thousands of kernels that way, even if you believed that they would be faster? Why not? We do so for cryptographic primitives and video codecs. And why are you talking about “thousands of kernels”? AI programs only need a small number of different kernels, so it doesn't sound intractable. That is not the case. What appears like a simple matmul operation actually requires these libraries to select which specific kernel out of the many internally available to execute. If you are curious to learn more, NVidia open sourced a library called Cutlass some years ago. And remember that is only what they are willing to open source. How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance (https://siboehm.com/articles/22/CUDA-MMM) (It's CUDA-specific, so there may be aspects that can't yet be ported to WGPU) there are a few things that i wasn't able to figure out how to get access to/i wasn't sure if they were possible. for example, a lot of Simon's article takes advantage of the warp scheduler and warp tiling. i had a hard time finding information on if that's even possible with my M2/metal and the general memory access patterns. it seems like CUDA does have better documentation in this regard The datasheet for the H100 SXM seems to indicate that it can only do ~1000 TFLOP/s peak. I am not an expert in LLM but I don't think you can end up having a significant amount of zeroed weights (~50%) in a converged network, so I think it is safe to say that the theoretical throughput for 99% of cases is really ~800 TFLOPS and not ~1600 TFLOPS as advertised. Also does quantized matmuls. As you see, I have implemented 32×32 tiling, using thread groups of 32×8 threads, two groupshared buffers to load tiles of the input matrices, and I accumulate numbers into local variables, 32 / 8 = 4 accumulators per thread. I have implemented a profiler on top of D3D11_QUERY_TIMESTAMP and D3D11_QUERY_TIMESTAMP_DISJOINT queries, and tweaked the compute shader to minimize the time reported by these queries for my specific use case. I have no experience with WebGPU but if you mean group shared memory, I think the support is available. See the demo: https://compute.toys/view/25 i'm excited to try subgroups though: https://developer.chrome.com/blog/new-in-webgpu-128#experime... it would be cool to see if there's some way to get better access to those lower-level primitives but would be surprised. it does seem like subgroup support is a step in the right direction though!
The smoothness of an iPhone map zoom, on any device. Any device except an iPhone, until Apple finally gets around to shipping WebGPU in Safari. Any year now... https://developer.apple.com/documentation/safari-release-not... "I have found that WebGPU is enabled by default now with iOS 18.2. Apple has been working in the open on WebGPU. The WebKit source code has their latest WebGPU work in it. What hasn’t been known is their release schedule, but now with 18.2 it’s looking very promising that it will be on by default in that version." Edit: I just pressed “Reset All to Defaults” under “WebKit Feature Flags” on my device running 18.2 beta, and the switch for WebGPU is on!! <3
How to optimize my cuda code? I have a simple program, I just want to verify my GPU real performance. but, its result is out of my expectation. I don’t know how to explain it, and how to optimize my program. So, I hope NV’s experts can help me. the detail about my GPU as following: Device 0: “NVIDIA RTX A4000” CUDA Driver Version / Runtime Version 11.6 / 11.3 CUDA Capability Major/Minor version number: 8.6 Total amount of global memory: 16109 MBytes (16891379712 bytes) (48) Multiprocessors, (128) CUDA Cores/MP: 6144 CUDA Cores GPU Max Clock rate: 1560 MHz (1.56 GHz) Memory Clock rate: 7001 Mhz Memory Bus Width: 256-bit L2 Cache Size: 4194304 bytes Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384) Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total shared memory per multiprocessor: 102400 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 1536 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch: 2147483647 bytes Texture alignment: 512 bytes Concurrent copy and kernel execution: Yes with 2 copy engine(s) ok, let me introduce my program, my program is very very simple, just execut “ffma”, details as following: the performance should be ~10Tflops(x2=20Tflops),but the result of my program is about 6.5T, just about 65% peak performance. I modified the blocksPerSM to 2, 4, 8, and I modified the threadPerBlock to 128/256/512。 unfortunately, these results are very similar, about 60% - 65%。 and then, I profiled my program with NCU,it tell me “The ratio of peak float (fp32) to double (fp64) performance on this device is 64:1. The kernel achieved 61% of this device’s fp32 peak performance and 0% of its fp64 peak performance.” in “Roofline Analysis” I think my program have avoid memory-access, avoid register dependency, I don’t know why my peak performance is 61% I tried to read the profile information in NCU,but, I found I still cannot found the reason of poor performance. I’ve uploaded my program and the profile file from NCU, base_mac.tar.gz (3.7 KB) repoprt.ncu-rep (12.6 MB) Is there anyone would like to teach me? I think the keypoint is reading the profile file, but I cannot understand them, is there anyone would like to help me? I implement the loop body with ptx, I think it can avoid the optimization behavior of nvcc. Highly unlikely to be a good idea. The CUDA compiler is based on LLVM, an extremly powerful framework for code transformations, i.e. optimizations. If you run into the compiler optimizing away code that you don’t want to have optimized away, create dependencies that prevent that from happening. Your chosen approach for measuring peak FP32 throughput appears to be the common method of using independent dot products. You would want to sum these dot products at the end and write the result to global memory to avoid dead code elimination. Instruction caches on GPUs tend to be pretty small, and your massive loop may exceed the size of the instruction cache, which based on past experience may cost 3% of performance. It is easier to fill up the SMs as much as possible using relatively fine granularity, e.g. 
use 128 threads per thread block as a starting point. You may be better off using dot-products floats instead of float4s. The latter is likely to result in higher register pressure. You are unlikely to achieve more than 85% of theoretical FP32 throughput, as the ptxas compiler is unlikely to produce a perfect instruction scheduling with perfect register assignment. So there will be bubbles in the pipeline caused by register bank conflicts and execution pipe contention. [Later: ] Below is a simple test scaffold for measuring FP32 throughput. It is currently configured for the 9 year old low-end GPU in my web-browsing machine, for which it achieves 86% of theoretical peak FP32 throughput. Increase MAX_BLOCKS, REPS, ITER to adapt to your hardware. Then vary POLY_DEPTH to see how throughput changes. #include <stdlib.h> #include <stdio.h> #define MAX_BLOCKS (65520) #define THREADS_PER_BLK (128) #define LEN (MAX_BLOCKS * 1024) #define POLY_DEPTH (512) #define REPS (2) #define ITER (10) #if defined(_WIN32) #if !defined(WIN32_LEAN_AND_MEAN) #define WIN32_LEAN_AND_MEAN #endif #include <windows.h> double second (void) { LARGE_INTEGER t; static double oofreq; static int checkedForHighResTimer; static BOOL hasHighResTimer; if (!checkedForHighResTimer) { hasHighResTimer = QueryPerformanceFrequency (&t); oofreq = 1.0 / (double)t.QuadPart; checkedForHighResTimer = 1; } if (hasHighResTimer) { QueryPerformanceCounter (&t); return (double)t.QuadPart * oofreq; } else { return (double)GetTickCount() * 1.0e-3; } } #elif defined(__linux__) || defined(__APPLE__) #include <stddef.h> #include <sys/time.h> double second (void) { struct timeval tv; gettimeofday(&tv, NULL); return (double)tv.tv_sec + (double)tv.tv_usec * 1.0e-6; } #else #error unsupported platform #endif // Macro to catch CUDA errors in CUDA runtime calls #define CUDA_SAFE_CALL(call) \ do { \ cudaError_t err = call; \ if (cudaSuccess != err) { \ fprintf (stderr, "Cuda error in file '%s' in line %i : %s.\n",\ __FILE__, __LINE__, cudaGetErrorString(err) ); \ exit(EXIT_FAILURE); \ } \ } while (0) // Macro to catch CUDA errors in kernel launches #define CHECK_LAUNCH_ERROR() \ do { \ /* Check synchronous errors, i.e. pre-launch */ \ cudaError_t err = cudaGetLastError(); \ if (cudaSuccess != err) { \ fprintf (stderr, "Cuda error in file '%s' in line %i : %s.\n",\ __FILE__, __LINE__, cudaGetErrorString(err) ); \ exit(EXIT_FAILURE); \ } \ /* Check asynchronous errors, i.e. 
kernel failed (ULF) */ \ err = cudaDeviceSynchronize(); \ if (cudaSuccess != err) { \ fprintf (stderr, "Cuda error in file '%s' in line %i : %s.\n",\ __FILE__, __LINE__, cudaGetErrorString( err) ); \ exit(EXIT_FAILURE); \ } \ } while (0) __global__ void kernel (const float * __restrict__ src, float * __restrict__ dst, int len) { int stride = gridDim.x * blockDim.x; int tid = blockDim.x * blockIdx.x + threadIdx.x; for (int i = tid; i < len; i += stride) { float p = src[i] + 1.000001f; float q = src[i] + 1.000002f; for (int k = 0; k < REPS; k++) { #pragma unroll POLY_DEPTH for (int j = 0; j < POLY_DEPTH; j++) { p = fmaf (p, p, 1.000001f); q = fmaf (q, q, 1.000002f); } } dst[i] = p + q; } } int main (int argc, char *argv[]) { double start, stop, nbr_of_fma; float *d_a, *d_b; /* Allocate memory on device */ CUDA_SAFE_CALL (cudaMalloc((void**)&d_a, sizeof(d_a[0]) * LEN)); CUDA_SAFE_CALL (cudaMalloc((void**)&d_b, sizeof(d_b[0]) * LEN)); /* Initialize device memory */ CUDA_SAFE_CALL (cudaMemset(d_a, 0x00, sizeof(d_a[0]) * LEN)); // zero /* Compute execution configuration */ dim3 dimBlock(THREADS_PER_BLK); int threadBlocks = (LEN + (dimBlock.x - 1)) / dimBlock.x; dim3 dimGrid(threadBlocks); printf ("burn: using %d threads per block, %d blocks, %f GB used\n", dimBlock.x, dimGrid.x, 2*1e-9*LEN*sizeof(d_a[0])); start = second(); for (int k = 0; k < ITER; k++) { kernel<<<dimGrid,dimBlock>>>(d_a, d_b, LEN); CHECK_LAUNCH_ERROR(); } stop = second(); nbr_of_fma = (2.0 * POLY_DEPTH * REPS + 3.0) * LEN * ITER; printf ("flop=%13.6e elapsed=%.5f sec throughput=%.5f FP32 GFLOPS\n", nbr_of_fma * 2, stop-start, nbr_of_fma * 2 * 1e-9 / (stop - start)); CUDA_SAFE_CALL (cudaFree(d_a)); CUDA_SAFE_CALL (cudaFree(d_b)); return EXIT_SUCCESS; } so kindly, thanks for your code. I run your code, and I get the performance above 90%, so cool but, I found a difference between my poor code and your good code. I found the clause about fma in your code like this: p = fmaf (p, p, 1.000001f); q = fmaf (q, q, 1.000002f); at the same time, my code like this: c += a * b; and then, I modified your code like this: __global__ void kernel (const float * __restrict__ src, float * __restrict__ dst, int len) { int stride = gridDim.x * blockDim.x; int tid = blockDim.x * blockIdx.x + threadIdx.x; for (int i = tid; i < len; i += stride) { float p = src[i] + 1.000001f; float q = src[i] + 1.000002f; float r = 0.0f; for (int k = 0; k < REPS; k++) { #pragma unroll(512) for (int j = 0; j < POLY_DEPTH; j++) { r = fmaf (p, q, r); } } dst[i] = r * 0.0001; } } and modify the statistic clause like this: nbr_of_fma = (POLY_DEPTH * REPS + 3.0) * LEN * ITER; and then, the performance is about 50%. as comparison, I modify my blockPerSM to 64, and modify the loop body like this: #pragma unroll(128) for(int i = 0 ; i < loop ; i ++){ vec_C.x = fmaf (vec_A.x, vec_B.x, vec_C.x); vec_C.y = fmaf (vec_A.y, vec_B.y, vec_C.y); vec_C.z = fmaf (vec_A.z, vec_B.z, vec_C.z); vec_C.w = fmaf (vec_A.w, vec_B.w, vec_C.w); } *ptr_C = vec_C; performance is about 60%. 
another modifcation like this: loop = loop >> 1; #pragma unroll(128) for(int i = 0 ; i < loop ; i ++){ vec_A.x = fmaf (vec_A.x, vec_A.x, 0.0001f); vec_A.y = fmaf (vec_A.y, vec_A.y, 0.0001f); vec_A.z = fmaf (vec_A.z, vec_A.z, 0.0001f); vec_A.w = fmaf (vec_A.w, vec_A.w, 0.0001f); vec_B.x = fmaf (vec_B.x, vec_B.x, 0.0002f); vec_B.y = fmaf (vec_B.y, vec_B.y, 0.0002f); vec_B.z = fmaf (vec_B.z, vec_B.z, 0.0002f); vec_B.w = fmaf (vec_B.w, vec_B.w, 0.0002f); } vec_C.x = vec_A.x + vec_B.x; vec_C.y = vec_A.y + vec_B.y; vec_C.z = vec_A.z + vec_B.z; vec_C.w = vec_A.w + vec_B.w; *ptr_C = vec_C; this performance is also about 90%, I think the third parameter of fmaf is constant, so, I feel this result(90%) cannot prove the real performance. I think the keypoint is “register reuse” and data dependency, but your code there are dependency(a = a * a + 1.0001f) also, and the performance is good. why? I checkd their sass, the sass about the fma in your code, like this: /*0130*/ FFMA R6, R6, R6, 1.0000009536743164062 ; /* 0x3f80000806067423 */ /* 0x000fe20000000006 */ /*0140*/ FFMA R7, R7, R7, 1.0000020265579223633 ; /* 0x3f80001107077423 */ /* 0x000fc60000000007 */ /*0150*/ FFMA R6, R6, R6, 1.0000009536743164062 ; /* 0x3f80000806067423 */ /* 0x000fe20000000006 */ /*0160*/ FFMA R7, R7, R7, 1.0000020265579223633 ; /* 0x3f80001107077423 */ /* 0x000fc60000000007 */ the sass about fma in my code, like this: /*0190*/ FFMA R17, R4, R8, R12 ; /* 0x0000000804117223 */ /* 0x020fe2000000000c */ /*01a0*/ FFMA R12, R5, R9, R13 ; /* 0x00000009050c7223 */ /* 0x000fe2000000000d */ /*01b0*/ FFMA R13, R6, R10, R14 ; /* 0x0000000a060d7223 */ /* 0x000fe2000000000e */ /*01c0*/ FFMA R14, R7, R11, R15 ; /* 0x0000000b070e7223 */ /* 0x000fe2000000000f */ /*01d0*/ FFMA R17, R4, R8, R17 ; /* 0x0000000804117223 */ /* 0x000fe20000000011 */ /*01e0*/ FFMA R12, R5, R9, R12 ; /* 0x00000009050c7223 */ /* 0x000fe2000000000c */ /*01f0*/ FFMA R13, R6, R10, R13 ; /* 0x0000000a060d7223 */ /* 0x000fe2000000000d */ /*0200*/ FFMA R14, R7, R11, R14 ; /* 0x0000000b070e7223 */ /* 0x000fe2000000000e */ /*0210*/ FFMA R17, R4, R8, R17 ; /* 0x0000000804117223 */ /* 0x000fe20000000011 */ /*0220*/ FFMA R12, R5, R9, R12 ; /* 0x00000009050c7223 */ /* 0x000fe2000000000c */ /*0230*/ FFMA R13, R6, R10, R13 ; /* 0x0000000a060d7223 */ /* 0x000fe2000000000d */ /*0240*/ FFMA R14, R7, R11, R14 ; /* 0x0000000b070e7223 */ /* 0x000fe2000000000e */ obviously: So, would you like to teach me how to avoid these dependency, if these dependency really exist. The code I posted above is an ad-hoc adaption of some code I have had sitting around for quite a few years. I seem to recall that I chose the particular arrangement of FMAs used so as to minimize register bank conflicts, but I do not know for sure. Having been retired for almost a decade, I am a hobbyist these days who will, often on a whim, explore some issue for an extended afternoon, then forget the exploratory code once my curiosity is satisfied: “The journey is the reward”. I rarely keep notes on what I tried and why. Sustaining three-input operations at full speed is a challenge in all processor architectures due to the tremendous bandwidth required (3 read ports, 1 write port on the register file). The problem is exacerbated by multi-issue capability. A common way to boost register file bandwidth on average is to use register banks, each of which provides fewer read (and possibly, write) ports. 
To my knowledge, all NVIDIA GPUs use a (publicly undocumented) scheme of this nature to boost practically available bandwidth. Bank conflicts causing pipeline bubbles may occur intra-instruction or inter-instruction in case of multi-issue capability. thanks for your clear explanation I see,R4, R8, R12 is conflict(register bank, 0 == 4%4,0 == 8%4, 0 == 12%4, they are in the same bank,0), and R5/R9/R13,R6/R10/R14, R7/R11/R15, they are all conflict. so, performance is poor as your explanation, right? /*0190*/ FFMA R17, R4, R8, R12 ; /* 0x0000000804117223 */ /* 0x020fe2000000000c */ /*01a0*/ FFMA R12, R5, R9, R13 ; /* 0x00000009050c7223 */ /* 0x000fe2000000000d */ /*01b0*/ FFMA R13, R6, R10, R14 ; /* 0x0000000a060d7223 */ /* 0x000fe2000000000e */ /*01c0*/ FFMA R14, R7, R11, R15 ; but I don’t know how to solve it, because the SASS is complied by nvcc. maybe, I don’t know some tricky, would you like to give me more further advise? but I don’t know how to solve it, because the SASS is complied by nvcc. The nvcc compiler is aware of performance issues related to register usage. While its certainly possible that this can be improved, its also possible that this is the best tradeoff of choices about register usage. It should be evident in a large dependent sequence like this that register usage changes will have side effects. Changes you make to address performance on one instruction may have a negative performance impact elsewhere in the dependent chain. I don’t know of “tricks” to tell nvcc to reorganize its register usage. You could try playing with very crude, coarse controls like -maxrregcount switch to the compiler. The other options I know of are: Based on the 2nd option, you could file a bug to request study by the compiler team, if you think you can do better than the compiler. But you would need a well-documented example, showing a wholistic solution. Even then, there are probably knowledge gaps that make this sort of approach difficult. To quote Wikipedia: The problem of optimal register allocation is NP-complete. As a consequence, compilers employ heuristic techniques to approximate its solution. I think it is reasonable to assume that the CUDA compiler engineers in charge of ptxas are fully aware of the latest developments in the field and that heuristics that consider the general problem constraint by GPU-specific restrictions (such as calling conventions , register aggregation for 64-bit operations or vector loads/stores, dual issue, and register banks) are in place. It would also be reasonable to assume that after 15 years of development, this fundamental building block of a compiler is mature. That does not mean there could not be room for improvement, just that the burden of demonstrating noticeably improved performance from superior register allocation for specific cases lies with a prospective bug filer. I just heard of register bank is “%4”,but, I didn’t find it in nv’s public document. So, I must confirm the restriction for register bank at first, would you like to give me some documents about it? my compiler is nvcc 11.8, and the arch is sm86 I am not aware that the details of the GPU register file organization are disclosed in official CUDA documentation. I also cannot find any relevant details in papers from people who have explored GPU microarchitectures with targeted microbenchmarks. An older paper by NVIDIA subject matter experts, Mark Gebhart, Stephen W. Keckler, Brucek Khailany, Ronny Krashinsky, William J. 
Dally, “Unifying Primary Cache, Scratch, and Register File Memories in a Throughput Processor” gives some details, but I have no idea how these relate to newer architectures: Each MRF bank is 16 bytes wide with 4 bytes allocated to the same-named architectural register for threads in each of the 4 SIMT lanes in the cluster. Each bank has a capacity of 8KB, providing a total of 256KB of register file capacity per SM. Registers are interleaved across the register file banks to minimize bank conflicts. Instructions that access multiple values from the same bank incur a cycle of delay for each access beyond the first. The operand buffering between the MRF and the execution units represents interconnect and pipeline storage for operands that may be fetched from the MRF on different cycles. Stalls due to bank conflicts are rare and can be minimized with compiler techniques. Note that the use of vector types, such as float4, tends to impose an additional burden on register allocation. That is why, earlier in this thread, I suggested using scalar computation for the exercise at hand (maximizing floating-point throughput), and did so in the sample code I posted. Thanks for your patience. In practical terms, I see the risk of becoming mired in microarchitectural details that have no bearing on 99% of real-life CUDA code out there. Yes, it is cool to figure out how to get close to the theoretical peak FLOPS, but the resulting code often has very little similarity with code people write to address actual use cases. Not much has changed in that regard since I showed how to get 1 GFLOPS out of the AMD K7 (Athlon) processor ca. 1999. Outside of special scenarios, much forward progress can be made by simply relying on the CUDA compiler and using feedback provided by the CUDA profiler. “Relying on the CUDA compiler and using feedback provided by the CUDA profiler”: yes, you are right, but my dilemma is that I cannot obtain useful clues from NCU’s profile information. You might want to start with the recommendations from the CUDA Best Practices guide before delving into the profiler. All profilers these days are very sophisticated utilities that one needs to spend some quality time with to get the full benefit. So give it some time, and keep on experimenting and exploring. When profiler use first became common some thirty years ago, their functionality was very limited and interacting with profilers was less overwhelming than today. There is a bit of a trade-off between ease of use and depth of analysis, I would say. So, I need to confirm the register bank restriction first; could you point me to some documents about it? One source of information is pages 6-8 of Dissecting Volta. The same authors wrote Dissecting Ampere, where they state on page 29 that the Ampere register layout is the same. Thanks for your help.
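As a footnote to this thread, here is a minimal illustrative sketch (my own code, not from any of the posts above) of the pattern the replies converge on: several independent, self-updating FMA chains per thread, summed and written out at the end so the work cannot be eliminated as dead code. Because each chain depends only on its own previous result, the latency of one FFMA is hidden by issuing the others.

// Illustrative only: independent FMA chains to approach peak FP32 throughput.
__global__ void fma_chains(const float * __restrict__ src,
                           float * __restrict__ dst, int len, int iters)
{
    int stride = gridDim.x * blockDim.x;
    int tid = blockDim.x * blockIdx.x + threadIdx.x;
    for (int i = tid; i < len; i += stride) {
        // four independent accumulators, seeded slightly differently
        float r0 = src[i] + 1.000001f;
        float r1 = src[i] + 1.000002f;
        float r2 = src[i] + 1.000003f;
        float r3 = src[i] + 1.000004f;
        #pragma unroll 128
        for (int j = 0; j < iters; ++j) {
            r0 = fmaf(r0, r0, 1.000001f);
            r1 = fmaf(r1, r1, 1.000002f);
            r2 = fmaf(r2, r2, 1.000003f);
            r3 = fmaf(r3, r3, 1.000004f);
        }
        // sum and store so dead-code elimination cannot remove the loop
        dst[i] = (r0 + r1) + (r2 + r3);
    }
}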
An Almost Pointless Exercise in GPU Optimization Not everyone is able to write funky fused operators to make ML models run faster on GPUs using clever quantisation tricks. However lots of developers work with algorithms that feel like they should be able to leverage the thousands of cores in a GPU to run faster than using the dozens of cores on a server CPU. To see what is possible and what is involved, I revisited the first problem I ever considered trying to accelerate with a GPU. What is unusual about my chosen problem is that it is officially pointless, so you ought not to be able to find any library that will accelerate this algorithm, because it isn’t worth writing one! That makes it an interesting proxy for algorithms which aren’t catered for by high-performance libraries written by experts, but can be structured to run thousands of threads in parallel. TL;DR​ Getting an existing C++ algorithm running on GPU is pretty easy, so it is a low bar to get started. What I learned is the importance of minimizing thread divergence and maximizing effective memory access speed. To do that effectively, I had to transform my algorithm into a state machine structure so that every thread is operating mostly in lock-step, just with different data values. My starting, interim and final code are open to see, along with a summary of the steps I took, and the corresponding improvements or regressions at each stage. I want to focus in this article on the thought process for deciding each step, mostly by explaining the Nvidia Nsight Compute analysis which helped guide me. In the end I managed to make my program run about 30x faster on my laptop using its GeForce GTX 1650 GPU, compared with its Core i7–9750H CPU. Only in the last two steps did it get meaningfully better than with CPU though, so be prepared for early and frequent disappointment. If you want just the summary of what worked, jump to Progression History. A Pointless Program​ Years ago, a colleague invited me to take on his Christmas programming challenge, which was to write the fastest program he could to continuously play the card game Beggar My Neighbour. The aim, noted by John Conway as definitely not worth solving, is to try to find the longest game — with the possibility that there might be a game which never ends. The longest game found so far has 8344 turns — a rainy afternoon diversion perhaps, if you can sustain playing a card every 2.589 seconds for six hours straight! You can see a history of new records, with a Python program that verifies them, here: https://github.com/matthewmayer/beggarmypython The game play algorithm is almost trivial, but it has some notable features which turn out to be relevant to the challenge of effectively leveraging a GPU. Game play is completely deterministic, so the outcome is defined only by the initial sorting of the deck. The problem is therefore embarrassingly parallel, since there are 653,534,134,886,878,245,000 (roughly 6.535×10206.535 \times 10^{20}6.535×1020) distinct deals, and we can play as many as we want in parallel, in any order. The algorithm tracks game state by using a few state variables and nested branching logic. This is easy to write and verify, and actually reasonably efficient to run on a single CPU core. The algorithm is very compact in terms of code size and data size. The search loop to play many games while tracking the best ones so far is also very compact. CPU Starting Point​ My initial C++ program to search for long games is here. 
It is just a port of the Python program with a search loop that runs continuously, shuffling the deck randomly between games. I implemented two simple optimizations that seemed obvious: a 64-entry circular buffer for each player’s hand and the discard pile, combined with a bit mask step to dereference the first and last index pointers. This avoids extra instructions and branches to handle wrap-around of the first and last index values as cards are added and removed. swap two cards between each game, to use fewer random numbers — we don’t need each deal to be completely random, but just have a random difference from the previous one. The program can use multiple CPU cores by just running separate copies of the search loop in different threads, starting with different RNG seeds. Each search loop does its own random walk through potential deals. On my laptop, throughput peaks at 2.9 million deals per second, using 12 threads to match the number of logical CPU cores. Initial GPU Port​ The beauty of General Purpose GPU programming is that you can often get started quickly with the code you already have. There is a good chance that only a few adaptations are needed to convert a program that uses multiple threads on CPU to one that runs with even more threads on GPU, as long as you don’t have extreme memory requirements (for code or data or both) which force breaking up the work into smaller pieces. GPU cores have a similar range of machine instructions to those of CPUs, so plain algorithms will compile readily enough to give reasonable single core efficiency; you don’t need to transform your program to use a subset of variable types or data structures or code constructs, or special parallelization primitives or libraries. You mainly just need to cope with some changes to library functions (such as the random number generator in my case), and finessing of class structures to graft in the global functions needed to launch work on the GPU. You do need to be ready for disappointment though. If your code is similar to mine, founded on nested branching logic, you may find that the GPU won’t even be able to match the CPU performance at first. This is to be expected: CPUs are designed for running unrelated complex branching logic in each thread, dedicating significant chip area on circuitry to predict branches, speculatively execute multiple paths, and do lots of caching — 1 MB per logical CPU core in my laptop for instance. CPU cores also run fast, with 3–4GHz max speed being fairly typical. By comparison, GPUs are much lighter on hardware per core for caching, branch prediction etc, and top out at perhaps 1.5–2GHz (unless you have an extreme cooling gaming GPU). They come with faster memory though, to help compensate. But the net effect is that an algorithm will probably run 2–3 times faster on a single CPU core than it will on a single GPU core, from a combination of the clock speed ratio and more aggressive single thread acceleration. You need to figure out how to make good use of the thousands of slow GPU cores in order to outperform a few fast CPU cores. My initial port to GPU ran at about 1.4M deals per second (once thermal throttling kicks in), using 2048 threads (128 blocks of 16 threads each). Not an encouraging start, but in hindsight about what I should have expected. 
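To give a sense of the shape of such a port (a simplified, hypothetical sketch, not the actual code in the repo linked above): each GPU thread gets its own cuRAND state and runs the same swap-and-play search loop that a CPU worker thread would run.

#include <curand_kernel.h>

// Placeholder for the real game logic: the actual function plays the deal
// to completion and returns the number of turns taken.
__device__ int play_game(const unsigned char *deck) { return deck[0]; }

__global__ void search_kernel(unsigned long long seed, int games_per_thread,
                              int *best_turns)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);               // independent RNG stream per thread

    unsigned char deck[52];
    for (int i = 0; i < 52; ++i) deck[i] = i % 13; // toy initial deal: 4 of each rank

    for (int g = 0; g < games_per_thread; ++g) {
        // random walk through deals: swap two cards between games
        int a = curand(&rng) % 52, b = curand(&rng) % 52;
        unsigned char tmp = deck[a]; deck[a] = deck[b]; deck[b] = tmp;

        int turns = play_game(deck);
        atomicMax(best_turns, turns);              // track the longest game seen
    }
}
// launched e.g. as: search_kernel<<<128, 16>>>(1234ULL, 100000, d_best_turns);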
Learn to use Nsight Compute​ Early on, I decided to get comfortable using the Nsight Compute tool to analyse the GPU portion of my code in spectacular detail, rather than trying to rely on intuition and the high-level utilization figures from nvidia-smi. The approach I settled on was to step through the execution of the first couple of GPU kernel launches (the kernel here being my global function which runs the core search and game play loop for a predefined number of iterations), and then profile the next kernel run to see how much of the hardware capability was being used effectively. The first, unflattering, report that the tool shows is the “Speed of Light” summary: ie. the percentage of theoretical peak performance on compute and memory bandwidth utilization. My first program scored around 12% for compute, and 28% for memory. (For the same program, nvidia-smi would report 88% and 28% utilization respectively, underscoring how misleading its information can be when you are trying to optimize your algorithm in the early stages.) There were many detailed metrics underneath the headline figures which could be examined, and various analysis warnings that point out potential problem areas. Many of these sounded esoteric, but there were two reasonably clear actionable warnings: The first one was easy to address: assign at least 32 threads per block (not 16), so we don’t leave warp capacity unused for not even trying. I ended up settling on 32 blocks of 32 threads, which increased throughput to 2.3M deals per second. nvidia-smi now reported 95% and 13% for compute and memory utilization. Nsight Compute reports we have improved average active threads per warp from 2.4 to 3.6. That left Thread Divergence as the key warning to address. Thread Divergence​ If you have learned about CUDA programming before, you may recall that thread divergence occurs when not all threads in the same warp are executing the same instruction. (Each group of 32 consecutive threads in a block will always execute together as a unit, known as a warp, and GPU hardware is heavily optimized around running these warps of threads very efficiently on the assumption that the threads are normally executing instructions in unison, working on closely related pieces of the same computation.) It is part of the beauty of General Purpose GPU programming that thread divergence is allowed, and is handled automatically by the hardware as well as it can be, by simply letting the threads in the warp take turns to run their next instruction, when they are different. Subsets of the warp that do share the next instruction will execute together, so degradation in performance is proportional to how much the threads have diverged from each other. In the worst case, where every thread is at its own unique point in the code, the threads are effectively time sliced on the hardware, and so running at 1/32 of the hardware’s potential. From the Nsight Compute warning, I could see we were quite close to that point: [Warning] Instructions are executed in warps, which are groups of 32 threads. Optimal instruction throughput is achieved if all 32 threads of a warp execute the same instruction. The chosen launch configuration, early thread completion, and divergent flow control can significantly lower the number of active threads in a warp per cycle. This kernel achieves an average of 3.6 threads being active per cycle. This is further reduced to 3.1 threads per warp due to predication. The compiler may use predication to avoid an actual branch. 
Instead, all instructions are scheduled, but a per-thread condition code or predicate controls which threads execute the instructions. Try to avoid different execution paths within a warp when possible. In addition, ensure your kernel makes use of Independent Thread Scheduling, which allows a warp to reconverge after a data-dependent conditional block by explicitly calling __syncwarp(). Getting about 3 active threads per warp means I’m only tapping into about 1/10th of the GPU’s compute capacity, at best. Not exactly what you would assume from the 95% compute utilization figure reported by nvidia-smi — that figure really just means I’m doing something on all of the GPU’s Streaming Multiprocessor or SM units about 95% of the time — even if that something is very inefficient at using each SM, as in this case. I highlighted the most relevant part of the remediation advice above, which is essentially to remove the nested branching logic as much as I can. To do that in my program, where each thread is working on a different game, I realised I would have to rewrite the core game play function. The original version uses position within the code’s nested branching structure to encode part of the game state — I needed to replace that with an explicit data representation of all pieces of game state. Then every step of the inner loop would be executed in unison across all threads playing their own games, just with different data values. Rewrite to use a lookup table​ To make the inner loop purely data driven, I chose to introduce a state transition table. Game state is now tracked explicitly in variables which are treated as inputs to the transition lookup table, followed by a set of actions that is always performed on every iteration using data values from the lookup table. A critical realization for implementing this cleanly in my case was to notice that the discard pile can be treated as a third player in the game, with some extra states to track when the discard pile is “playing” its cards to the player who won the last trick. With that mental twist in place, a fairly straightforward lookup table became possible to write by hand. The code for this version is here; it works on both CPU and GPU, but it is slower on both — about half the speed of the baseline version on CPU, and two-thirds on GPU. ☹ Being slower on CPU was expected, since it requires more instructions to manipulate more state variables, and there is more memory access to lookup state transitions. We also have forced the adding of the discard pile to the winning player’s hand on every trick to be done slowly, as individual turns with all the overhead needed for that. However, on GPU I had hoped to see gains because now more threads in each warp should be able to execute in unison. In fact Speed-of-Light compute and memory utilization figures did improve, to 17% and 38% respectively, but Thread Divergence was still listed as a remediation warning. We are at least up from an average of 3.6 active threads per warp to 5.3, but that is small comfort given the actual speed is now about back to where I first started on GPU, which is well behind the CPU performance. Ah, be careful about function exits​ What I had forgotten is that thread divergence will also occur when threads are exiting from a nested function inside the kernel. My game play loop would always play one game (inside the function play), then return to the search loop to swap two cards before calling play again. 
It basically didn’t matter that inside the play inner loop thread divergence has been largely solved, because each thread in a warp will still finish at different times, depending on how many turns are needed for their respective games. So they usually diverge at game end, and the whole warp of threads must wait for the longest game amongst them to finish. Stats on the range of game lengths shows a minimum of about 33 turns, an average of about 250 turns, and a max somewhere in the many thousands of turns at least. That’s a lot of variation. With that realization, my next program refactoring was to include the logic for game completion book-keeping and switching to a new game as another (necessarily conditional) step in the inner game playing loop. This slows down the inner loop even further, but allows the threads to stay mostly converged across multiple games. To support this, I pre-create a backlog of games to play, ready for the next game to be picked very quickly from inside the inner loop by whichever thread is available first. (This introduces the only inter-thread synchronization mechanism needed so far, which is the use of a CUDA atomic operation to read and increment the next_deal_to_play index, which barely affects speed but solves the race condition.) In order to allow the threads to run as long as possible in this synchronized inner loop, I decided to use a big chunk of main GPU memory to hold a large backlog, which so far has barely been used (just for the search loop’s best-game-so-far records). We now get to a near-final version of the program, which can be recreated from the latest version using some conditional compilation definitions. This version fills a large (eg. 1M entry) backlog of deals to play in GPU memory (using another kernel of cooperating GPU threads working in parallel), which is then processed in parallel by 1024 threads (32 blocks of 32 threads). Performance did indeed improve over the initial version with the lookup table, but only to about the same speed as the baseline version, which was achieved once I had tweaked the number of blocks and threads. What gives?! Ah, memory speed really matters​ Nsight Compute reveals Speed-of-Light is about the same as before (12% compute and 37% memory utilization), and again lists a number of warnings. This time, some other warnings now seem more relevant and actionable (as I’ve highlighted below): [Warning] All pipelines are under-utilized. Either this kernel is very small or it doesn’t issue enough warps per scheduler. Check the Launch Statistics and Scheduler Statistics sections for further details. [Warning] Every scheduler is capable of issuing one instruction per cycle, but for this kernel each scheduler only issues an instruction every 6.0 cycles. This might leave hardware resources underutilized and may lead to less optimal performance. Out of the maximum of 8 warps per scheduler, this kernel allocates an average of 1.00 active warps per scheduler, but only an average of 0.17 warps were eligible per cycle. Eligible warps are the subset of active warps that are ready to issue their next instruction. Every cycle with no eligible warp results in no instruction being issued and the issue slot remains unused. To increase the number of eligible warps either increase the number of active warps or reduce the time the active warps are stalled. [Warning] On average each warp of this kernel spends 2.4 cycles being stalled waiting for a scoreboard dependency on a L1TEX (local, global, surface, texture) operation. 
This represents about 39.7% of the total average of 6.0 cycles between issuing two instructions. To reduce the number of cycles waiting on L1TEX data accesses verify the memory access patterns are optimal for the target architecture, attempt to increase cache hit rates by increasing data locality or by changing the cache configuration, and consider moving frequently used data to shared memory. Thread Divergence is still listed as a warning, though the average active threads per warp is now at about 9. Looking at the “source” view of the kernel profile, which shows per-instruction info such as the execution counts, average active threads, distribution of instruction stall reasons etc, it looks like there are lots of times when the inner loop instructions are stalled waiting for access to results from memory. So it looks like there are two clear problems we can address: Ensure there are more (warps of) threads available to schedule, so they can hide the micro-latencies that are naturally experienced by any one warp. Reduce the times when the inner loop is stalled accessing GPU memory. The first issue can in theory be addressed by playing with the blocks and threads configuration. However, changing that doesn’t really make a difference: we are primarily bottlenecked on accessing data from GPU memory. Use Shared Memory as much as possible​ The best improvement we can make now is ironically to stop using main GPU memory as much as we can. In my case, that means replacing the single large backlog of deals with a very small backlog per block, held in Shared Memory. As you may recall from CUDA Programming primers, Shared Memory is the fastest memory area (other than registers allocated by the compiler) available to programmers, but the most limited in size. On my laptop’s GPU, it is limited to 48KB — the same amount of memory as my first home computer (from the Apple II era). This memory is per-block and only available while each specific block is executing — its contents will be lost when that block ends, so relevant info needs to be copied to main memory as final output. Thankfully for my program, using this only involves a modest code change, and basically means the inner game loop runs for fewer iterations before the local backlog is exhausted and has to be replenished. Also each small backlog is shared by fewer threads, so there is more opportunity for some threads to finish early and increase thread divergence. With this change (which included changing blocks and threads to 16 and 128, in order to try and provide more warps to the scheduler as noted above), performance on GPU finally moved ahead of CPU for the first time, and by a fairly impressive margin, reaching about 40M deals per second (vs. 3M on CPU)! Nsight Compute is still saying we are memory bottlenecked though. Final chance to squeeze further! Make Shared Memory stretch as far as possible​ My final change is to recognise that the core data structures are using more memory than they need to, since I’m using an enum to represent 5 possible card values, and enum by default is represented as an int in C++, which is treated as a 32-bit word in CUDA. Similarly, the lookup table data values are all defined as int, but far fewer bits would suffice. I had initially wondered whether native word sized values were more efficient at the individual instruction level, but actually GPUs (like most CPUs) efficiently support sub-word data sizes and even arbitrary bit field operations at the instruction level, so that isn’t an important concern. 
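To make the sub-word point concrete, here is a purely illustrative fragment (made-up names, not the article's actual types) of the kind of compaction the next change applies: give the enum an 8-bit base type and pack the lookup-table fields into a few bits each.

#include <cstdint>

// Card values fit comfortably in one byte instead of a default 4-byte int enum.
enum class Card : uint8_t { Plain = 0, Jack, Queen, King, Ace };

// One packed state-transition entry: a handful of bits per field is plenty.
struct Transition {
    uint8_t next_state   : 4;   // small number of game states
    uint8_t cards_to_pay : 3;   // 0..4 penalty cards to pay
    uint8_t trick_won    : 1;   // discard pile goes to the winner this turn
};

static_assert(sizeof(Card) == 1, "card values pack into a single byte");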
With a change to specify uint8 as my enum base type, add appropriate bit field declarations in the lookup table struct, and use a more compact representation of deals in the backlog (not the 64-entry circular buffer representation used for playing fast), I am able to squeeze a longer backlog into the 48KB of shared memory, and also reduce the memory bandwidth needed for lookup operations and other game play steps. The effect is rather gratifying: my final version now hits over 100M deals per second, at least until thermal throttling brings it back down to 95M or so. 😊 Nsight Compute is still saying I’m memory bound, but at this stage I’m thinking that may be just how it is. The algorithm is so light on computation that it will typically be waiting on memory (even the super-fast Shared Memory) no matter what. The next step, if it were possible to see how to restructure the game playing loop in a suitable way, would be to try to ensure we create coalesced memory access patterns (where threads in a warp all reference directly adjacent memory locations, allowing the hardware to combine them into one read/write operation). But that seems very unlikely to be possible with each thread still working on its own deal. Maybe there is still some potential for improvement from tweaking block and thread counts, and paying attention to memory layout and other micro-details. I’m not holding my breath! Progression History From initial CPU version to final decent GPU version, here is the history of changes and results. All performance numbers are in millions of deals per second.
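To tie the GPU-side structure together, here is a heavily simplified sketch (hypothetical names and a dummy game step, not the repo's actual code) of the final inner-loop shape: a small per-block backlog staged into shared memory, an atomic counter that hands out deals, and game-completion book-keeping folded into the same loop so the warp stays converged across games.

#define BACKLOG_SIZE 256            // deals per block, sized to fit in shared memory
#define DECK_BYTES   32             // compact per-deal representation (illustrative)
#define MAX_TURNS    100000         // give up on suspiciously long games

__global__ void play_backlog(const unsigned char *global_deals, int *best_turns)
{
    __shared__ unsigned char backlog[BACKLOG_SIZE][DECK_BYTES];
    __shared__ int next_deal;       // index of the next unplayed deal in this block

    // cooperatively stage this block's slice of the deals into shared memory
    const unsigned char *slice = global_deals + (size_t)blockIdx.x * BACKLOG_SIZE * DECK_BYTES;
    for (int i = threadIdx.x; i < BACKLOG_SIZE * DECK_BYTES; i += blockDim.x)
        backlog[i / DECK_BYTES][i % DECK_BYTES] = slice[i];
    if (threadIdx.x == 0) next_deal = 0;
    __syncthreads();

    int my_deal = atomicAdd(&next_deal, 1);
    int turns = 0;
    while (my_deal < BACKLOG_SIZE) {
        // one turn of the (dummy) state machine per iteration; every thread
        // executes the same step, just on its own deal
        ++turns;
        bool game_over = (turns >= MAX_TURNS) ||
                         (backlog[my_deal][turns % DECK_BYTES] == 0);
        if (game_over) {            // completion handling is just another branch
            atomicMax(best_turns, turns);
            turns = 0;
            my_deal = atomicAdd(&next_deal, 1);    // claim the next deal right away
        }
    }
}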
Optimizing Sequential cuBLAS Calls for Matrix Operations—Alternatives to Kernel Fusion? I am currently working on a CUDA project where my code involves a sequence of matrix multiplications followed by activation functions. Typically, such dependent, sequential operations can be optimized using kernel fusion to avoid redundant round trips to global memory, enhancing overall performance. To streamline my implementation, I opted to use cuBLAS for handling the matrix multiplications. However, I’ve found that cuBLAS doesn’t support kernel fusion, which seems like a missed opportunity for optimization in terms of reducing memory overhead and improving execution speed. Given this context, I am seeking advice on alternative methods to optimize these sequential cuBLAS calls. Are there techniques within CUDA or associated libraries that can mimic the effects of kernel fusion, or perhaps a way to efficiently manage these operations to achieve similar performance gains? Any suggestions on optimizing memory usage or overlapping computations would also be greatly appreciated. I share this problem. The way I understand it, cuBLASDx aims to facilitate kernel fusion for BLAS operations, but it currently appears to support only matrix multiplication, which is not enough in my case. You can use cublasLt for some fusion cases, or use CUTLASS directly. Please take a look at cuDNN’s Graph API (Graph API — NVIDIA cuDNN v9.1.0 documentation). It supports prologue and epilogue fusions with convolutions and matmuls, and it offers an abstraction layer on top of cuBLAS and CUTLASS. See also the Generic Runtime Fusion Engines section (Graph API — NVIDIA cuDNN v9.1.0 documentation).
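For reference, the unfused baseline being described looks roughly like the sketch below (matrix shapes, names and the ReLU kernel are my own assumptions, not from the thread): a cuBLAS GEMM followed by a separate element-wise kernel, which costs an extra round trip through global memory. The fused alternatives mentioned in the replies (cublasLt epilogues, CUTLASS, the cuDNN Graph API) exist to eliminate that second pass.

#include <cublas_v2.h>
#include <cuda_runtime.h>

__global__ void relu_kernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = fmaxf(data[i], 0.0f);
}

// C = relu(A @ B); A is MxK, B is KxN, C is MxN, all column-major as cuBLAS expects.
void gemm_then_relu(cublasHandle_t handle, int M, int N, int K,
                    const float *A, const float *B, float *C)
{
    const float alpha = 1.0f, beta = 0.0f;
    // kernel 1: the matrix multiply, writing C to global memory
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, M, N, K,
                &alpha, A, M, B, K, &beta, C, M);
    // kernel 2: re-reads and re-writes C from global memory -- the round trip
    // that an epilogue fusion would avoid
    int threads = 256;
    int blocks = (M * N + threads - 1) / threads;
    relu_kernel<<<blocks, threads>>>(C, M * N);
}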
Democratizing AI Accelerators and GPU Kernel Programming using Triton by Sanjeev Rampal | Nov 7, 2024 | AI, Hybrid Cloud Red Hat’s Emerging Technologies blog includes posts that discuss technologies that are under active development in upstream open source communities and at Red Hat. We believe in sharing early and often the things we’re working on, but we want to note that unless otherwise stated the technologies and how-tos shared here aren’t part of supported products, nor promised to be in the future. Triton1 is a language and compiler for parallel programming. Specifically it is currently a Python-based DSL (Domain Specific Language) along with associated tooling, that enables the writing of efficient custom compute kernels used for implementing DNNs (Deep Neural Networks) and LLMs (Large Language Models), especially when executed on AI accelerators such as GPUs. The key goals for Triton are: Consequently Triton aims to democratize AI infrastructure, accelerate data science developer productivity (i.e., “developer inner loop”), enabling an open architecture for GPU and AI accelerator programming. The Triton project is currently released as open source by Philippe Tillet and OpenAI under an MIT license (with growing contributions from Meta and others). Red Hat is a strong proponent of Open Source AI technologies and innovations that facilitate a healthy and diverse hardware ecosystem that lowers costs and expands adoption of AI infrastructure solutions. In this blog post, we describe some of the foundational architecture topics in the Triton space and its connection with frameworks such as PyTorch. In subsequent blog posts, we will go into further details including the use of Triton on Red Hat platforms. Kernels, GPU programming models Figure 1 below illustrates a simplified view of a basic AI server model with a single multi-core CPU and a single GPU. A GPU kernel is simply the program/ function that runs on the GPU, loaded and invoked on-demand from the program running on the host CPU. A common GPU architecture is that of a SIMD/ SPMD machine that itself contains a number of processors (often referred to as Streaming Multiprocessors or SMs), which then themselves contain multiple smaller compute cores as well as specialized arithmetic units such as Tensor cores. When running a DNN application, the conventional design pattern is that the host CPU launches new GPU kernels onto the GPU, loads a (often very large) set of data into GPU memory, lets the GPU processors execute multiple parallel compute threads on the loaded kernel to perform a set of computations, (usually vector or matrix operations such as matrix multiplications) and then harvests the results back into CPU memory before potentially launching a follow up set of kernels and associated data. Later in this article we see how newer design patterns optimize this. A precise comparison of the vector processing differences between CPUs and GPUs is beyond the scope of this article, save to mention here that general purpose CPUs typically contains 10s or maybe 100s of general purpose compute cores with some amount of vector processing support, whereas GPUs contain 1 or 2 orders of magnitude more special purpose compute threads (e.g., “CUDA threads”) enabling massively parallel processing on large vectors in an SIMD manner and also have special purpose tensor arithmetic units e.g., “tensor cores”. Figure 2 (Ref. article) shows a common abstracted model of a single GPU that can serve for the purpose of this article. 
This shows that there is a multi-level memory hierarchy even within a single GPU, with an L1 SRAM based cache available to compute threads within a single SM, an L2 SRAM based cache shared by multiple SMs and a global GPU memory (often implemented using a form of DRAM called HBM or High Bandwidth Memory). The SRAM memories support higher throughput and significantly lower access latencies than the global HBM which in turn is faster than CPU DRAM. The exact numbers vary with GPU type but as an example on the H100, HBM memory bandwidth is 3 TB/s, L2 SRAM bandwidth is 12 TB/s and L1 SRAM bandwidth is 33 TB/s. Newer GPUs continue to add special capabilities that are not shown in the above simplified model, an example being the Tensor Memory Accelerator or TMA. A detailed analysis of the implications of these different memory hierarchies as well as the memory and compute bandwidths of these different components is beyond the scope of this article. The key point to note is that in order to achieve high efficiency and performance for DNNs, it is vital that all the compute cores, SMs and tensor cores of the GPU are kept busy, particularly with large transformer style models. See for example GPUs go brr. Due to advances in GPU hardware design technology, the raw compute capacity of GPU SMs have vastly outpaced the improvements in memory bandwidth implying that the arithmetic cores are often idling if data is not already in the GPU’s SRAM or if there isn’t enough computation designed in per HBM memory fetch. Due to the mismatch between throughputs of the various compute and memory components, a naively written AI application might well utilize the GPU’s SMs and tensor cores at less than 10% utilization (or sometimes even less than 1% utilization) resulting in both poor overall performance due to increased latency and overall execution time as well as high cost of providing the service given that these expensive GPUs are being utilized at a tiny fraction of their potential compute capacity. See for example “AI and memory wall”. Hence the performance of transformer model based applications is often said to be “memory-bound”, although there can be some scenarios where the performance is “compute-bound” as well. This is where a well designed GPU kernel can make a huge difference and improve overall performance and efficiency. Industry and academic research in this area has emphasized the need to design kernels that are better tuned to address this compute vs memory performance imbalance. See for example “Data movement is all you need” and “GPUs go brr”. Having learnt a bit about GPU architecture and the value of well written and tuned GPU kernels, we now come to what types of kernels one could have.  CUDA kernels are popular examples of such kernels that are specific to GPUs and AI accelerator hardware from one vendor. Similarly vendors of other AI chips have their own kernel stacks. For instance AMD has open sourced its software platform called ROCm for building compute kernels for its AI accelerators and GPUs. Triton With this background perspective, we now come to the Triton kernels.  Triton is a DSL for writing GPU kernels that aren’t tied to any one vendor. Additionally, it is architected to have multiple layers of compilers that automate the optimization and tuning needed to address the memory vs compute throughput tradeoff noted above without requiring the kernel developer to do so. This is the primary value of Triton. 
Using this combination of device-independent front-end compilation and device-dependent backend compilation, it is able to generate near-optimal code for multiple hardware targets ranging from existing accelerators to upcoming accelerator families from Intel, Qualcomm, Meta, Microsoft and more without requiring the kernel developer to know details of or design optimization strategies for each hardware architecture separately. Effectively, this moves the detailed and often vendor-specific complexity out of the user’s concern and into the vendor’s.  This is a more logical design split because vendors are incentivized to highlight their hardware’s differentiating features, and user’s immediately benefit without change to their applications and do not have to build a depth of engineering proficiency in GPU programming for each model GPU they own. However we are still in an early phase of this technology and should monitor how back-end compiler tuning performance evolves. Figure 3 below illustrates some of the different ways that Triton could fit into a DNN software stack in combination with a PyTorch framework. Triton can also be used in combination with other frameworks such as TensorFlow. As shown, a DNN application may choose to make direct calls to launch custom Triton kernels (either developer authored or leveraged from open source Triton kernel library repos) to perform certain GPU intensive functions. Alternatively, it may choose to leverage a framework such as PyTorch. Within the PyTorch community, additional frameworks have been recently introduced as part of PyTorch 2.x which can further automate the options for using Triton kernels. For instance by using the Torch Dynamo and Torch Inductor compiler frameworks, Triton kernels can be automatically generated and optimized (including optimizations such as kernel fusions) without the app developer having to manually write, fuse or invoke an existing library of Triton kernels directly. However not all kinds of kernel functions can be automatically generated in this manner with optimal performance, so a mix of generated and manually developed Triton kernels may be used in practice. Several industry and open source projects already exist to develop well designed Triton kernels for various DNN functions. Examples include. LinkedIn’s Liger, Unsloth and sample kernels from the Triton team. We haven’t listed example Triton kernels in this article in order to focus on architecture and analysis. The official Triton repo has some tutorials with kernel examples and explanations that may be referred to. In any case, once we have the set of well designed Triton kernels, the Triton language compiler performs multiple passes and its own set of optimizations over the kernel code to generate a Triton IR/ Intermediate Representation version of the code. Beyond this point, we enter the realm of device and GPU specific compilations where backend compilers, typically provided by GPU vendors, translate the lower versions of the kernels into GPU specific machine code binaries for eventual execution on the hardware runtimes. Although we have skipped over many of the details, it should be clear to the reader that the overall process involves multiple layers and passes of compilations, code generation and optimizations when going from a high level DNN application all the way to binary code executable on specific hardware devices and GPUs. 
Definitions of some key concepts The prior sections provided some intuition around design issues relevant to Triton, GPU kernels and frameworks such as PyTorch. For completeness, it is useful to have a short reference list below of common terminology which the reader may find handy when further researching the literature in this area, as well as in advance of our future blog posts on these topics. Kernel Fusion – Optimization technique of combining multiple compute operations/kernels into a single GPU kernel, optimizing memory transfers, minimizing latency and improving overall performance by executing related operations together in a single pass on the GPU. Auto-tuning – A process of automatically optimizing parameters (such as block size and thread count) to find the best-performing configuration for a specific hardware setup and workload, improving execution efficiency. Arithmetic Intensity – The ratio of computational operations (e.g., floating-point operations) to memory operations (data transfers), indicating how much work is done per memory access. Conclusion Triton is an important initiative in the move towards democratizing the use and programming of AI accelerators such as GPUs for Deep Neural Networks. In this article we shared some foundational concepts around this project. In future articles, we will dive into additional Triton details and illustrate its use in enterprise AI platforms from Red Hat. Acknowledgements: Thanks to Steve Royer (Red Hat), Jeremy Eder (Red Hat), Raffaele Spazzoli (Red Hat), Adnan Aziz (Meta), Adam Goucher (OpenAI), Madhukar Srivatsa (IBM) and Raghu Ganti (IBM) for their valued input and review.
GPU Optimization Fundamentals
Cliff Woolley, Developer Technology Engineer
© NVIDIA 2013

Note: the fundamentals apply broadly. Example performance numbers are presented for Tesla K20X, which is based on the Kepler GK110 GPU. The same general optimization concepts apply to other GPUs, though some parameters may be different, e.g. the number of SMs per GPU, the number of functional units per SM, the maximum number of concurrent warps per SM, the shared memory size per SM, and the register file size per SM. Developer tools from NVIDIA help you analyze the concepts without having to memorize the parameters of each architecture.

GPU OPTIMIZATION FUNDAMENTALS

Main Requirements for GPU Performance: expose sufficient parallelism; utilize parallel execution resources efficiently; use the memory system efficiently (coalesce global memory accesses, use shared memory where possible); have coherent execution within warps of threads.

APOD: A Systematic Path to Performance: Assess, Parallelize, Optimize, Deploy.

Assess: identify hotspots (total time, number of calls); understand scaling (strong and weak).

Parallelize: applications, libraries, programming languages, compiler directives.

Optimize: profile-driven optimization. Tools: nsight (Visual Studio Edition or Eclipse Edition), nvvp (NVIDIA Visual Profiler), nvprof (command-line profiling).

Deploy: check API return values, run the cuda-memcheck tools; library distribution, cluster management. Take the early gains; subsequent changes are evolutionary. Productize.

ASSESS

Profile the code, find the hotspot(s). Focus your attention where it will give the most benefit.

Assess: we’ve found a hotspot to work on! What percent of our total time does this represent? How much can we improve it? What is the “speed of light”? How much will this improve our overall performance?

Assess: let’s investigate strong scaling and Amdahl’s Law, weak scaling and Gustafson’s Law. Expected performance limiters: bandwidth? computation? latency?

Assess: Understanding Scaling. Strong scaling is a measure of how, for fixed overall problem size, the time to solution decreases as more processors are added to a system. Linear strong scaling: the speedup achieved is equal to the number of processors used. Amdahl’s Law: S = 1 / ((1 - P) + P/N) ≈ 1 / (1 - P).

Assess: Understanding Scaling. Weak scaling is a measure of how the time to solution changes as more processors are added with fixed problem size per processor. Linear weak scaling: the overall problem size increases as the number of processors increases, but the execution time remains constant. Gustafson’s Law: S = N + (1 - P)(1 - N).

Assess: Applying Strong and Weak Scaling. Understanding which type of scaling is most applicable is an important part of estimating speedup: sometimes the problem size will remain constant, other times the problem size will grow to fill the available processors. Apply either Amdahl's or Gustafson's Law to determine an upper bound for the speedup.

Assess: Applying Strong Scaling. Recall that in this case we want to optimize an existing kernel with a pre-determined workload. That’s strong scaling, so Amdahl’s Law will determine the maximum speedup.

Assess: Applying Strong Scaling. Say, for example, our kernel is ~93% of total time. Speedup S = 1 / ((1 - P) + P/S_P), where S_P is the speedup of the parallel part. In the limit when S_P is huge, S will approach 1 / (1 - 0.93) ≈ 14.3. In practice, it will be less than that, depending on the S_P achieved. Getting S_P to be high is the goal of optimizing, of course.

Assess: Speed of Light. What’s the limiting factor? Memory bandwidth? Compute throughput? Latency? Not sure? Get a rough estimate by counting bytes per instruction and comparing it to the “balanced” peak ratio (GBytes/sec) / (Ginsns/sec). The profiler will help you determine this.

Assess: Limiting Factor. Comparing bytes per instruction will give you a guess as to whether you’re likely to be bandwidth-bound or instruction-bound. Comparing actual achieved GB/s vs. theory and achieved Ginstr/s vs. theory will give you an idea of how well you’re doing. If both are low, then you’re probably latency-bound and need to expose more (concurrent) parallelism.

Assess: Speed of Light. What’s the limiting factor? Consider SpMV: intuitively we expect it to be bandwidth-limited. Say we discover we’re getting only ~38% of peak bandwidth. If we aim to get this up to ~65% of peak, that’s 1.7x for this kernel. 1.7x for this kernel translates into 1.6x overall due to Amdahl: S = 1 / ((1 - 0.93) + 0.93/1.7) ≈ 1.6.

Assess: Limiting Factor. For our example SpMV kernel, our first discovery was that we’re latency-limited, not bandwidth-limited, since utilization was so low. This tells us our first “optimization” step actually needs to be related to how we expose (memory-level) parallelism.

PARALLELIZE

Parallelize: Computation. Applications, libraries, programming languages, compiler directives. Pick the best tool for the job.

Parallelize: e.g., with GPU-accelerated libraries. NVIDIA cuFFT, cuSPARSE, cuBLAS, cuRAND, NPP (vector signal image processing), matrix algebra on GPU and multicore, C++ templated parallel algorithms, IMSL Library, GPU-accelerated linear algebra building-block algorithms, CenterSpace NMath.

Parallelize: e.g., with Thrust. Similar to the C++ STL: a high-level interface that enhances developer productivity and enables performance portability between GPUs and multicore CPUs. Flexible: backends for CUDA, OpenMP, TBB; extensible and customizable; integrates with existing software. Open source: thrust.github.com or developer.nvidia.com/thrust.
// generate 32M random numbers on host
thrust::host_vector<int> h_vec(32 << 20);
thrust::generate(h_vec.begin(), h_vec.end(), rand);
// transfer data to device (GPU)
thrust::device_vector<int> d_vec = h_vec;
// sort data on device
thrust::sort(d_vec.begin(), d_vec.end());
// transfer data back to host
thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());

Parallelize: e.g., with OpenACC.
Program myscience
... serial code ...
!$acc kernels
do k = 1,n1
do i = 1,n2
... parallel code ...
enddo
enddo
!$acc end kernels
...
End Program myscience

Your original Fortran or C code runs on the CPU and GPU. This is a directives-based approach: the compiler parallelizes the code, and it works on many-core GPUs and multicore CPUs. (www.nvidia.com/gpudirectives)

Parallelize: e.g., with CUDA C
Standard C code:

void saxpy_serial(int n, float a, float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a*x[i] + y[i];
}
// Perform SAXPY on 1M elements
saxpy_serial(4096*256, 2.0, x, y);

CUDA C code:

__global__ void saxpy_parallel(int n, float a, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a*x[i] + y[i];
}
// Perform SAXPY on 1M elements
saxpy_parallel<<<4096,256>>>(n, 2.0, x, y);

(developer.nvidia.com/cuda-toolkit)

Parallelism Needed
The GPU is a parallel machine: lots of arithmetic pipelines and multiple memory banks. To get good performance, your code must expose sufficient parallelism for two reasons: to actually give work to all the pipelines, and to hide the latency of the pipelines. Rough rule of thumb for the Tesla K20X: you want 14K or more threads running concurrently.

Case Study: Matrix Transpose

void transpose(float in[][], float out[][], int N)
{
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            out[j][i] = in[i][j];
}

An Initial CUDA Version

__global__ void transpose(float in[], float out[], int N)
{
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            out[i*N+j] = in[j*N+i];
}

float in[N*N], out[N*N];
…
transpose<<<1,1>>>(in, out, N);

+ Quickly implemented
− Performance weak. Need to expose parallelism!
Parallelize across matrix elements: each thread processes one element independently, indexed by its thread ID (tid) within the block and its block ID (bid).

__global__ void transpose(float in[], float out[], int N)
{
    int tid = threadIdx.x;
    int bid = blockIdx.x;
    out[tid*N + bid] = in[bid*N + tid];
}

float in[], out[];
…
transpose<<<N, N>>>(in, out, N);

PARALLELIZE: Data Transfer

Heterogeneous system: overlap work and data movement. Asynchronicity = overlap = parallelism (the DMA engines can move data while the CPU and GPU compute).

This is the kind of case we would be concerned about: we found the top kernel, but the GPU is mostly idle — that is our bottleneck. We need to overlap CPU/GPU computation and PCIe transfers. What we want to see is maximum overlap of all engines.

OPTIMIZE

Main Requirements for GPU Performance (recap)
Expose sufficient parallelism. Utilize parallel execution resources efficiently. Use the memory system efficiently: coalesce global memory accesses and use shared memory where possible. Have coherent execution within warps of threads.

GPU Optimization Fundamentals
Find ways to parallelize sequential code. Adjust the kernel launch configuration to maximize device utilization. Ensure global memory accesses are coalesced. Minimize redundant accesses to global memory. Avoid different execution paths within the same warp. Minimize data transfers between the host and the device. (http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/)

The rest of this section covers kernel optimizations (launch configuration, global memory throughput, shared memory access, instruction throughput / control flow) and optimization of CPU-GPU interaction (maximizing PCIe throughput, overlapping kernel execution with memory copies).

OPTIMIZE — Kernel Optimizations: Kernel Launch Configuration

A kernel is a function that runs on the GPU. A kernel is launched as a grid of blocks of threads. The launch configuration is the number of blocks and the number of threads per block, expressed in CUDA with the <<< >>> notation:

mykernel<<<blocks_per_grid, threads_per_block>>>(…);

What values should we pick for these?
Need enough total threads to process entire input Need enough threads to keep the GPU busy Selection of block size is an optimization step involving warp occupancy © NVIDIA 2013 High-level view of GPU Architecture Several Streaming Multiprocessors E.g., Kepler GK110 has up to 15 SMs L2 Cache shared among SMs Multiple channels to DRAM Kepler GK110 © NVIDIA 2013 Kepler Streaming Multiprocessor (SMX) Per SMX: 192 SP CUDA Cores 64 DP CUDA Cores 4 warp schedulers Up to 2048 concurrent threads One or two instructions issued per scheduler per clock from a single warp Register file (256KB) Shared memory (48KB) © NVIDIA 2013 CUDA Execution Model Thread: Sequential execution unit All threads execute same sequential program Threads execute in parallel Threads Block: a group of threads Executes on a single Streaming Multiprocessor (SM) Threads within a block can cooperate Light-weight synchronization Data exchange Grid: a collection of thread blocks Thread blocks of a grid execute across multiple SMs Thread blocks do not synchronize with each other Communication between blocks is expensive © NVIDIA 2013 Software Hardware Threads are executed by scalar CUDA Cores Thread CUDA Core Thread Block Multiprocessor Thread blocks are executed on multiprocessors Thread blocks do not migrate Several concurrent thread blocks can reside on one multiprocessor - limited by multiprocessor resources (shared memory and register file) Grid A kernel is launched as a grid of thread blocks Execution Model Device © NVIDIA 2013 Launch Configuration: General Guidelines How many blocks should we use? 1,000 or more thread blocks is best Rule of thumb: enough blocks to fill the GPU at least 10s of times over Makes your code ready for several generations of future GPUs © NVIDIA 2013 Launch Configuration: General Guidelines How many threads per block should we choose? The really short answer: 128, 256, or 512 are often good choices The slightly longer answer: Pick a size that suits the problem well Multiples of 32 threads are best Pick a number of threads per block (and a number of blocks) that is sufficient to keep the SM busy © NVIDIA 2013 Multiprocessor 32 Threads Warps A thread block consists of warps of 32 threads A warp is executed physically in parallel on some multiprocessor. Threads of a warp issue instructions in lock- step (as with SIMD) = Warps Thread Block 32 Threads 32 Threads 32 Threads © NVIDIA 2013 Hardware Levels of Parallelism SIMD MPI Single Instruction, Multiple Data In-core parallelism SMT Simultaneous Multithreading Cross-core, Cross-socket Single Computer OpenMP, pthreads Multiple “computers” Tightly-coupled Supercomputing apps SIMT Single Instruction, Multiple Threads In-processor parallelism Many threads on many cores These form a continuum. Best performance is achieved with a mix. © NVIDIA 2013 Low Latency or High Throughput? CPU Optimized for low-latency access to cached data sets Control logic for out-of-order and speculative execution GPU Optimized for data-parallel, throughput computation Architecture tolerant of memory latency More transistors dedicated to computation © NVIDIA 2013 Occupancy Need enough concurrent warps per SM to hide latencies: Instruction latencies Memory access latencies Hardware resources determine number of warps that fit per SM Occupancy = Nactual / Nmax © NVIDIA 2013 Low Latency or High Throughput? 
CPU architecture must minimize latency within each thread GPU architecture hides latency with computation from other (warps of) threads GPU Streaming Multiprocessor – High-throughput Processor CPU core – Low-latency Processor Computation Thread/Warp Tn Processing Waiting for data Ready to be processed Context switch W1 W2 W3 W4 T1 T2 T3 T4 © NVIDIA 2013 Latency Hiding Instruction latencies: Roughly 10-20 cycles for arithmetic operations DRAM accesses have higher latencies (400-800 cycles) Instruction Level Parallelism (ILP) Independent instructions between two dependent ones ILP depends on the code, done by the compiler Switching to a different warp If a warp must stall for N cycles due to dependencies, having N other warps with eligible instructions keeps the SM going Switching among concurrently resident warps has no overhead State (registers, shared memory) is partitioned, not stored/restored FFMA R0, R43, R0, R4; FFMA R1, R43, R4, R5; FMUL R7, R9, R0; FMUL R8, R9, R1; ST.E [R2], R7; ILP=2 © NVIDIA 2013 Occupancy Occupancy: number of concurrent warps per SM, expressed as: Absolute number of warps of threads that fit concurrently (e.g., 1..64), or Ratio of warps that fit concurrently to architectural maximum (0..100%) Number of warps that fit determined by resource availability: Threads per thread block Registers per thread Shared memory per thread block Kepler SM resources: – 64K 32-bit registers – Up to 48 KB of shared memory – Up to 2048 concurrent threads – Up to 16 concurrent thread blocks © NVIDIA 2013 Occupancy and Performance Note that 100% occupancy isn’t needed to reach maximum performance Once the “needed” occupancy (enough warps to switch among to cover latencies) is reached, further increases won’t improve performance Level of occupancy needed depends on the code More independent work per thread -> less occupancy is needed Memory-bound codes tend to need more occupancy Higher latency than for arithmetic, need more work to hide it © NVIDIA 2013 Thread Block Size and Occupancy Thread block size is a multiple of warp size (32) Even if you request fewer threads, hardware rounds up Thread blocks can be too small Kepler SM can run up to 16 thread blocks concurrently SM can reach the block count limit before reaching good occupancy E.g.: 1-warp blocks = 16 warps/SM on Kepler (25% occ – probably not enough) Thread blocks can be too big Enough SM resources for more threads, but not enough for a whole block A thread block isn’t started until resources are available for all of its threads © NVIDIA 2013 Thread Block Sizing SM resources: Registers Shared memory Number of warps allowed by SM resources Too few threads per block Too many threads per block © NVIDIA 2013 CUDA Occupancy Calculator Analyze effect of resource consumption on occupancy © NVIDIA 2013 Occupancy Analysis in NVIDIA Visual Profiler Occupancy here is limited by grid size and number of threads per block © NVIDIA 2013 OPTIMIZE Kernel Optimizations: Global Memory Throughput © NVIDIA 2013 Host CPU Chipset DRAM Device DRAM Global Constant Texture Local GPU Multiprocessor Registers Shared Memory Multiprocessor Registers Shared Memory Multiprocessor Registers Shared Memory Constant and Texture Caches L1 / L2 Cache CUDA Memory Architecture © NVIDIA 2013 Optimizing Memory Throughput Goal: utilize all available memory bandwidth Little’s Law: # bytes in flight = latency * bandwidth Increase parallelism (bytes in flight) (or) Reduce latency (time between requests) Access latency L © NVIDIA 2013 Illustration: Little’s Law for 
Escalators
Say the parameters of our escalator are: one person fits on each step; a step arrives every 2 seconds (bandwidth = 0.5 persons/s); and it is 20 steps tall (latency = 40 seconds). With only one person in flight, we achieve 0.025 persons/s. To saturate bandwidth we need one person arriving every 2 seconds, which means we need 20 persons in flight. The idea: bandwidth × latency. It takes "latency" time units for the first person to arrive, and we need "bandwidth" persons getting on the escalator every time unit.

Memory-Level Parallelism = Bandwidth
In order to saturate memory bandwidth, an SM must have enough independent memory requests in flight concurrently.

Memory-Level Parallelism: Requests in Flight
(Chart: achieved Kepler memory throughput, shown as a function of the number of concurrent requests per SM with 128-byte lines.)

Requests per Thread and Performance
Experiment: vary the size of the accesses made by the threads of a warp and check performance. In a memcopy kernel, each warp has 2 concurrent requests (one write and the read following it). Accesses by a warp: 4-byte words touch 1 line, 8-byte words 2 lines, 16-byte words 4 lines. To achieve the same throughput at lower occupancy or with smaller words, you need more independent requests per warp.

Optimizing Access Concurrency
Ways to increase concurrent accesses: increase occupancy (run more warps concurrently) by adjusting block dimensions to maximize occupancy or, if occupancy is limited by registers per thread, by reducing the register count (the -maxrregcount option or __launch_bounds__); or modify the code to process several elements per thread — doubling the elements per thread doubles the independent accesses per thread.
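As a concrete illustration of that last point, here is a minimal sketch (not from the slides) of a copy kernel in which each thread issues several independent, wider loads per iteration; the float4 element type and the ELEMS_PER_THREAD factor are illustrative choices, not prescribed values:

#define ELEMS_PER_THREAD 4

// Each thread copies ELEMS_PER_THREAD float4 values (16 bytes each) per outer step.
// The loads in the first inner loop are independent of one another, so the hardware
// can keep them all in flight, raising memory-level parallelism without more warps.
__global__ void copy_vec4(const float4 *in, float4 *out, int n4)
{
    int stride = blockDim.x * gridDim.x;
    int base   = blockIdx.x * blockDim.x + threadIdx.x;
    for (int i = base; i < n4; i += stride * ELEMS_PER_THREAD) {
        float4 v[ELEMS_PER_THREAD];
        // issue several independent 16-byte loads before using any of them
        for (int k = 0; k < ELEMS_PER_THREAD; ++k)
            if (i + k * stride < n4) v[k] = in[i + k * stride];
        for (int k = 0; k < ELEMS_PER_THREAD; ++k)
            if (i + k * stride < n4) out[i + k * stride] = v[k];
    }
}

Compared with a one-float-per-thread copy, each thread here has four independent 16-byte requests outstanding, so the same bytes-in-flight target can be met with fewer resident warps.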
OPTIMIZE — Kernel Optimizations: Global Memory Access Coalescing

Mechanics of a Memory Access
Memory operations are issued per warp, just like all other instructions. Operation: the threads in a warp provide memory addresses, and the hardware determines which lines/segments are needed and fetches them.

Memory Access Efficiency Analysis
Two perspectives on throughput: the application's point of view counts only the bytes requested by the application; the hardware's point of view counts all bytes moved by the hardware. The two views can differ: memory is accessed at 32-byte granularity, so with a scattered or offset pattern the application doesn't use all the bytes the hardware actually transferred. Broadcast is the opposite case: the same small transaction serves many threads in a warp.

Access Patterns vs. Memory Throughput
Scenario: a warp requests 32 aligned, consecutive 4-byte words. The addresses fall within 4 segments; the warp needs 128 bytes and 128 bytes move across the bus. Bus utilization: 100%.

Scenario: a warp requests 32 aligned, permuted 4-byte words. The addresses still fall within 4 segments; the warp needs 128 bytes and 128 bytes move across the bus. Bus utilization: 100%.

Scenario: a warp requests 32 misaligned, consecutive 4-byte words. The addresses fall within at most 5 segments; the warp needs 128 bytes and at most 160 bytes move across the bus. Bus utilization: at least 80% (some misaligned patterns will fall within 4 segments, giving 100% utilization).

Scenario: all threads in a warp request the same 4-byte word. The addresses fall within a single segment; the warp needs 4 bytes but 32 bytes move across the bus. Bus utilization: 12.5%.

Scenario: a warp requests 32 scattered 4-byte words. The addresses fall within N segments; the warp needs 128 bytes but N*32 bytes move across the bus. Bus utilization: 128 / (N*32).

Parallelizing SAXPY
One approach is to divide the work equally among T threads, with each thread responsible for computing one contiguous 'region' of the arrays. This is good for pthreads:

__global__ void saxpy1(int n, float a, float *x, float *y)
{
    int workPerThread = 1 + n/blockDim.x;
    int base = threadIdx.x * workPerThread;
    for (int i = 0; i < workPerThread; i++) {
        if (base + i < n) {
            y[base + i] += a * x[base + i];
        }
    }
}

In SIMT, however, the 32 threads of a warp issue the x[base+i] load simultaneously, and each thread has a different value of base. If workPerThread > 1, this becomes a strided load.

A Better Way to Parallelize SAXPY
Divide the work up so that on each pass through the loop, the thread block computes one 'contiguous region' of the array. This achieves memory coalescing:

__global__ void saxpy2(int n, float a, float *x, float *y)
{
    int loopCount = 0;
    int id = threadIdx.x;
    while (id < n) {
        y[id] += a * x[id];
        loopCount++;
        id = loopCount*blockDim.x + threadIdx.x;
    }
}

The area of x addressed by each warp is contiguous in global memory, so the number of global memory transactions is minimized. The same effect applies to the loads and stores of y.
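The saxpy2 pattern above indexes with threadIdx.x only, so as written it describes a single block. A common generalization (a sketch of my own, not code from the slides) is a grid-stride loop, which keeps each warp's accesses contiguous while letting many blocks share the array:

__global__ void saxpy_gridstride(int n, float a, const float *x, float *y)
{
    // Consecutive threads read consecutive addresses on every iteration,
    // so each warp's loads and stores stay fully coalesced.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x) {
        y[i] += a * x[i];
    }
}

// Example launch: enough blocks to fill the machine, independent of n.
// saxpy_gridstride<<<1024, 256>>>(n, 2.0f, x, y);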
Structures of Non-Native Size
Say we are reading a 12-byte structure per thread:

struct Position { float x, y, z; };
...
__global__ void kernel(Position *data, ...)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    Position temp = data[idx];
    ...
}

The compiler converts temp = data[idx] into 3 loads, each loading 4 bytes. It can't do an 8-byte and a 4-byte load: 12 bytes per element means that every other element wouldn't align the 8-byte load on an 8-byte boundary. For each of the three load instructions, successive threads in the warp read 4 bytes at a 12-byte stride. (Diagrams: the addresses touched by a warp for the first, second, and third load instructions.)

Performance and Solutions
Because of this address pattern, we end up moving 3x more bytes than the application requests, wasting a lot of bandwidth and leaving performance on the table. Potential solutions: change the data layout from an array of structures to a structure of arrays (in this case, 3 separate arrays of floats) — the most reliable approach, and also ideal for both CPUs and GPUs; use loads via the read-only cache — as long as lines survive in the cache, performance will be nearly optimal; or stage the loads via shared memory.

Global Memory Access Patterns
SoA vs AoS — good: point.x[i]; not so good: point[i].x. Strided array access — ~OK: x[i] = a[i+1] − a[i]; slower: x[i] = a[64*i] − a[i]. Random array access — slower: a[rand(i)].

Summary: GMEM Optimization
Strive for perfect address coalescing per warp: align the starting address (which may require padding), have each warp access a contiguous region, and avoid scattered address patterns or patterns with large strides between threads. Analyze and optimize address patterns: use the profiling tools (included with the CUDA toolkit download) and compare the transactions per request to the ideal ratio. Choose an appropriate data layout (prefer SoA). If needed, try read-only loads or staging accesses via SMEM.
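To make the AoS-to-SoA change concrete, here is a minimal sketch (an illustration, not code from the slides) of the same Position example rewritten as a structure of arrays; the PositionsSoA and kernel_soa names are hypothetical:

// AoS: struct Position { float x, y, z; }; Position *data;  -> three strided 4-byte loads per thread
// SoA: three separate float arrays -> each load below is unit-stride and fully coalesced
struct PositionsSoA {
    float *x;
    float *y;
    float *z;
};

__global__ void kernel_soa(PositionsSoA pos, float *dist2, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        float x = pos.x[idx];   // consecutive threads read consecutive floats
        float y = pos.y[idx];
        float z = pos.z[idx];
        dist2[idx] = x*x + y*y + z*z;   // example use of the loaded values
    }
}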
A Note About Caches
L1 and L2 caches: ignore them in software design. With thousands of concurrent threads, cache blocking is difficult at best. The read-only data cache is shared with the texture pipeline; it is useful for uncoalesced reads and is handled by the compiler when const __restrict__ is used, or via the __ldg() primitive.

Blocking for GPU Memory Caches
Short answer: DON'T. GPU caches are not intended for the same use as CPU caches: they are smaller (especially per thread), so they are not aimed at temporal reuse; they are intended to smooth out some access patterns, help with spilled registers, and so on. It is usually not worth trying to cache-block like you would on a CPU, with hundreds to thousands of run-time-scheduled threads competing for the cache. If it is possible to block for L1, then it is possible to block for SMEM: same size, same or higher bandwidth, and guaranteed locality (the hardware will not evict behind your back).

Read-only Data Cache
Loads can go through the read-only cache. It is not coherent with writes, so the addresses read must not be written by the same kernel. There are two ways to enable it. The first is decorating pointer arguments as hints to the compiler: mark the pointer of interest const __restrict__ and all other pointer arguments __restrict__, which conveys to the compiler that no aliasing will occur:

__global__ void kernel(int* __restrict__ output,
                       const int* __restrict__ input)
{
    ...
    output[idx] = input[idx];
}

The second is using the __ldg() intrinsic, which requires no pointer decoration:

__global__ void kernel(int *output, int *input)
{
    ...
    output[idx] = __ldg(&input[idx]);
}

Texture and Constant Memory
Read-only data that resides in global memory, read via special-purpose caches.

Texture
A separate cache. Dedicated texture cache hardware provides: out-of-bounds index handling (clamp or wrap-around); optional interpolation (think: using floating-point indices for arrays; linear, bilinear, trilinear, with 9-bit interpolation weights); and optional format conversion ({char, short, int} -> float). All of these are "free".

Examples of Texture Object Indexing
(Figure: index clamping and index wrap-around on a small 2D texture; integer indices fall between elements, and optional interpolation weights are determined by coordinate distance.)

OPTIMIZE — Kernel Optimizations: Shared Memory Accesses

Shared Memory
Fast, on-chip memory, accessible by all threads within a thread block, with a common allocation for the entire block. It has a variety of uses: a software-managed cache (e.g., tiled DGEMM), global memory coalescing (e.g., transpose), and communication within a thread block (e.g., FFT, reductions). It is a limited resource: the use of shared memory affects occupancy.

Shared Memory Organization
Organized in 32 independent banks. Optimal access: no two words from the same bank. Any 1:1 or multicast pattern is fine — banks can multicast — but multiple words from the same bank serialize.

Bank Addressing Examples
(Figure: two access patterns with no bank conflicts — threads 0..31 mapping one-to-one onto banks 0..31, either in order or permuted.)
Bank Addressing Examples
(Figure: a 2-way bank conflict, where pairs of threads hit the same bank, and an 8-way bank conflict, where groups of eight threads hit the same bank.)

Motivating Example: Matrix Transpose

__global__ void gpuTranspose_kernel(int rows, int cols, float *in, float *out)
{
    int i, j;
    i = blockIdx.x * blockDim.x + threadIdx.x;
    j = blockIdx.y * blockDim.y + threadIdx.y;
    out[i * rows + j] = in[j * cols + i];
}

Either the write or the read is strided in global memory and therefore uncoalesced. Solution: tile in shared memory.

Transposing with Shared Memory
1. Read block_ij into shared memory — the reads are coalesced.
2. Transpose the shared memory indices.
3. Write the transposed block to global memory — the writes are coalesced.

Shared Memory Organization (continued)
Shared memory is organized in 32 independent banks. Note: this is the same as the warp size — not a coincidence. Successive 32-bit words fall in successive banks, modulo 32. Optimal access has no two words from the same bank; any 1:1 or multicast pattern works, since banks can multicast. Multiple words from the same bank serialize; this is called a bank conflict and causes instruction replay.

Shared Memory: Avoiding Bank Conflicts
Example: a 32x32 SMEM array. When a warp accesses a column, all 32 threads access the same bank: a 32-way bank conflict. Accesses along a row produce 0 bank conflicts; accesses along a column produce 32 replays.

Add a column for padding: a 32x33 SMEM array. Now a warp accessing a column touches 32 different banks, so there are no bank conflicts — accesses along either rows or columns are conflict-free.

Shared Memory / L1 Sizing
Shared memory and L1 use the same 64KB of physical memory, with a program-configurable split: 48:16 or 16:48 on Fermi; 48:16, 16:48, or 32:32 on Kepler (CUDA API: cudaDeviceSetCacheConfig(), cudaFuncSetCacheConfig()). A large L1 can improve performance when spilling registers (more lines in the cache means fewer evictions); a large SMEM can improve performance when occupancy is limited by shared memory.
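Putting the tiled-transpose recipe and the padding trick together, here is a minimal sketch of a shared-memory transpose kernel; it is an illustration under the assumptions above (a 32x32 tile, a 32x33 shared array, a transpose_smem name chosen here), not the code from the slides:

#define TILE_DIM 32

__global__ void transpose_smem(const float *in, float *out, int rows, int cols)
{
    // +1 column of padding so that column accesses hit 32 different banks
    __shared__ float tile[TILE_DIM][TILE_DIM + 1];

    int x = blockIdx.x * TILE_DIM + threadIdx.x;
    int y = blockIdx.y * TILE_DIM + threadIdx.y;
    if (x < cols && y < rows)
        tile[threadIdx.y][threadIdx.x] = in[y * cols + x];   // coalesced read

    __syncthreads();

    // swap the block indices so the write is also coalesced
    x = blockIdx.y * TILE_DIM + threadIdx.x;
    y = blockIdx.x * TILE_DIM + threadIdx.y;
    if (x < rows && y < cols)
        out[y * rows + x] = tile[threadIdx.x][threadIdx.y];  // conflict-free thanks to the padding
}

// Example launch, assuming a 32x32 thread block:
// dim3 block(TILE_DIM, TILE_DIM);
// dim3 grid((cols + TILE_DIM - 1) / TILE_DIM, (rows + TILE_DIM - 1) / TILE_DIM);
// transpose_smem<<<grid, block>>>(in, out, rows, cols);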
Final Notes on Shared Memory
Shared memory is fast: high bandwidth, low latency. It is useful as a user-managed cache for coalescing, caching, and communication within a thread block. The shared memory / L1 cache split is API-configurable: 16k L1 / 48k shared (the default on both Fermi and Kepler), 48k L1 / 16k shared, or 32k L1 / 32k shared (Kepler only). Be careful of overuse (excessive allocation can hurt occupancy) and of access patterns (lots of bank conflicts can hurt performance).

OPTIMIZE — Kernel Optimizations: Instruction Throughput / Control Flow

Exposing Sufficient Parallelism
What SMX ultimately needs is a sufficient number of independent instructions; Kepler GK110 is "wider" than Fermi or GK104, so it needs more parallelism. There are two ways to increase parallelism: more independent instructions (ILP) within a thread (warp), or more concurrent threads (warps).

Independent Instructions: ILP vs. TLP
SMX can leverage available instruction-level parallelism more or less interchangeably with thread-level parallelism. Sometimes it is easier to increase ILP than TLP — for example, the number of threads may be limited by the algorithm or by hardware resource limits. But if each thread has some degree of independent operations to do (e.g., a small unrolled loop), the Kepler SMX can leverage that. In fact, some degree of ILP is actually required to approach the theoretical maximum instructions per clock (IPC).

Control Flow
Instructions are issued per 32 threads (a warp). Divergent branches occur when threads within a single warp take different paths (if-else, ...); the different execution paths within a warp are serialized. Different warps can execute different code with no impact on performance.

Avoid diverging within a warp. Note that some divergence is not necessarily a problem, but large amounts impact execution efficiency. Example with divergence: if (threadIdx.x > 2) {...} else {...} — the branch granularity is less than the warp size. Example without divergence: if (threadIdx.x / warpSize > 2) {...} else {...} — the branch granularity is a whole multiple of the warp size.

(Figures: when execution within a warp is coherent, all 32 threads of a warp step through the same instructions together; when execution diverges within a warp, the then-clause and else-clause are serialized. Solution: group threads with similar control flow.)

Runtime Math Library and Intrinsics
There are two types of runtime math library functions. The __func() intrinsics mostly map directly to the hardware ISA: fast but lower accuracy (see the CUDA Programming Guide for full details); examples: __sinf(x), __expf(x), __powf(x, y). The func() versions compile to multiple instructions: slower but higher accuracy (5 ulp or less); examples: sin(x), exp(x), pow(x, y). There are also a number of additional intrinsics: __sincosf(), __frcp_rz(), ...
Explicit IEEE rounding modes (rz,rn,ru,rd) © NVIDIA 2013 OPTIMIZE Optimizing CPU-GPU Interaction: Maximizing PCIe Throughput © NVIDIA 2013 Maximizing PCIe Throughput Use transfers that are of reasonable size (a few MB, at least) Use pinned system memory Overlap memcopies with useful computation © NVIDIA 2013 Pinned (non-pageable) memory Pinned memory enables: faster PCIe copies memcopies asynchronous with CPU memcopies asynchronous with GPU Usage cudaHostAlloc / cudaFreeHost instead of malloc / free cudaHostRegister / cudaHostUnregister pin regular memory after allocation Implication: pinned memory is essentially removed from host virtual memory © NVIDIA 2013 Asynchronicity in CUDA Default: Kernel launches are asynchronous with CPU Memcopies (D2H, H2D) block CPU thread CUDA calls are serialized by the driver Streams and async functions provide additional asynchronicity: Memcopies (D2H, H2D) asynchronous with CPU Ability to concurrently execute kernels and memcopies Stream: sequence of ops that execute in issue-order on GPU Operations from different streams may be interleaved Kernels and memcopies from different streams can be overlapped © NVIDIA 2013 OPTIMIZE Optimizing CPU-GPU Interaction: Overlapping Kernel Execution with Memory Copies © NVIDIA 2013 Overlap kernel and memory copy Requirements: D2H or H2D memcopy from pinned memory Kernel and memcopy in different, non-0 streams Code: cudaStream_t stream1, stream2; cudaStreamCreate(&stream1); cudaStreamCreate(&stream2); cudaMemcpyAsync( dst, src, size, dir, stream1 ); kernel<<<grid, block, 0, stream2>>>(…); potentially overlapped © NVIDIA 2013 Call Sequencing for Optimal Overlap CUDA calls are dispatched in the sequence they were issued Kepler can concurrently execute: Up to 32 kernels Up to 2 memcopies, as long as they are in different directions (D2H, H2D) A call is dispatched if both are true: Resources are available Preceding calls in the same stream have completed Scheduling: Kernels are executed in the order in which they were issued Thread blocks for a given kernel are scheduled if all thread blocks for preceding kernels have been scheduled and SM resources still available © NVIDIA 2013 Hyper-Q Enables Efficient Scheduling Grid Management Unit selects most appropriate task from up to 32 hardware queues (CUDA streams) Improves scheduling of concurrently executed grids Particularly interesting for MPI applications when combined with CUDA MPS (though not limited to MPI applications) © NVIDIA 2013 Stream Examples without Hyper-Q K1,M1,K2,M2: K1 K1 M1 M1 K2 K2 M2 M2 K1,K2,M1,M2: K1 K1 M1 M1 K2 K2 M2 M2 K1,M1,M2: K1 K1 M1 M1 M2 M2 K1,M2,M1: K1 K1 M1 M1 M2 M2 K1,M2,M2: K1 K1 M2 M2 M2 M2 Time K: Kernel M: Memcopy Integer: Stream ID © NVIDIA 2013 Stream Examples with Hyper-Q K1,M1,K2,M2: K1 K1 M1 M1 K2 K2 M2 M2 K1,K2,M1,M2: K1 K1 M1 M1 K2 K2 M2 M2 K1,M1,M2: K1 K1 M1 M1 M2 M2 K1,M2,M1: K1 K1 M1 M1 M2 M2 K1,M2,M2: K1 K1 M2 M2 M2 M2 Time K: Kernel M: Memcopy Integer: Stream ID © NVIDIA 2013 Grid Management Work Distributor 32 active grids Stream Queue Mgmt C B A R Q P Z Y X Grid Management Unit Pending & Suspended Grids 1000s of pending grids SMX SMX SMX SMX SM SM SM SM Work Distributor 16 active grids Stream Queue Mgmt C B A Z Y X R Q P CUDA Generated Work Fermi Kepler GK110 © NVIDIA 2013 Stream Dependencies Example void foo(void) { kernel_A<<<g,b,s, stream_1>>>(); kernel_B<<<g,b,s, stream_1>>>(); kernel_C<<<g,b,s, stream_1>>>(); } void bar(void) { kernel_P<<<g,b,s, stream_2>>>(); kernel_Q<<<g,b,s, stream_2>>>(); kernel_R<<<g,b,s, 
stream_2>>>();
}

(Diagram: stream_1 runs kernel_A, kernel_B, kernel_C; stream_2 runs kernel_P, kernel_Q, kernel_R.)

Stream Dependencies without Hyper-Q
With a single hardware work queue, the two streams are serialized into it (R—Q—P behind C—B—A), creating false dependencies between streams.

Stream Dependencies with Hyper-Q
Hyper-Q allows 32-way concurrency and avoids inter-stream dependencies: C—B—A and R—Q—P go into multiple hardware work queues.

Hyper-Q Example: Building a Pipeline
Heterogeneous system: overlap work and data movement. Kepler + CUDA 5 provide Hyper-Q and CPU callbacks; the DMA engines move data while the GPU computes.

Tick-Tock Matrix Multiply

cudaMemcpyAsync(devA1, A[tile0], N, cudaMemcpyHostToDevice, stream1);
cudaMemcpyAsync(devB1, B[tile0], N, cudaMemcpyHostToDevice, stream1);
DGEMM<<<g,b,s, stream1>>>(devA1, devB1, devC1);

cudaMemcpyAsync(devA2, A[tile1], N, cudaMemcpyHostToDevice, stream2);
cudaMemcpyAsync(devB2, B[tile1], N, cudaMemcpyHostToDevice, stream2);
DGEMM<<<g,b,s, stream2>>>(devA2, devB2, devC2);

cudaMemcpyAsync(C[tile0], devC1, N, cudaMemcpyDeviceToHost, stream1);
cudaMemcpyAsync(devA1, A[tile2], N, cudaMemcpyHostToDevice, stream1);
cudaMemcpyAsync(devB1, B[tile2], N, cudaMemcpyHostToDevice, stream1);
DGEMM<<<g,b,s, stream1>>>(devA1, devB1, devC1);

cudaMemcpyAsync(C[tile1], devC2, N, cudaMemcpyDeviceToHost, stream2);
cudaMemcpyAsync(devA2, A[tile3], N, cudaMemcpyHostToDevice, stream2);
cudaMemcpyAsync(devB2, B[tile3], N, cudaMemcpyHostToDevice, stream2);
DGEMM<<<g,b,s, stream2>>>(devA2, devB2, devC2);

(Diagram: copy tile 0 (stream 1), copy tile 1 (stream 2), compute tile 0, copy tile 2, compute tile 1, copy tile 3, compute tile 2, copy tile 4, compute tile 3, copy tile 5, compute tile 4 — copies and DGEMMs from the two streams interleave, moving tiles between CPU and GPU memory.)

Just a Higher Level of Parallelism
The problem is decomposed into parallel "workers". At any given time, one worker is using the compute resources while another is using the copy engines. Importantly, the PCIe link is kept saturated with useful work, and for DGEMM the compute is also saturated. The balancing is architecture-specific: it depends on the CPU and GPU characteristics. (Result matrix: tiles are alternately computed by stream 1 and stream 2.)

Pipeline Code

for (unsigned int i = 0; i < nIterations; ++i) {
    // Copy data from host to device
    cudaMemcpyAsync(d_data, h_data, cpybytes, cudaMemcpyHostToDevice,
                    *r_streams.active());
    // Launch device kernel A
    kernel_A<<<gdim, bdim, 0, *r_streams.active()>>>();
    // Copy data from device to host
    cudaMemcpyAsync(h_data, d_data, cpybytes, cudaMemcpyDeviceToHost,
                    *r_streams.active());
    // Launch host post-process
    cudaStreamAddCallback(*r_streams.active(), cpu_callback,
                          r_streamids.active(), 0);
    // Rotate streams
    r_streams.rotate();
    r_streamids.rotate();
}

Pipeline Without Hyper-Q: false dependencies prevent overlap; a breadth-first launch order gives overlap but requires more complex code.
Pipeline With Hyper-Q: full overlap of all engines, and simple to program.

Hyper-Q Also Enables CUDA MPS
No application modifications are necessary. Start the MPS daemon using nvidia_cuda_mps_control -d; the CUDA driver detects the daemon and routes GPU accesses through it. MPS combines requests from several processes into one GPU context (shared virtual memory space, concurrent kernels possible, etc.) and allows kernels to overlap with memcopies without explicit use of streams.

But Hyper-Q != CUDA MPS
One process: no MPS required!
Automatically utilized One or many host threads no problem Just need multiple CUDA streams Removes false dependencies among CUDA streams that reduce effective concurrency on earlier GPUs Multi-process: Use CUDA MPS Leverages task-level parallelism across processes (e.g., MPI ranks) MPI is not required for MPS – it’s just the common case for HPC © NVIDIA 2013 Deploy We’ve removed (or reduced) some bottleneck Our app is now faster while remaining fully functional* Let’s take advantage of that! *Don’t forget to check correctness at every step © NVIDIA 2013 GPU Optimization Fundamentals Recap: Develop systematically with APOD Expose sufficient parallelism Utilize parallel processing resources efficiently Assess Parallelize Optimize Deploy © NVIDIA 2013 Online Resources www.udacity.com docs.nvidia.com developer.nvidia.com devtalk.nvidia.com www.stackoverflow.com
DeepSpeed Inference: Multi-GPU inference with customized inference kernels and quantization support
March 15, 2021

While DeepSpeed supports training advanced large-scale models, using these trained models in the desired application scenarios is still challenging due to three major limitations in existing inference solutions: 1) lack of support for multi-GPU inference to fit large models and meet latency requirements, 2) limited GPU kernel performance when running inference with small batch sizes, and 3) difficulties in exploiting quantization, which includes both quantizing the model to reduce the model size and latency as well as supporting high-performance inference of quantized models without specialized hardware. To handle these challenges, we introduce DeepSpeed Inference, which seamlessly adds high-performance inference support to large models trained in DeepSpeed with three key features: inference-adapted parallelism for multi-GPU inference, inference-optimized kernels tuned for small batch sizes, and flexible support for quantize-aware training and inference kernels for quantized models.

Multi-GPU Inference with Adaptive Parallelism

Parallelism is an effective approach to fit large models and reduce per-device memory consumption for both training and inference. However, simply applying the training parallelism choices and degrees to inference does not work well. The model-parallel (MP) and pipeline-parallel (PP) configuration is normally set during model training, apart from the data parallelism (DP), based on the memory footprint, the computation style, and the resource budget. On one hand, inference computation intrinsically requires less memory, so it can afford a larger partition per device, which helps reduce the degree of parallelism needed for model deployment. On the other hand, optimizing latency or meeting latency requirements is often a first-class citizen in inference, while training optimizes throughput. To obtain the desired latency, DeepSpeed Inference automatically adapts MP as an effective approach to reduce model latency, and its parallelism degree is often determined first. With MP, we can split the model and parallelize computational operations across multiple devices (GPUs) to reduce latency, but it reduces computation granularity and increases communication, which may hurt throughput. Once the latency target has been met, DeepSpeed can apply pipeline parallelism to maximize the throughput. Overall, DeepSpeed Inference supports flexible adaptation of both the parallelism approach and degree from training to inference, minimizing latency while saving deployment costs.

Customized Inference Kernels for Boosted Compute Efficiency of Transformer Blocks

To achieve high compute efficiency, DeepSpeed Inference offers inference kernels tailored for Transformer blocks through operator fusion, taking model parallelism for multi-GPU into account. The main difference between our kernel-fusion scheme and similar approaches is that we not only fuse element-wise operations (such as bias-add, residual, and activation function), but also merge the general matrix multiply (GeMM) operations with other operations. To do this, we design an efficient implementation for the vector-matrix or skinny matrix-matrix multiplication that allows us to fuse more operations at the reduction boundary of GeMM operations.
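To give a flavor of the element-wise side of this kind of fusion, here is an illustrative CUDA sketch — not DeepSpeed's actual kernel, and the fused_bias_gelu name is hypothetical — in which a bias-add and a GeLU activation that would otherwise be two separate kernels are combined into one pass over a GeMM output:

// Illustrative fused bias-add + GeLU over a GeMM output of shape (rows, cols).
// One global-memory read and one write per element instead of two of each.
__global__ void fused_bias_gelu(float *out, const float *bias, int rows, int cols)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < rows * cols) {
        float v = out[idx] + bias[idx % cols];   // bias-add
        // tanh approximation of GeLU
        float g = 0.5f * v * (1.0f + tanhf(0.7978845608f * (v + 0.044715f * v * v * v)));
        out[idx] = g;
    }
}

Fusing the GeMM itself with such operations, as described in the next section, goes further: the element-wise work is applied while the GeMM's partial results are still in registers or shared memory, before they are ever written back to global memory.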
Kernel-Fusion We take two main policies for fusing operations: 1) keeping the access-pattern of inputs and outputs intact throughout the sequence of operations fused together; 2) fusing operations at each all-reduce boundary. The first policy ensures that different thread-blocks won’t encounter transferring data between Streaming-Multiprocessors (SMs). This is due to no straight-forward communication among SMs other than using the main memory which adds the block-synching overhead because of non-deterministic behavior of memory access. The reason behind the second policy is that we cannot continue the execution unless the partial results are reduced among the model-parallel GPUs. Figure 1: Transformer Layer with Megatron-style model-parallelism all-reduce components. The figure illustrates the parts of layer fused together with broken lines (width of line shows the fusion depth). Figure 1 shows the different components of a Transformer layer, and the groups of operations considered for fusion in our inference optimization. We also consider the NVIDIA Megatron-LM style of parallelism that partitions attention (Attn) and feed-forward (FF) blocks across multiple GPUs. Thus, we include the two all-reduce operations that reduce the results among parallel GPUs after Attn and FF blocks. As Figure 1 shows, we fuse the operations inside a Transformer layer at four main regions: To fuse these operations, we exploit shared-memory as an intermediate cache for transferring data between reduction operations used in layer-norm and GeMM, and the element-wise operations. Moreover, we use the warp-level instructions to communicate data between threads when reducing partial computations. In addition, we use a new schedule for GeMM operations, which allows for fusing as many operations as needed for the third kernel-fusion. We also combine the GeMMs for the attention computation in the second kernel-fusion, by using an implicit matrix transformation in order to reduce the memory pressure. Compared to the unfused computation style using cuBLAS GeMM, we improve the performance by 1.5x, 2.9x. 3x, and 1.2x for all these kernel-fusions, respectively. Seamless pipeline from training to inference with automatic kernel-injection To run the model in Inference mode, DeepSpeed simply requires the location of the model checkpoints and the desired parallelism configuration, i.e., MP/PP degree. DeepSpeed Inference kernels can also be enabled for many well-known model architectures such as HuggingFace (Bert and GPT-2) or Megatron GPT-based models using a pre-defined policy map that maps the original parameters to the parameters in the inference kernels. For other transformer-based models, user can specify their own policy map. Note that DS-Inference can run independent of the training pipeline as long as it receives all model checkpoints, and the DeepSpeed Transformer kernels for inference can be injected into any Transformer model if the right mapping policy is defined. For more information on how to enable Transformer inference kernel as well as specifying parallelism, please refer to out inference tutorial. Flexible quantization support To further reduce the inference cost for large-scale models, we created the DeepSpeed Quantization Toolkit, supporting flexible quantize-aware training and high-performance kernels for quantized inference. For training, we introduce a novel approach called Mixture of Quantization (MoQ), which is inspired by mixed-precision training while seamlessly applying quantization. 
With MoQ, we can control the precision of the model by simulating the impact of quantization when updating the parameters at each step of training. Moreover, it supports flexible quantization policies and schedules: we find that by dynamically adjusting the number of quantization bits during training, the final quantized model provides higher accuracy under the same compression ratio. To adapt to different tasks, MoQ can also leverage the second-order information of models to detect their sensitivity to precision and adjust the quantization schedule and target accordingly. To maximize the performance gains from the quantized model, we provide inference kernels tailored for quantized models that reduce latency by optimizing data movement but do not require specialized hardware. Finally, our toolkit does not require any code changes on the client side, making it easy to use.

Performance results

Boosting throughput and reducing inference cost. Figure 3 shows the inference throughput per GPU for the three model sizes corresponding to the three Transformer networks, GPT-2, Turing-NLG, and GPT-3. DeepSpeed Inference increases per-GPU throughput by 2 to 4 times when using the same FP16 precision as the baseline. By enabling quantization, we boost throughput further. We reach a throughput improvement of 3x for GPT-2, 5x for Turing-NLG, and 3x for a model that is similar in characteristics and size to GPT-3, which directly translates to a 3–5x inference cost reduction when serving these large models. In addition, we achieve these throughput and cost improvements without compromising latency, as shown in Figure 5.

Figure 3: Inference throughput for different model sizes. DeepSpeed Inference achieves 3x to 5x higher throughput than baseline.

One source of inference cost reduction is reducing the number of GPUs needed to host large models, as shown in Figure 4. The reduction in GPU resources comes from 1) using inference-adapted parallelism, allowing users to adjust the model and pipeline parallelism degree from the trained model checkpoints, and 2) shrinking the model memory footprint by half with INT8 quantization. As shown in this figure, we use 2x fewer GPUs to run inference for the 17B model size by adapting the parallelism. Together with INT8 quantization through DeepSpeed MoQ, we use 4x and 2x fewer GPUs for the 17B and 175B sizes, respectively.

Figure 4: Number of GPUs used for running inference on the different model sizes shown above.

Reducing inference latency. For application scenarios where inference latency is critical, we can increase the model parallelism degree in DeepSpeed Inference to reduce inference latency further. As Figure 5 depicts, we can reduce the latency by 2.3x compared to PyTorch as we increase the model-parallelism size to 4. Furthermore, we can still obtain a large latency improvement with a smaller number of GPUs by adapting the parallelism at inference and using MoQ to quantize the model. We obtain 1.3x and 1.9x speedups while using 4x and 2x lower resources than the baseline, respectively.
Figure 5. Inference latency for the 17B model using different parallelism configurations to optimize latency.

Updated: March 15, 2021
Maximizing GPU Kernel Optimization in Python with Triton
Author(s): Chaim Rand

TL;DR: Learn how to optimize your Python code for GPU using Triton. This book provides practical tips and techniques for improving performance and unleashing the full potential of GPU kernels. From data management to parallelization, it covers everything you need to know to master GPU kernel optimization in Python.

Disclaimer: This post has been created automatically using generative AI, including DALL-E, Gemini, OpenAI and others. Please take its contents with a grain of salt.

Introduction to Triton and GPU Kernel Optimization
In recent years, the use of graphics processing units (GPUs) has become increasingly popular in the field of data analysis and scientific computing. These powerful processors are capable of performing complex calculations and handling large datasets at high speed. However, harnessing the full potential of GPUs requires specialized knowledge and skills in optimization techniques. This is where Triton comes in: a tool for GPU kernel optimization in Python.

Understanding Triton and Its Capabilities
Triton is an open-source library developed by OpenAI that allows users to write high-performance GPU kernels in Python. It provides a simple and intuitive interface for writing code that can be executed on GPUs, without the need for complex and time-consuming low-level programming. With Triton, users can more easily harness the power of GPUs and accelerate their code, making it suitable for tasks such as machine learning, data analysis, and scientific simulations.

The Benefits of Using Triton for GPU Kernel Optimization
One of the main advantages of using Triton for GPU kernel optimization is its ease of use. With its simple and intuitive interface, even users with little or no experience in GPU programming can quickly learn how to write efficient and high-performing code. Additionally, Triton offers a range of built-in functions and optimizations that can significantly speed up the execution of code on GPUs. This not only saves time and effort but also allows users to focus on the logic and algorithms of their code rather than on low-level optimizations.

Mastering GPU Kernel Optimization with Triton
To get the most out of Triton, it is essential to understand its optimization techniques and how to use them effectively. These include techniques such as data layout optimizations, loop unrolling, and memory coalescing, among others. Triton also provides tools for profiling and debugging, which can help identify bottlenecks and optimize code further. By mastering these techniques and tools, users can achieve significant performance gains and better utilize the capabilities of GPUs.

Real-World Applications of Triton in GPU Kernel Optimization
The applications of Triton in GPU kernel optimization are broad. From accelerating machine learning algorithms to speeding up scientific simulations, Triton has been used in a wide range of fields and industries. For example, researchers have used Triton to optimize code for computational fluid dynamics simulations, resulting in a 10x speedup compared to traditional CPU-based code. In the field of finance, Triton has been used to accelerate risk analysis calculations.
With the increasing demand for faster and more powerful computing, understanding and utilizing GPU optimization techniques can be a valuable skill. With Triton, developers can more easily harness the power of GPUs and achieve good results. It is a valuable tool for those looking to maximize their use of GPU technology in Python. Crafted using generative AI from insights found on Towards Data Science.
Optimize TensorFlow GPU performance with the TensorFlow Profiler Overview This guide will show you how to use the TensorFlow Profiler with TensorBoard to gain insight into and get the maximum performance out of your GPUs, and debug when one or more of your GPUs are underutilized. If you are new to the Profiler: Keep in mind that offloading computations to GPU may not always be beneficial, particularly for small models. There can be overhead due to: Performance optimization workflow This guide outlines how to debug performance issues starting with a single GPU, then moving to a single host with multiple GPUs. It is recommended to debug performance issues in the following order: For example, if you are using a TensorFlow distribution strategy to train a model on a single host with multiple GPUs and notice suboptimal GPU utilization, you should first optimize and debug the performance for one GPU before debugging the multi-GPU system. As a baseline for getting performant code on GPUs, this guide assumes you are already using tf.function. The Keras Model.compile and Model.fit APIs will utilize tf.function automatically under the hood. When writing a custom training loop with tf.GradientTape, refer to the Better performance with tf.function on how to enable tf.functions. The next sections discuss suggested approaches for each of the scenarios above to help identify and fix performance bottlenecks. 1. Optimize the performance on one GPU In an ideal case, your program should have high GPU utilization, minimal CPU (the host) to GPU (the device) communication, and no overhead from the input pipeline. The first step in analyzing the performance is to get a profile for a model running with one GPU. TensorBoard's Profiler overview page—which shows a top level view of how your model performed during a profile run—can provide an idea of how far away your program is from the ideal scenario. The key numbers to pay attention to the overview page are: Achieving optimal performance means maximizing these numbers in all three cases. To get an in-depth understanding of your program, you will need to be familiar with TensorBoard's Profiler trace viewer. The sections below show some common trace viewer patterns that you should look for when diagnosing performance bottlenecks. Below is an image of a model trace view running on one GPU. From the TensorFlow Name Scope and TensorFlow Ops sections, you can identify different parts of the model, like the forward pass, the loss function, backward pass/gradient calculation, and the optimizer weight update. You can also have the ops running on the GPU next to each Stream, which refer to CUDA streams. Each stream is used for specific tasks. In this trace, Stream#118 is used to launch compute kernels and device-to-device copies. Stream#119 is used for host-to-device copy and Stream#120 for device to host copy. The trace below shows common characteristics of a performant model. For example, the GPU compute timeline (Stream#118) looks "busy" with very few gaps. There are minimal copies from host to device (Stream #119) and from device to host (Stream #120), as well as minimal gaps between steps. When you run the Profiler for your program, you may not be able to identify these ideal characteristics in your trace view. The rest of this guide covers common scenarios and how to fix them. 1. Debug the input pipeline The first step in GPU performance debugging is to determine if your program is input-bound. 
The easiest way to figure this out is to use the Profiler’s Input-pipeline analyzer, on TensorBoard, which provides an overview of time spent in the input pipeline. You can take the following potential actions if your input-pipeline contributes significantly to step time: In addition, refer to the best practices for optimizing the input data pipeline. 2. Debug the performance of one GPU There are several factors that can contribute to low GPU utilization. Below are some scenarios commonly observed when looking at the trace viewer and potential solutions. 1. Analyze gaps between steps A common observation when your program is not running optimally is gaps between training steps. In the image of the trace view below, there is a large gap between steps 8 and 9, meaning that the GPU is idle during that time. If your trace viewer shows large gaps between steps, this could be an indication that your program is input bound. In that case you should refer to the previous section on debugging your input pipeline if you have not already done so. However, even with an optimized input pipeline, you can still have gaps between the end of one step and the start of another due to CPU thread contention. tf.data makes use of background threads to parallelize pipeline processing. These threads may interfere with GPU host-side activity that happens at the beginning of each step, such as copying data or scheduling GPU operations. If you notice large gaps on the host side, which schedules these ops on the GPU, you can set the environment variable TF_GPU_THREAD_MODE=gpu_private. This ensures that GPU kernels are launched from their own dedicated threads, and don't get queued behind tf.data work. Gaps between steps can also be caused by metric calculations, Keras callbacks, or ops outside of tf.function that run on the host. These ops don’t have as good performance as the ops inside a TensorFlow graph. Additionally, some of these ops run on the CPU and copy tensors back and forth from the GPU. If after optimizing your input pipeline you still notice gaps between steps in the trace viewer, you should look at the model code between steps and check if disabling callbacks/metrics improves performance. Some details of these ops are also on the trace viewer (both device and host side).The recommendation in this scenario is to amortize the overhead of these ops by executing them after a fixed number of steps instead of every step. When using the Model.compile method in the tf.keras API, setting the steps_per_execution flag does this automatically. For custom training loops, use tf.while_loop. 2. Achieve higher device utilization 1. Small GPU kernels and host kernel launch delays The host enqueues kernels to be run on the GPU, but there is a latency (around 20-40 μs) involved before kernels are actually executed on the GPU. In an ideal case, the host enqueues enough kernels on the GPU such that the GPU spends most of its time executing, rather than waiting on the host to enqueue more kernels. The Profiler's overview page on TensorBoard shows how much time the GPU was idle due to waiting on the host to launch kernels. In the image below, the GPU is idle for about 10% of the step time waiting on kernels to be launched. The trace viewer for this same program shows small gaps between kernels where the host is busy launching kernels on the GPU. By launching a lot of small ops on the GPU (like a scalar add, for example), the host might not keep up with the GPU. 
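To get a feel for how launch latency dominates tiny kernels, the sketch below times many launches of a trivially small CUDA kernel. The kernel body, launch count, and sizes are made up for illustration; only the idea (per-launch overhead swamps a near-zero amount of work) is taken from the text above.

  #include <cstdio>
  #include <cuda_runtime.h>

  // A deliberately tiny kernel: one scalar add, so launch overhead dominates.
  __global__ void scalar_add(float *x) { *x += 1.0f; }

  int main() {
      float *d_x;
      cudaMalloc(&d_x, sizeof(float));
      cudaMemset(d_x, 0, sizeof(float));

      const int n_launches = 10000;
      cudaEvent_t start, stop;
      cudaEventCreate(&start);
      cudaEventCreate(&stop);

      cudaEventRecord(start);
      for (int i = 0; i < n_launches; ++i)
          scalar_add<<<1, 1>>>(d_x);   // each launch does almost no work
      cudaEventRecord(stop);
      cudaEventSynchronize(stop);

      float ms = 0.0f;
      cudaEventElapsedTime(&ms, start, stop);
      // With kernels this small, the average per-launch time is essentially
      // launch/queueing overhead rather than compute.
      printf("avg time per tiny launch: %.2f us\n", 1000.0f * ms / n_launches);

      cudaFree(d_x);
      return 0;
  }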
The TensorFlow Stats tool in TensorBoard for the same Profile shows 126,224 Mul operations taking 2.77 seconds. Thus, each kernel is about 21.9 μs, which is very small (around the same time as launch latency) and can potentially result in host kernel launch delays. If your trace viewer shows many small gaps between ops on the GPU like in the image above, you can: 2. TensorFlow op placement The Profiler overview page shows you the percentage of ops placed on the host vs. the device (you can also verify the placement of specific ops by looking at the trace viewer. Like in the image below, you want the percentage of ops on the host to be very small compared to the device. Ideally, most of the compute intensive ops should be placed on the GPU. To find out which devices the operations and tensors in your model are assigned to, set tf.debugging.set_log_device_placement(True) as the first statement of your program. Note that in some cases, even if you specify an op to be placed on a particular device, its implementation might override this condition (example:tf.unique). Even for single GPU training, specifying a distribution strategy, such as tf.distribute.OneDeviceStrategy, can result in more deterministic placement of ops on your device. One reason for having the majority of ops placed on the GPU is to prevent excessive memory copies between the host and the device (memory copies for model input/output data between host and device are expected). An example of excessive copying is demonstrated in the trace view below on GPU streams #167, #168, and #169. These copies can sometimes hurt the performance if they block GPU kernels from executing. Memory copy operations in the trace viewer have more information about the ops that are the source of these copied tensors, but it might not always be easy to associate a memCopy with an op. In these cases, it is helpful to look at the ops nearby to check if the memory copy happens at the same location in every step. 3. More efficient kernels on GPUs Once your program's GPU utilization is acceptable, the next step is to look into increasing the efficiency of the GPU kernels by utilizing Tensor Cores or fusing ops. 1. Utilize Tensor Cores Modern NVIDIA® GPUs have specialized Tensor Cores that can significantly improve the performance of eligible kernels. You can use TensorBoard's GPU kernel stats to visualize which GPU kernels are Tensor Core-eligible, and which kernels are using Tensor Cores. Enabling fp16 (see Enabling Mixed Precision section below) is one way to make your program’s General Matrix Multiply (GEMM) kernels (matmul ops) utilize the Tensor Core. GPU kernels use the Tensor Cores efficiently when the precision is fp16 and input/output tensor dimensions are divisible by 8 or 16 (for int8). For other detailed recommendations on how to make kernels efficient for GPUs, refer to the NVIDIA® deep learning performance guide. 2. Fuse ops Use tf.function(jit_compile=True) to fuse smaller ops to form bigger kernels leading to significant performance gains. To learn more, refer to the XLA guide. 3. Enable mixed precision and XLA After following the above steps, enabling mixed precision and XLA are two optional steps you can take to improve performance further. The suggested approach is to enable them one by one and verify that the performance benefits are as expected. 1. Enable mixed precision The TensorFlow Mixed precision guide shows how to enable fp16 precision on GPUs. 
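Since Tensor Core eligibility, as noted above, depends on GEMM dimensions being divisible by 8 (or 16 for int8), one simple mitigation is to pad tensor dimensions up front. A minimal host-side sketch; the helper name and the example sizes are illustrative assumptions, not part of any library API.

  #include <cstdio>

  // Round a dimension up to the next multiple of m (e.g. 8 for fp16 GEMMs,
  // 16 for int8), so the padded matmul can hit Tensor Core kernels.
  int round_up(int n, int m) { return ((n + m - 1) / m) * m; }

  int main() {
      int batch = 527, hidden = 1000;                  // assumed original sizes
      printf("pad %d -> %d, %d -> %d\n",
             batch,  round_up(batch, 8),               // 527 -> 528
             hidden, round_up(hidden, 8));             // 1000 is already a multiple of 8
      return 0;
  }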
Enable AMP on NVIDIA® GPUs to use Tensor Cores and realize up to 3x overall speedups when compared to using just fp32 (float32) precision on Volta and newer GPU architectures. Make sure that matrix/tensor dimensions satisfy requirements for calling kernels that use Tensor Cores. GPU kernels use the Tensor Cores efficiently when the precision is fp16 and input/output dimensions are divisible by 8 or 16 (for int8). Note that with cuDNN v7.6.3 and later, convolution dimensions will automatically be padded where necessary to leverage Tensor Cores. Follow the best practices below to maximize the performance benefits of fp16 precision. 1. Use optimal fp16 kernels With fp16 enabled, your program’s matrix multiplications (GEMM) kernels, should use the corresponding fp16 version that utilizes the Tensor Cores. However, in some cases, this does not happen and you do not experience the expected speedup from enabling fp16, as your program falls back to the inefficient implementation instead. The GPU kernel stats page shows which ops are Tensor Core eligible and which kernels are actually using the efficient Tensor Core. The NVIDIA® guide on deep learning performance contains additional suggestions on how to leverage Tensor Cores. Additionally, the benefits of using fp16 will also show in kernels that were previously memory bound, as now the ops will take half the time. 2. Dynamic vs. static loss scaling Loss scaling is necessary when using fp16 to prevent underflow due to low precision. There are two types of loss scaling, dynamic and static, both of which are explained in greater detail in the Mixed Precision guide. You can use the mixed_float16 policy to automatically enable loss scaling within the Keras optimizer. When trying to optimize performance, it is important to remember that dynamic loss scaling can introduce additional conditional ops that run on the host, and lead to gaps that will be visible between steps in the trace viewer. On the other hand, static loss scaling does not have such overheads and can be a better option in terms of performance with the catch that you need to specify the correct static-loss scale value. 2. Enable XLA with tf.function(jit_compile=True) or auto-clustering As a final step in getting the best performance with a single GPU, you can experiment with enabling XLA, which will fuse ops and lead to better device utilization and a lower memory footprint. For details on how to enable XLA in your program with tf.function(jit_compile=True) or auto-clustering, refer to the XLA guide. You can set the global JIT level to -1 (off), 1, or 2. A higher level is more aggressive and may reduce parallelism and use more memory. Set the value to 1 if you have memory restrictions. Note that XLA does not perform well for models with variable input tensor shapes as the XLA compiler would have to keep compiling kernels whenever it encounters new shapes. 2. Optimize the performance on the multi-GPU single host The tf.distribute.MirroredStrategy API can be used to scale model training from one GPU to multiple GPUs on a single host. (To learn more about how to do distributed training with TensorFlow, refer to the Distributed training with TensorFlow, Use a GPU, and Use TPUs guides and the Distributed training with Keras tutorial.) Although the transition from one GPU to multiple GPUs should ideally be scalable out of the box, you can sometimes encounter performance issues. 
When going from training with a single GPU to multiple GPUs on the same host, ideally you should experience the performance scaling with only the additional overhead of gradient communication and increased host thread utilization. Because of this overhead, you will not have an exact 2x speedup if you move from 1 to 2 GPUs, for example. The trace view below shows an example of the extra communication overhead when training on multiple GPUs: there is some overhead to concatenate the gradients, communicate them across replicas, and split them before doing the weight update. The following checklist will help you achieve better performance in the multi-GPU scenario.

1. Optimize gradient AllReduce
When training with a synchronous strategy, each device receives a portion of the input data. After computing the forward and backward passes through the model, the gradients calculated on each device need to be aggregated and reduced. This gradient AllReduce happens after the gradient calculation on each device, and before the optimizer updates the model weights. Each GPU first concatenates the gradients across the model layers, communicates them across GPUs using tf.distribute.CrossDeviceOps (tf.distribute.NcclAllReduce is the default), and then returns the gradients after reduction per layer. The optimizer will use these reduced gradients to update the weights of your model. Ideally, this process should happen at the same time on all GPUs to prevent any overheads. The time to AllReduce should be approximately

(number of parameters * 4 bytes) / (communication bandwidth)

This calculation is useful as a quick check to understand whether the performance of a distributed training job is as expected, or whether you need to do further performance debugging. You can get the number of parameters in your model from Model.summary. Note that each model parameter is 4 bytes in size, since TensorFlow uses fp32 (float32) to communicate gradients; even when you have fp16 enabled, NCCL AllReduce utilizes fp32 parameters. To get the benefits of scaling, the step time needs to be much higher than these overheads. One way to achieve this is to use a higher batch size, as batch size affects step time but does not impact the communication overhead.

2. GPU host thread contention
When running multiple GPUs, the CPU's job is to keep all of the devices busy by efficiently launching GPU kernels across the devices. However, when there are a lot of independent operations that the CPU can schedule on one GPU, the CPU can decide to use a lot of its host threads to keep that GPU busy, and then launch kernels on another GPU in a non-deterministic order. This can cause a skew or negative scaling, which can hurt performance. The trace viewer below shows the overhead when the CPU staggers GPU kernel launches inefficiently: GPU1 is idle and only starts running ops after GPU2 has started. The trace view for the host shows that the host is launching kernels on GPU2 before launching them on GPU1 (note that the tf_Compute* ops below are not indicative of CPU threads). If you experience this kind of staggering of GPU kernels in your program's trace view, the recommended action is to give each GPU its own dedicated launch threads, for example via the TF_GPU_THREAD_MODE=gpu_private setting described earlier.
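Returning to the AllReduce estimate above, here is a small worked sketch. The parameter count and communication bandwidth are assumed numbers for illustration, not figures from the guide.

  #include <cstdio>

  int main() {
      // time_allreduce ≈ (number of parameters * 4 bytes) / (communication bandwidth)
      double n_params  = 25e6;                   // assumed 25M-parameter model
      double bytes     = n_params * 4.0;         // fp32 gradients
      double bandwidth = 50e9;                   // assumed ~50 GB/s effective all-reduce bandwidth

      double t_allreduce = bytes / bandwidth;    // seconds
      printf("estimated AllReduce time: %.2f ms per step\n", 1e3 * t_allreduce);
      // For scaling to pay off, the step time should be much larger than this.
      return 0;
  }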
CISC 879: Software Support for Multicore Architectures
Yu, Xuan, Dept of Computer & Information Sciences, University of Delaware
Program Optimization Study on a 128-Core GPU
Shane Ryoo, Christopher I. Rodrigues, Sam S. Stone, Sara S. Baghsorkhi, Sain-Zee Ueng, and Wen-mei W. Hwu

General Idea
Good news: improving programmability and generality on the GPU makes it possible to perform a wide variety of parallelization optimizations. The problem is how to choose and control these optimizations properly. The space of possible optimization combinations is very large, which makes it tedious to explore, and limited local resources and global memory bandwidth make performance sensitive to even small changes in code, and hard to predict. The talk presents a study that examines a broad space of optimizations performed on several applications, finds configurations up to 74% faster than previously thought optimal, explains why this happens on the GPU, and discusses some principles and techniques for finding near-optimal configurations.

Organization
Architecture overview (CUDA): introduction of the execution hardware and threading model. Optimization space search: discussion of the space-search process and the classifications and characteristics of the program optimizations. Experiments: results of the search for several applications (matrix multiplication, magnetic resonance imaging, sums of absolute difference). Conclusion.

Architecture
General programming and compilation process: the GPU is treated as a coprocessor that executes data-parallel kernel functions. The user supplies both host (CPU) and kernel (GPU) code; the codes are separated and compiled by NVIDIA's compiler, and the host code transfers data to the GPU and initiates the kernel code via API calls. The GPU has 16 streaming multiprocessors (SMs), each containing eight streaming processors (SPs), or cores. Each core executes a single thread's instruction in SIMD fashion and has a multiply-add arithmetic unit; each SM also has two special functional units (SFUs) for reciprocal square root, sine, and cosine. All units are fully pipelined.

The programming model has a three-level hierarchy: grid, block, thread. Each kernel creates a single grid, and a grid consists of many thread blocks (up to 512 threads per block). Threads in a block are organized into warps of 32 threads. Each warp executes in SIMD fashion, issuing over four cycles on the eight SPs of an SM. When one warp stalls, the SM switches to another warp.

Architectural Interactions
Hardware constraints interact with each other, making it difficult to accurately predict the effects of one or more compiler optimizations in CUDA. Consider an application that uses 256 threads per block, 10 registers per thread, and 4KB of shared memory per thread block. It can schedule 3 thread blocks and 768 threads on each SM. An optimization that increases each thread's register usage from 10 to 11 (an increase of only 10%) will decrease the number of blocks per SM from 3 to 2.
This decreases the number of threads on an SM by 33%. Why? 768 * 11 = 8448 > 8192, the number of registers available per SM. By contrast, an optimization that increases each thread block's shared memory usage by 1KB (an increase of 25%) does not decrease the number of blocks per SM. Clearly, the optimization space is inherently non-linear.

Optimization space search
The basic strategy for good performance is to reduce the dynamic instruction count while maintaining high SP occupancy. There are four categories of machine-level behavior to optimize: thread-level work redistribution, instruction count reduction, intra-thread parallelism, and resource balancing.

Example: matrix multiplication. The kernel is tiled so that each thread block computes a square 16-by-16 tile of the output matrix. tx and ty are each thread's coordinates in the thread block; indexA, indexB, and indexC are positions in the matrices. Threads in a block cooperatively load parts of the input matrices into shared memory, amortizing the cost of global load latency. Using larger tiles enhances the benefit of data sharing, but reduces scheduling flexibility, since a greater fraction of the threads on an SM must wait at barrier synchronizations.

The four categories in more detail:
Thread-level work redistribution: each thread computes two matrix elements instead of one, which presents opportunities for eliminating redundant instructions previously distributed across threads.
Instruction count reduction: traditional compiler optimizations such as common subexpression elimination, loop-invariant code removal, and loop unrolling.
Intra-thread parallelism: a developer can unroll loops to facilitate code scheduling in the compiler, or explicitly insert pre-fetching code.
Resource balancing: trade certain resource usages, some of which may be counterintuitive, to produce a better-performing application.
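The register/blocks-per-SM arithmetic above can be captured in a small occupancy estimate. A minimal sketch, assuming the G80-era limits discussed on these slides (8192 registers, 16 KB of shared memory, 768 threads and at most 8 blocks per SM); the function name, the block limit, and the granularity-free allocation model are illustrative assumptions, not taken from the slides.

  #include <algorithm>
  #include <cstdio>

  // Estimate how many thread blocks fit on one SM, limited by registers,
  // shared memory, resident threads, and the hardware block limit.
  int blocks_per_sm(int threads_per_block, int regs_per_thread, int smem_per_block_bytes) {
      const int kRegsPerSM    = 8192;        // assumed register file size per SM
      const int kSmemPerSM    = 16 * 1024;   // assumed shared memory per SM
      const int kThreadsPerSM = 768;         // assumed max resident threads per SM
      const int kBlocksPerSM  = 8;           // assumed max resident blocks per SM

      int by_regs    = kRegsPerSM    / (threads_per_block * regs_per_thread);
      int by_smem    = kSmemPerSM    / smem_per_block_bytes;
      int by_threads = kThreadsPerSM / threads_per_block;
      return std::min(std::min(by_regs, by_smem), std::min(by_threads, kBlocksPerSM));
  }

  int main() {
      // The slides' example: 256 threads, 10 regs/thread, 4KB smem -> 3 blocks;
      // bumping registers to 11 drops it to 2 blocks (a 33% loss in threads).
      printf("%d\n", blocks_per_sm(256, 10, 4 * 1024));  // prints 3
      printf("%d\n", blocks_per_sm(256, 11, 4 * 1024));  // prints 2
      return 0;
  }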
An example of resource balancing is using shared memory to buffer data for reuse, regardless of whether it is shared with other threads. Another example is proactive register spilling by the programmer: by reducing register usage, often a critical resource, more thread blocks can be assigned to each SM.

Experiments
Setup: the GPU experiments used a host with an AMD Opteron 248 at 2.2 GHz with 1 GB main memory; the CPU versions ran on an Intel Core2 Extreme Quad at 2.66 GHz with 4 GB main memory. For matrix multiplication, the study varied tiling sizes, tiling dimensions, pre-fetching, and unroll factors. The general trend is that larger tile sizes and more work per thread give higher performance. The configuration initially thought optimal (1x1 tiling, 16x16 tiles, complete unrolling, pre-fetching) reached 87.5 GFLOPS; the actual peak-performing configuration (1x1 tiling, 16x16 tiles, complete unrolling, no pre-fetching) reached 91.3 GFLOPS, an improvement of 4.2%. Increasing the tiling dimensions gives stable performance and a slight advantage on average, but does not result in peak performance, because of the negative effects of unrolling by more than a factor of two for the higher tiling dimensions. In summary: larger thread blocks are good due to data sharing, and complete unrolling is often good because it reduces branch calculations; however, the runtime's scheduling may increase register pressure such that the number of thread blocks assigned to each SM is reduced.

Another application is magnetic resonance imaging (MRI) reconstruction, which reconstructs high-quality images from non-Cartesian trajectories; the computation required is substantial. The performance-sensitive parameters are the loop unrolling factor, the number of threads per block (tpb), and the number of scan points processed by each grid. Execution time is shortest for an unrolling factor of 8, and a factor of 4 is often worse than either 2 or 8. The reason lies in register usage: 12 registers when unrolled, 2 thread blocks per SM, 6.17 s; 12 registers with an unrolling factor of 2, 5.52 s; 24 registers with an unrolling factor of 4, which only admits one block per SM, 5.89 s; 30 registers with an unrolling factor of 8, 1 block per SM, 4.64 s. There is also a smaller chance of conflicts when fewer thread blocks run on an SM. To summarize the MRI results: performance was relatively insensitive to block size, and an unrolling factor of 8 provided the highest performance.

Conclusions
Gradual changes in optimization parameters can have wildly varying effects on an application. When the local resources used by a thread increase to the point where fewer thread blocks can be assigned to each SM, overall performance drops. The authors believe that scheduling should be better controlled, possibly by the compiler rather than the runtime.
Acceleration of BLAST Hydra Code on GPU
Tingxing (Tim) Dong, Lawrence Livermore National Laboratory, September 9th, 2011
Mentors: Tzanio Kolev, Robert Rieben
This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

Outline: Introduction of BLAST, Motivation, Details, Optimization and Restriction, Examples and Results, Conclusion.

Introduction
BLAST solves the equations of compressible hydrodynamics with the Finite Element Method (FEM), based on a Lagrangian frame (moving mesh). It is a C++ code, parallelized by MPI. BLAST's features: curvilinear zone geometries, higher-order field representations, exact discrete energy conservation by construction, reduction to classical SGH under simplifying assumptions, support for 2D/3D meshes, and multiple options for basis functions and quadrature order (Q1Q0, Q2Q1, and Q3Q2 cases for velocity/position and density/energy/pressure).

Euler's equations in a Lagrangian frame:
  Momentum conservation: ρ dv/dt = ∇·σ
  Mass conservation: (1/ρ) dρ/dt = −∇·v
  Energy conservation: ρ de/dt = σ : ∇v
  Equation of state: p = EOS(e, ρ)
  Equation of motion: dx/dt = v

Semi-discrete finite element method in BLAST:
  Momentum conservation: dv/dt = −M_v⁻¹ F · 1
  Energy conservation: de/dt = M_e⁻¹ Fᵀ · v
  Equation of motion: dx/dt = v
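To make the time-stepping structure concrete, here is a forward-Euler style sketch of how these ODE systems advance the state. BLAST itself uses Runge-Kutta integrators (the profiles below show RK2AvgIntegrator and RK4Integrator), so this is only an illustration of the per-step structure, and the ordering of the sub-updates is an assumption:

  v_{n+1} = v_n − Δt · M_v⁻¹ F(x_n, v_n, e_n) · 1
  e_{n+1} = e_n + Δt · M_e⁻¹ Fᵀ · v_{n+1}
  x_{n+1} = x_n + Δt · v_{n+1}

Each step therefore amounts to one corner-force assembly F plus two mass-matrix solves, which is exactly the work the kernels described later split up.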
Google Profiler of BLAST
Consider three cases of Lagrange hydro problems with different computational workloads:
  2D q2q1: least expensive, most common
  2D q3q2: more expensive, greater robustness and accuracy
  3D q2q1: most expensive
[Call-graph profiles for the three cases; only the headline numbers are recoverable. 2D q2q1: HydroStatePU ComputeCornerForces ≈ 55% of samples and HyprePCG Mult ≈ 37%, with MultABt (≈ 15%), MultAtB (≈ 8%) and hypre_CSRMatrixMatvec (≈ 25%) as the largest leaves. 2D q3q2: ComputeCornerForces ≈ 54%, HyprePCG Mult ≈ 40%, MultABt ≈ 25%, hypre_CSRMatrixMatvec ≈ 32%. 3D q2q1: ComputeCornerForces ≈ 79%, MultABt ≈ 32%, MultAtB ≈ 15%, HyprePCG Mult ≈ 17%, plus dense-matrix eigenvalue/singular-value routines.]
[Chart: GPU vs CPU GFLOP/s and memory bandwidth.]
My background before this work: fairly familiar with the Finite Difference Method (FDM), limited understanding of the Finite Element Method (FEM). The work started June 1st and ended August 13th (2.5 months).
Generalized corner forces on the GPU
In the semi-discrete system above, the matrix F is highly floating-point intensive and thread-independent. F is constructed by two loops: a loop over the zones in the domain (in each processor), and a loop over the quadrature points in each zone, computing the hydro forces associated with that quadrature point. Each point's value is computed absolutely independently, and the cost varies with basis functions, dimension, etc., so F can be arbitrarily expensive.

CUDA kernel 1: loop over quadrature points; compute part of F based on v, e, x (transferred from the CPU) and allocated work space (on the GPU).
CUDA kernel 2: loop over zones; each zone does a matrix-matrix-transpose multiplication and assembles F (which stays on the GPU).
CUDA kernel 3 (in the momentum equation): compute F · 1 and either return the result to the CPU or keep it on the GPU, depending on the CG solver settings.
CUDA kernel 4 (in the energy equation): compute Fᵀ · v based on v (results stay on the GPU).

Mass matrix solve on the GPU
CUDA kernel 5 (in the momentum equation): a custom CG solver (provided by Stan) for M_v⁻¹ F · 1 based on cuBLAS/cuSPARSE, with a diagonal preconditioner (added later).
CUDA kernel 6 (in the energy equation): sparse matrix (CSR) multiplication to solve M_e⁻¹ Fᵀ · v by calling cuSPARSE.
Notice: M_v and M_e⁻¹ are computed once and are read-only thereafter (they stay on the GPU). M_v⁻¹ is dense, so it is not used directly; M_e is a diagonal local dense matrix, so M_e⁻¹ is sparse and can be used directly.

Map to the CUDA thread hierarchy
CUDA kernel 1 (loop over points): each thread maps to one quadrature point; each thread block maps to one or more zones (tunable). In fact the mapping is more flexible: one zone can be split across two thread blocks.
CUDA kernel 2 (loop over zones): each thread block maps to one zone; each block (zone) does an MMᵀ multiplication (ABᵀ = C), with each thread computing one row of the matrix C.
CUDA kernels 3, 4: each thread maps to one zone; each thread block is composed of 32 or 64 threads (tunable).
CUDA kernels 5, 6: call cuBLAS/cuSPARSE/MAGMA library routines.

Kernel 2: ABᵀ = C
Each thread block (zone) does one ABᵀ = C. A and B are not big and can generally fit in shared and constant memory on Fermi. A varies per thread block and is updated in each iteration; B is read-only and the same for every block. Matrices are stored in column-major order, and accesses to global memory are coalesced.
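A minimal sketch of such a per-zone ABᵀ kernel follows. It assumes small per-zone matrices (an M×K A per zone, a single shared N×K B in constant memory, column-major storage) and one thread per output row, as described on the slide; the names, size limits, and launch configuration are illustrative, not taken from the BLAST code.

  #include <cuda_runtime.h>

  // B is read-only and identical for every zone, so it lives in constant memory
  // (this sketch assumes N*K <= 4096 floats).
  __constant__ float c_B[4096];

  // One thread block per zone computes C_zone = A_zone * B^T.
  // A_zone is M x K, B is N x K, C_zone is M x N, all column-major.
  __global__ void zone_abt(const float *A, float *C, int M, int N, int K) {
      extern __shared__ float s_A[];                 // this zone's A tile
      const float *A_zone = A + (size_t)blockIdx.x * M * K;
      float       *C_zone = C + (size_t)blockIdx.x * M * N;

      // Cooperative, coalesced load of A into shared memory.
      for (int i = threadIdx.x; i < M * K; i += blockDim.x)
          s_A[i] = A_zone[i];
      __syncthreads();

      // One thread per row of C, as on the slide.
      for (int row = threadIdx.x; row < M; row += blockDim.x) {
          for (int col = 0; col < N; ++col) {
              float acc = 0.0f;
              for (int k = 0; k < K; ++k)
                  // A(row,k) = s_A[k*M+row], B(col,k) = c_B[k*N+col]
                  acc += s_A[k * M + row] * c_B[k * N + col];
              C_zone[col * M + row] = acc;           // C(row,col)
          }
      }
  }

  // Host side (illustrative): copy B once, then launch one block per zone.
  //   cudaMemcpyToSymbol(c_B, h_B, N * K * sizeof(float));
  //   zone_abt<<<numZones, 128, M * K * sizeof(float)>>>(d_A, d_C, M, N, K);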
Technical details: memory management
CUDA code can be integrated into the previous C++ code very well: GPU memory is malloc'ed in a C++ constructor, freed in the C++ destructor, and a new CUDA corner-force method is added.

Constructor and destructor:
  // in .hpp files: declare the variables
  double *d_vec;

  // in .cu files
  HydroState::CUDA_Constructor   // called by HydroState()
  {
      // malloc variables on the GPU and copy initialized,
      // read-only data from CPU to GPU
      cudaMalloc(&d_vec);
      cudaMemcpy(ToDevice);
  }
  HydroState::CUDA_Destructor    // called by ~HydroState()
  {
      cudaFree(d_vec);
  }

Corner force method:
  // still in .cu files
  HydroState::CUDA_CornerForce
  {
      // copy updated hydro states (v, e, x) to GPU
      cudaMemcpy(ToDevice);
      // compute on GPU
      kernel<<< , >>>(d_vec, ...);
      // copy outputs to CPU
      cudaMemcpy(ToHost);
  }

Technical details: GPU class
This is porting code, not developing algorithms; the goal is to maximize use of the previous C++ code and avoid developing new code. In BLAST almost everything is a class, and CUDA 4.0 supports C++ classes in device code (although not fully).

Class on the CPU:
  class Vector {
  private:
      int size;
      double *data;
  public:
      Vector(int a)
      double *GetData() { return data; }
      void Operation()
      ~Vector()
  };

Class on the GPU:
  class Vector_GPU {
  private:
      int size;
      double *data;
  public:
      __device__ Vector_GPU(int a)
      __device__ double *GetData() { return data; }
      __device__ void Operation()
      __device__ ~Vector_GPU()
  };

An example of using a class in a GPU kernel:
  #define num_threads 32
  #define vector_length 4
  __global__ void kernel(double *d_data)
  {
      Vector_GPU vec(vector_length);
      // threadIdx.x is the thread's id, from 0-31; each thread grabs its own data via pointer
      vec.GetData() = d_data + threadIdx.x * vector_length;
      vec.Operation();
  }
  int main()
  {
      double *d_data;
      // malloc a space in device memory
      cudaMalloc(&d_data, sizeof(double) * num_threads * vector_length);
      // in the kernel, each thread grabs its own portion of data and executes in parallel
      kernel<<<1, num_threads>>>(d_data);
      cudaFree(d_data);
      return 1;
  }

Technical details: transfer of arguments
The hydro state e, v, x (in the form of class objects) needs to be transferred into the CUDA kernel, but C++ class objects cannot be transferred directly like scalars. Instead, the data (pointers) are grabbed from the class objects and stored into double arrays or structs (see the CUDA programming guide 4.0).

Define a struct:
  typedef struct {
      int height;
      int width;
      int chunk;
      int size;
      double *data;
  } d_Matrix;

Transfer the struct as an argument:
  void configureMatrix(d_Matrix &dm)
  {
      // malloc memory
      // grab data from C++ class objects and copy to dm.data
      // initialize height, width, chunk, size
  }
  int main()
  {
      d_Matrix dm;
      configureMatrix(dm, ...);
      kernel<<<, >>>(dm);
  }
Optimization
Use the CUDA profiler to identify the hot spots (profiler screenshots shown later). Constant memory stores static read-only coefficients, like basis functions and weight parameters (in kernels 1-4). Use shared memory (to store A) and constant memory (to store B) to accelerate CUDA kernel 2's ABᵀ = C, whose memory references are O(n³), as hinted by the profiler. Implement a PCG solver that uses cuBLAS/cuSPARSE routines instead of hand-coding one. Hand-code the eigenvalue/eigenvector and SVD routines (by Veselin) in kernel 1 for the very small matrices involved (2x2 in 2D, 3x3 in 3D, specific to this application), since LAPACK cannot be called there as it is in the C++ BLAST.

Restriction
The code was developed on a Tesla C1060 (a local PC); tests run on Fermi. Tesla (compute capability 1.3) does not support dynamic malloc and free inside a kernel, so memory has to be pre-allocated outside the kernel, even for temporary variables; Fermi (2.0) does support dynamic malloc and free inside kernels. Tesla does not support virtual functions, so some code had to be rewritten. The PCG solver and kernel 6 only work on one processor at present. The code is fairly tuned, but not fully optimized for Fermi. In kernel 1 (also the most complicated and expensive one), reads from global memory are uncoalesced: each thread (quadrature point) accesses a small matrix (2x2 or 3x3), although consecutively.

Tests
  q2q1 2D triple-pt: 9 * 2 velocity unknown dofs (degrees of freedom), 4 energy dofs (zones), 16 points per zone.
  q3q2 2D triple-pt: 16 * 2 velocity unknown dofs, 9 energy dofs (zones), 36 points per zone.
  q2q1 3D Sedov wave: 27 * 3 velocity unknown dofs, 8 energy dofs (zones), 64 points per zone.

Performance
2550 lines of CUDA code, written in about 2 months. Hardware pairs: Tesla C1060 with a Xeon E5520 at 2.27 GHz; Tesla C2050 with a Xeon Westmere-EP X5660 (on Edge); Quadro 5000 with a Xeon Westmere-EP X5660. On Edge, the 2D q2q1 triple-pt problem was run with MPI+CUDA, comparing results of 4 MPI ranks against 4 MPI ranks + CUDA.

CPU vs GPU results
2D q3q2 triple-pt (total energy = kinetic energy + internal energy): the CPU run's energy change is -4.04832e-12 at iteration step 38910, t = 2.5; the GPU run's energy change is 2.99486e-09 at iteration step 38748, t = 2.5 (about 3x speedup on the C2050). 3D q2q1 Sedov: the CPU energy change is +1.89570e-13 and the GPU energy change is +1.26013e-11, both at iteration step 848, t = 0.3 (about 4x speedup on the C2050). The CUDA profiler output for the 2D q3q2 triple-pt run is shown in the slides.
Conclusion
The GPU is well suited to computationally heavy kernels, but the floating-point work must be large enough to amortize the penalty of transferring data between CPU and GPU. Optimization is a process of discovery, and the profiler does help to identify the bottlenecks. Use existing libraries instead of coding routines yourself; they are not necessarily the fastest, but they are the most stable.
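The "amortize the transfer" point lends itself to a quick back-of-the-envelope check. A minimal sketch with made-up numbers; the PCIe bandwidth, transfer size, FLOP count, and sustained throughput are assumptions for illustration, not figures from the talk.

  #include <cstdio>

  int main() {
      // Assumed numbers, for illustration only.
      double bytes_transferred = 200e6;   // 200 MB of state copied to/from the GPU per step
      double pcie_bw           = 6e9;     // ~6 GB/s effective PCIe bandwidth
      double kernel_flops      = 50e9;    // 50 GFLOP of work per step
      double gpu_flops_per_s   = 500e9;   // 500 GFLOP/s sustained on the device

      double t_copy    = bytes_transferred / pcie_bw;
      double t_compute = kernel_flops / gpu_flops_per_s;

      // Offloading only pays off if the saved compute time dominates the copy time.
      printf("copy: %.3f s, compute: %.3f s, compute/copy ratio: %.1f\n",
             t_copy, t_compute, t_compute / t_copy);
      return 0;
  }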
Optimizing CUDA kernel with atomics Please help, I have been working on optimizing a CUDA kernel that utilizes atomic adds and comparing performance. I have been able to achieve approximately 2x faster but once the size of the input goes beyond 1025, e.g., 1026, I get the following output: i (0): original (0.000000,0.000000), modified (16416.000000,8208.000000) Atomic and Optimized kernels do not match. Which is incorrect as the modified/optimized CUDA kernel should match the output from the original atomic add CUDA kernel - just faster execution. I am hoping that it is just something stupid simple I am doing wrong. Can anyone help? The code is posted below, and I am using CUDA 12.2 (driver 535.54.03) with A100-SXM4-80GB device: #include <cuda.h> #include <chrono> #include <iostream> #include <stdio.h> #include <string> #include <sstream> #include <vector> #include <stdexcept> #include <cstdlib> #include <cmath> // for easy gpu error checking #define GPU_ERROR_CHECK(ans) do{gpuAssert((ans),__FILE__,__LINE__);}while(0) inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true) { if (code != cudaSuccess) { fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line); printf("\nCUDA KERNEL ERROR: CUDA Kernel reports error: %s\n",cudaGetErrorString(code)); if (abort) exit(code); } } __forceinline__ __host__ __device__ float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; } __forceinline__ __host__ __device__ float length(float3 v) { return sqrtf(dot(v, v)); } __forceinline__ __device__ float2 myKernel(float val) { return make_float2(val, val / 2); } /** * @brief Works for values less than 1026 samples, that is up to and including 1025 samples */ __global__ void optimized_org_kernel(const float3 * __restrict__ pos_a, const float3 * __restrict__ pos_b, const uint32_t input_size, const uint32_t output_size, float2 * __restrict__ result, const int region, const uint32_t first_idx_x, const uint32_t last_idx_x) { // Calculate thread indices uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x; uint32_t stride_x = blockDim.x * gridDim.x; uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y; uint32_t stride_y = blockDim.y * gridDim.y; float3 distance3; float distance; uint32_t output_start, output_end; // Local accumulation variables to reduce the number of atomic operations float2 local_accum = make_float2(0.0f, 0.0f); for (uint32_t x = thridx_x; x < last_idx_x; x += stride_x) { // Calculate the distance between points distance3.x = pos_a[x].x - pos_b[x].x; distance3.y = pos_a[x].y - pos_b[x].y; distance3.z = pos_a[x].z - pos_b[x].z; distance = length(distance3); // Determine the output range for this thread output_start = __fdiv_rz(output_size, 2) + __fdiv_rz(distance, output_size) - region; output_end = output_start + region; // Clamp the values to ensure they stay within bounds output_start = max(0u, output_start); output_end = min(output_end, output_size); for (uint32_t y = thridx_y; y < output_size; y += stride_y) { // Only accumulate within the valid range if (y >= output_start && y < output_end) { float2 lval = myKernel(1.0f); local_accum.x += lval.x; local_accum.y += lval.y; } } } // Write back the accumulated values using atomic operations if (local_accum.x != 0.0f || local_accum.y != 0.0f) { atomicAdd(&result[thridx_y].x, local_accum.x); atomicAdd(&result[thridx_y].y, local_accum.y); } } __global__ void org_kernel(const float3 * pos_a, const float3 * pos_b, const uint32_t input_size, const uint32_t 
output_size, float2 * result, const int region, const uint32_t first_idx_x, const uint32_t last_idx_x) { uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x; uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y; uint32_t stride_x = blockDim.x * gridDim.x; uint32_t stride_y = blockDim.y * gridDim.y; float3 distance3 = make_float3(0.0f, 0.0f, 0.0f); float distance = 0; uint32_t output_start, output_end; for(uint32_t x = thridx_x; x < last_idx_x; x += stride_x){ // distance calcs distance3.x = pos_a[x].x - pos_b[x].x; distance3.y = pos_a[x].y - pos_b[x].y; distance3.z = pos_a[x].z - pos_b[x].z; distance = length(distance3); output_start = __fdiv_rz(output_size, 2) + __fdiv_rz(distance, output_size) - region; output_end = output_start + region; for(uint32_t y = thridx_y; y < output_size; y += stride_y){ if((y < output_end) && (y >= output_start)){ float2 lval = myKernel(1.0f); atomicAdd(&result[y].x, lval.x); atomicAdd(&result[y].y, lval.y); } } } } bool eval_arrays_equal(float2 * d_org, float2 * d_mod, uint32_t n) { if (d_org == nullptr || d_mod == nullptr) { throw std::invalid_argument("Arrays are NULL."); } if (n < 1) { throw std::invalid_argument("Invalid array length, less than 1."); } float2 * h_org; float2 * h_mod; size_t sz = n * sizeof(float2); h_org = (float2*)malloc(sz); h_mod = (float2*)malloc(sz); GPU_ERROR_CHECK(cudaMemcpy(h_org, d_org, sz, cudaMemcpyDeviceToHost)); GPU_ERROR_CHECK(cudaMemcpy(h_mod, d_mod, sz, cudaMemcpyDeviceToHost)); GPU_ERROR_CHECK(cudaDeviceSynchronize()); for (uint32_t i = 0; i < n; ++i) { if (h_org[i].x != h_mod[i].x || h_org[i].y != h_mod[i].y) { printf("\ti (%i): original (%f,%f), modified (%f,%f)\n", i, h_org[i].x, h_org[i].y, h_mod[i].x, h_mod[i].y); return false; } } free(h_org); free(h_mod); // Every element is equal return true; } void printDeviceProperties() { int32_t device; cudaError_t error = cudaGetDevice(&device); if (error != cudaSuccess) { std::cerr << "Failed to get current device: " << cudaGetErrorString(error) << std::endl; return; } cudaDeviceProp deviceProp; error = cudaGetDeviceProperties(&deviceProp, device); if (error != cudaSuccess) { std::cerr << "Failed to get device properties: " << cudaGetErrorString(error) << std::endl; return; } std::cout << "Device " << device << ": \"" << deviceProp.name << "\"" << std::endl; std::cout << " CUDA Capability: " << deviceProp.major << "." 
<< deviceProp.minor << std::endl; std::cout << " Total Global Memory: " << deviceProp.totalGlobalMem / (1024 * 1024) << " MB" << std::endl; std::cout << " Shared Memory per Block: " << deviceProp.sharedMemPerBlock / 1024 << " KB" << std::endl; std::cout << " Registers per Block: " << deviceProp.regsPerBlock << std::endl; std::cout << " Warp Size: " << deviceProp.warpSize << std::endl; std::cout << " Max Threads per Block: " << deviceProp.maxThreadsPerBlock << std::endl; std::cout << " Max Threads Dim: [" << deviceProp.maxThreadsDim[0] << ", " << deviceProp.maxThreadsDim[1] << ", " << deviceProp.maxThreadsDim[2] << "]" << std::endl; std::cout << " Max Grid Size: [" << deviceProp.maxGridSize[0] << ", " << deviceProp.maxGridSize[1] << ", " << deviceProp.maxGridSize[2] << "]" << std::endl; std::cout << " Clock Rate: " << deviceProp.clockRate / 1000 << " MHz" << std::endl; std::cout << " Total Constant Memory: " << deviceProp.totalConstMem / 1024 << " KB" << std::endl; std::cout << " Multiprocessor Count: " << deviceProp.multiProcessorCount << std::endl; std::cout << " Compute Mode: " << deviceProp.computeMode << std::endl; } int main(int argc, char * argv[]) { if (argc != 2) { fprintf(stderr, "\nPass the number of array elements via command line as follows:\n"); fprintf(stderr, "./xTest <num_elems>\n\n"); return EXIT_FAILURE; } // Dimensions const uint32_t BLOCK_WIDTH = 512; dim3 nblks(BLOCK_WIDTH,1,1); dim3 nthreads(1,BLOCK_WIDTH,1); // Retrieve command-line argument uint32_t n_values = static_cast<uint32_t>(std::stoi(argv[1])); uint32_t region = 3; uint32_t n_float3s = n_values; uint32_t float3_sz = n_float3s * sizeof(float3); uint32_t output_sz = n_values * sizeof(float2); // Allocate host & device side float2 *d_out_org; float2 *d_out_mod; GPU_ERROR_CHECK(cudaMalloc(&d_out_org, output_sz)); GPU_ERROR_CHECK(cudaMalloc(&d_out_mod, output_sz)); GPU_ERROR_CHECK(cudaMemset(d_out_org, 0, output_sz)); GPU_ERROR_CHECK(cudaMemset(d_out_mod, 0, output_sz)); // Float3s float3 *pos_a, *pos_b; float3 *d_pos_a, *d_pos_b; pos_a = (float3*)malloc(float3_sz); pos_b = (float3*)malloc(float3_sz); for(size_t p = 0; p < n_float3s; ++p){ pos_a[p] = make_float3(1,1,1); pos_b[p] = make_float3(0.1,0.1,0.1); } GPU_ERROR_CHECK(cudaMalloc(&d_pos_a, float3_sz)); GPU_ERROR_CHECK(cudaMalloc(&d_pos_b, float3_sz)); GPU_ERROR_CHECK(cudaMemcpy(d_pos_a, pos_a, float3_sz, cudaMemcpyHostToDevice)); GPU_ERROR_CHECK(cudaMemcpy(d_pos_b, pos_b, float3_sz, cudaMemcpyHostToDevice)); GPU_ERROR_CHECK(cudaDeviceSynchronize()); float total_time_org = 0.0f; float total_time_mod = 0.0f; uint32_t first_idx_x = 0; uint32_t last_idx_x = n_values; const uint32_t n_passes = 16; for (uint32_t pass = 0; pass < n_passes; ++pass) { auto start = std::chrono::high_resolution_clock::now(); // Original atomic add kernel org_kernel<<<nblks,nthreads,0,0>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_org, region, first_idx_x, last_idx_x); GPU_ERROR_CHECK(cudaDeviceSynchronize()); auto stop = std::chrono::high_resolution_clock::now(); total_time_org += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()); start = std::chrono::high_resolution_clock::now(); // Optimized atomic add kernel optimized_org_kernel<<<nblks,nthreads,0,0>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_mod, region, first_idx_x, last_idx_x); GPU_ERROR_CHECK(cudaDeviceSynchronize()); stop = std::chrono::high_resolution_clock::now(); total_time_mod += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()); 
} // Check for fidelity if (eval_arrays_equal(d_out_org, d_out_mod, n_values)) { printf("\nFidelity achieved.\n"); printf("\tTotal number of passes: %d\n", n_passes); float org_time = (total_time_org / n_passes); float mod_time = (total_time_mod / n_passes); printf("\t[ORIGINAL] Time: %8.9f (us.)\n", org_time); printf("\t[MODIFIED] Time: %8.9f (us.)\n", mod_time); printf("\tSpeedup Factor: %8.9f\n", (org_time / mod_time)); } else { printf("\nAtomic and Optimized kernels do not match.\n"); return EXIT_FAILURE; } GPU_ERROR_CHECK(cudaPeekAtLastError()); GPU_ERROR_CHECK(cudaDeviceSynchronize()); GPU_ERROR_CHECK(cudaDeviceReset()); return EXIT_SUCCESS; } Thanks to anyone who can point out any error or point a direction that might fix this issue. The original kernel adds a number to multiple columns. The modified kernel sums up multiple numbers and adds it to a single column. How can this ever be equivalent? Did you investigate all suggestions already given in the original thread here ? Hi @striker159 Thank you for the reply. Yes, I have been looking into the suggestions. I don’t think shared memory is useful in my case as I was able to get it working but during execution the performance was not actually much better, maybe 3%. So, I wanted to look in another direction hopefully without shared memory - just lowering the number of necessary atomicAdds if possible. I think I just solved the problem, by using warp level primitives and only calling atomicAdd once per block. The performance is somewhere between 1.5x to 2.8x faster with the mod_kernel as compared to the org_kernel. I have posted the code below for anyone who may like to use it for their own project(s). Thanks to all for assistance pointing me in some good direction(s) for optimizing this CUDA kernel. #include <cuda.h> #include <cuda_runtime.h> #include <iostream> #include <iomanip> #include <stdexcept> #include <chrono> #include <cstdlib> // for easy gpu error checking #define GPU_ERROR_CHECK(ans) do{gpuAssert((ans),__FILE__,__LINE__);}while(0) inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true) { if (code != cudaSuccess) { fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line); printf("\nCUDA KERNEL ERROR: CUDA Kernel reports error: %s\n",cudaGetErrorString(code)); if (abort) exit(code); } } /** * @brief CUDA DEVICE kernels executes scalar dot product. * * @param a The first float3. * @param b The second float3. * @return floating-point value that is the scalar dot product. */ __forceinline__ __host__ __device__ float dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; } /** * @brief CUDA DEVICE kernel executes Euclidean length of input float3. * * @param v The float3 whose x, y, z components length is being computed from. * @return floating-point value that is the Euclidean length of input float3. */ __forceinline__ __host__ __device__ float length(float3 v) { return sqrtf(dot(v, v)); } /** * @brief CUDA DEVICE kernel is a toy operation for demonstration purposes, whereby the * input value is modified and returned as a float2 datatype. * * @param val The input value being modified. * @return float2 version of input float with modifications applied. */ __forceinline__ __host__ __device__ float2 myKernel(float val) { return make_float2(val, val / 2); } /** * @brief CUDA DEVICE kernel that executes a warp-level summation of input float2 value. 
* @details Allows data to be summed without the use of extra memory space, that is, shared * directly across threads in a single warp (32-threads). * * @param val The float2 value being summed across warp. * @return float2 value summed across threads in a single warp. */ __inline__ __device__ float2 warpReduceSum(float2 val) { for (int offset = warpSize / 2; offset > 0; offset /= 2) { val.x += __shfl_down_sync(0xffffffff, val.x, offset); val.y += __shfl_down_sync(0xffffffff, val.y, offset); } return val; } /** * @brief CUDA DEVICE kernel that calls CUDA intrinsic @ref atomicAdd only on the * first thread and warp in the block. * * @param address The resulting global address where the value is added and stored. * @param val The value being added to the global address. */ __inline__ __device__ void atomicAddWarp(float2 *address, float2 val) { if (threadIdx.x % warpSize == 0) { atomicAdd(&address->x, val.x); atomicAdd(&address->y, val.y); } } __global__ void org_kernel(const float3 * pos_a, const float3 * pos_b, const uint32_t input_size, const uint32_t output_size, float2 * result, const int32_t region, const uint32_t first_idx_x, const uint32_t last_idx_x) { // Compute indices uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x; uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y; uint32_t stride_x = blockDim.x * gridDim.x; uint32_t stride_y = blockDim.y * gridDim.y; float3 distance3 = make_float3(0.0f, 0.0f, 0.0f); float distance = 0; uint32_t output_start, output_end; for(uint32_t x = thridx_x; x < last_idx_x; x += stride_x){ // distance calcs distance3.x = pos_a[x].x - pos_b[x].x; distance3.y = pos_a[x].y - pos_b[x].y; distance3.z = pos_a[x].z - pos_b[x].z; distance = length(distance3); output_start = __fdiv_rz(output_size, 2) + __fdiv_rz(distance, output_size) - region; output_end = output_start + region; for(uint32_t y = thridx_y; y < output_size; y += stride_y){ if((y < output_end) && (y >= output_start)) { float2 lval = myKernel(1.0f); atomicAdd(&result[y].x, lval.x); atomicAdd(&result[y].y, lval.y); } } } } __global__ void mod_kernel(const float3 * __restrict__ pos_a, const float3 * __restrict__ pos_b, const uint32_t input_size, const uint32_t output_size, float2 * __restrict__ result, const int32_t region, const uint32_t first_idx_x, const uint32_t last_idx_x) { // Compute indices uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x; uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y; uint32_t stride_x = blockDim.x * gridDim.x; uint32_t stride_y = blockDim.y * gridDim.y; if (thridx_x >= last_idx_x) return; float3 distance3; float distance; uint32_t output_start, output_end; for (uint32_t x = thridx_x; x < last_idx_x; x += stride_x) { // Pre-calculate distance components distance3.x = pos_a[x].x - pos_b[x].x; distance3.y = pos_a[x].y - pos_b[x].y; distance3.z = pos_a[x].z - pos_b[x].z; // Compute the distance and the output indices range distance = sqrtf(distance3.x * distance3.x + distance3.y * distance3.y + distance3.z * distance3.z); output_start = __fdividef(output_size, 2) + __fdividef(distance, output_size) - region; output_end = output_start + region; // Restrict output range to valid indices output_start = max(output_start, 0U); output_end = min(output_end, output_size); for (uint32_t y = thridx_y; y < output_size; y += stride_y) { if (y >= output_start && y < output_end) { float2 lval = myKernel(1.0f); // Execute warp-level primitives then only call atomic add once per block float2 warp_sum = warpReduceSum(lval); 
atomicAddWarp(&result[y], warp_sum); } } } } bool eval_arrays(const float2 * d_arr1, const float2 * d_arr2, const uint32_t n) { if (d_arr1 == nullptr || d_arr2 == nullptr) { throw std::invalid_argument("Null array(s)."); } if (n < 1) { throw std::invalid_argument("Invalid array length."); } float2 * h_arr1 = nullptr; float2 * h_arr2 = nullptr; h_arr1 = new float2[n]; h_arr2 = new float2[n]; GPU_ERROR_CHECK(cudaMemcpy(h_arr1, d_arr1, n * sizeof(float2), cudaMemcpyDeviceToHost)); GPU_ERROR_CHECK(cudaMemcpy(h_arr2, d_arr2, n * sizeof(float2), cudaMemcpyDeviceToHost)); for (uint32_t i = 0; i < n; ++i) { if (h_arr1[i].x != h_arr2[i].x || h_arr1[i].y != h_arr2[i].y) { std::cout << "Index: " << i << " Array 1 (" << h_arr1[i].x << "," << h_arr1[i].y << "), Array 2 (" << h_arr2[i].x << "," << h_arr2[i].y << ")\n"; delete [] h_arr1; delete [] h_arr2; return false; } } delete [] h_arr1; delete [] h_arr2; // Every element in both arrays was the same return true; } int main(int argc, char * argv[]) { if (argc != 2) { std::cerr << "\nPass the number of array elements via command line as follows:\n"; std::cerr << "./xOptimize <num_elems>\n\n"; return EXIT_FAILURE; } // Get number of array elements from command line int n_values = std::stoi(argv[1]); if (n_values < 1) { std::cerr << "Invalid number of array elements: " << n_values << std::endl; return EXIT_FAILURE; } // Defined sizes const uint32_t BLOCK_WIDTH = 512; size_t float3_sz = n_values * sizeof(float3); size_t output_sz = n_values * sizeof(float2); // HOST-side positions float3 *pos_a = nullptr; float3 *pos_b = nullptr; pos_a = new float3[n_values]; pos_b = new float3[n_values]; for (int i = 0; i < n_values; ++i) { pos_a[i] = make_float3(i, i + 1, i + 2); pos_b[i] = make_float3(i + 0.5f, i + 1.5f, i + 2.5f); } // DEVICE-side positions float3 *d_pos_a = nullptr; float3 *d_pos_b = nullptr; GPU_ERROR_CHECK(cudaMalloc(&d_pos_a, float3_sz)); GPU_ERROR_CHECK(cudaMalloc(&d_pos_b, float3_sz)); GPU_ERROR_CHECK(cudaMemcpy(d_pos_a, pos_a, float3_sz, cudaMemcpyHostToDevice)); GPU_ERROR_CHECK(cudaMemcpy(d_pos_b, pos_b, float3_sz, cudaMemcpyHostToDevice)); // DEVICE-side outputs float2 *d_out_org = nullptr; float2 *d_out_mod = nullptr; GPU_ERROR_CHECK(cudaMalloc(&d_out_org, output_sz)); GPU_ERROR_CHECK(cudaMalloc(&d_out_mod, output_sz)); GPU_ERROR_CHECK(cudaMemset(d_out_org, 0, output_sz)); GPU_ERROR_CHECK(cudaMemset(d_out_mod, 0, output_sz)); float total_time_org = 0.0f; float total_time_mod = 0.0f; uint32_t first_idx_x = 0; uint32_t last_idx_x = n_values; int region = 3; dim3 nthreads(BLOCK_WIDTH, 1, 1); dim3 nblocks(1, BLOCK_WIDTH, 1); const uint32_t n_passes = 16; for (uint32_t pass = 0; pass < n_passes; ++pass) { auto start = std::chrono::high_resolution_clock::now(); // Original atomic kernel org_kernel<<<nblocks, nthreads>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_org, region, first_idx_x, last_idx_x); GPU_ERROR_CHECK(cudaDeviceSynchronize()); auto stop = std::chrono::high_resolution_clock::now(); total_time_org += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()); start = std::chrono::high_resolution_clock::now(); // Modified atomic kernel mod_kernel<<<nblocks, nthreads>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_mod, region, first_idx_x, last_idx_x); GPU_ERROR_CHECK(cudaDeviceSynchronize()); stop = std::chrono::high_resolution_clock::now(); total_time_mod += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()); } std::cout << std::fixed << 
std::setprecision(4);

    total_time_org /= n_passes;
    total_time_mod /= n_passes;

    std::cout << "\nTotal number of passes: " << n_passes << std::endl;
    std::cout << "Original CUDA Kernel Time: " << total_time_org << " (ns.)\n";
    std::cout << "Modified CUDA Kernel Time: " << total_time_mod << " (ns.)\n";
    std::cout << "Speedup factor: " << (total_time_org / total_time_mod) << std::endl;

    // Check fidelity
    if (eval_arrays(d_out_org, d_out_mod, n_values)) {
        std::cout << "\nFidelity achieved.\n\n";
    } else {
        std::cout << "\nFidelity not achieved.\n\n";
    }

    return EXIT_SUCCESS;
}

When trying to improve a kernel, your main focus should be correctness, not speed. If you don't want correct results, you could remove the kernel, which would give you the greatest speedup. As already pointed out by others in the first thread, your modified kernels are not equivalent to the original kernel. This is your original code:

for(uint32_t y = thridx_y; y < output_size; y += stride_y){
    if((y < output_end) && (y >= output_start)) {
        float2 lval = myKernel(1.0f);
        atomicAdd(&result[y].x, lval.x);
    }
}

Let's plug in some numbers for simplicity.

for(uint32_t y = 0; y < 5; y += 1){
    if((y < 5) && (y >= 0)) {
        float2 lval = myKernel(1.0f);
        atomicAdd(&result[y].x, lval.x);
    }
}

This means if result is initialized with [0,0,0,0,0], it will be [1,1,1,1,1] after the loop. But if you accumulate the values for multiple y in any way, and then write to the output, you will get different results.

local_accum = 0
for(uint32_t y = 0; y < 5; y += 1){
    if((y < 5) && (y >= 0)) {
        float2 lval = myKernel(1.0f);
        local_accum.x += lval.x;
    }
}
atomicAdd(&result[0].x, local_accum.x);

This will output [5,0,0,0,0], not [1,1,1,1,1].

The simplest solution to reduce the number of atomics in the original kernel is to only perform atomicAdd(&result[y].x, 1). Then afterwards use a second kernel which computes result.y = result.x / 2.

Hi @striker159,

Thank you for the reply. Yes, my first goal has always been to ensure the correctness of the modified code against the original code and then to speed it up, if possible. This is why it was such a problem: speeding it up is not too difficult, but speeding it up while maintaining consistency with the original results, that's the issue.

I understand that the following code would give incorrect results when comparing with the original:

local_accum = 0
for(uint32_t y = 0; y < 5; y += 1){
    if((y < 5) && (y >= 0)) {
        float2 lval = myKernel(1.0f);
        local_accum.x += lval.x;
    }
}
atomicAdd(&result[0].x, local_accum.x);

That is why I have since abandoned that idea. However, is there any reason that the following snippet of code from above wouldn't work?

...
__inline__ __device__ float2 warpReduceSum(float2 val) { for (int offset = warpSize / 2; offset > 0; offset /= 2) { val.x += __shfl_down_sync(0xffffffff, val.x, offset); val.y += __shfl_down_sync(0xffffffff, val.y, offset); } return val; } __inline__ __device__ void atomicAddWarp(float2 *address, float2 val) { if (threadIdx.x % warpSize == 0) { atomicAdd(&address->x, val.x); atomicAdd(&address->y, val.y); } } __global__ void mod_kernel(const float3 * __restrict__ pos_a, const float3 * __restrict__ pos_b, const uint32_t input_size, const uint32_t output_size, float2 * __restrict__ result, const int32_t region, const uint32_t first_idx_x, const uint32_t last_idx_x) { // Compute indices uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x; uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y; uint32_t stride_x = blockDim.x * gridDim.x; uint32_t stride_y = blockDim.y * gridDim.y; if (thridx_x >= last_idx_x) return; float3 distance3; float distance; uint32_t output_start, output_end; for (uint32_t x = thridx_x; x < last_idx_x; x += stride_x) { // Pre-calculate distance components distance3.x = pos_a[x].x - pos_b[x].x; distance3.y = pos_a[x].y - pos_b[x].y; distance3.z = pos_a[x].z - pos_b[x].z; // Compute the distance and the output indices range distance = sqrtf(distance3.x * distance3.x + distance3.y * distance3.y + distance3.z * distance3.z); output_start = __fdividef(output_size, 2) + __fdividef(distance, output_size) - region; output_end = output_start + region; // Restrict output range to valid indices output_start = max(output_start, 0U); output_end = min(output_end, output_size); for (uint32_t y = thridx_y; y < output_size; y += stride_y) { if (y >= output_start && y < output_end) { float2 lval = myKernel(1.0f); // Execute warp-level primitives then only call atomic add once per block float2 warp_sum = warpReduceSum(lval); atomicAddWarp(&result[y], warp_sum); } } } } ... Thanks again for the help. First of all, there is no guarantee that all threads in the warp reach the reduction code. That aside, the reduction approach has the same problem. Values for multiple y are combined. For example, in thread0 y=0, in thread1 y=1. Then the warpsum will be 2, and both result[0] and result[1] will be set to 2. But the correct result would be result[0] = 1 and result[1] = 1. Okay, but wouldn’t calling one kernel to operate on the x component and then another kernel to operate on the y component end up costing more time than just calling the original kernel once? Unless the input is very large to maybe mitigate. I guess I am trying to understand if maybe the original kernel is as fast as it can be given the parameters of the problem itself. After the first x calculations, store the intermediate results in shared memory, use syncthreads() to synchronize block-wise, then use one or some of the warps for y processing within the block. Hi @Curefab, Thank you for the reply. Using shared memory is a good idea, and I have tried various versions of this methodology. While the results are valid in that they match the original values, the performance is really no better or if it is it is negligible. This is why I am beginning to think that maybe the original kernel is as fast as it can get. This is why I am beginning to think that maybe the original kernel is as fast as it can get. That is quite likely. You can make some further tests with changed parameters or theoretical calculations of the maximum speed (e.g. how many bytes you read/write vs. 
the bandwidth of device memory) combined with Nsight Compute to support this assumption.
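For reference, here is a minimal sketch of the two-kernel idea suggested earlier in the thread (count into the x component with a single atomicAdd per hit, then derive the y component in a second pass). The kernel names are hypothetical, and it assumes result is zero-initialized and that, as in the posted code, myKernel(1.0f) always yields (1, 0.5):

// Kernel A (hypothetical name): same traversal as org_kernel, but only one
// atomicAdd per hit, counting into result[y].x.
__global__ void count_kernel(const float3 *pos_a, const float3 *pos_b,
                             const uint32_t output_size, float2 *result,
                             const int32_t region, const uint32_t first_idx_x,
                             const uint32_t last_idx_x)
{
    uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x;
    uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y;
    uint32_t stride_x = blockDim.x * gridDim.x;
    uint32_t stride_y = blockDim.y * gridDim.y;

    for (uint32_t x = thridx_x; x < last_idx_x; x += stride_x) {
        float3 d = make_float3(pos_a[x].x - pos_b[x].x,
                               pos_a[x].y - pos_b[x].y,
                               pos_a[x].z - pos_b[x].z);
        float distance = length(d);
        uint32_t output_start = __fdiv_rz((float)output_size, 2.0f) +
                                __fdiv_rz(distance, (float)output_size) - region;
        uint32_t output_end = output_start + region;

        for (uint32_t y = thridx_y; y < output_size; y += stride_y) {
            if ((y < output_end) && (y >= output_start)) {
                atomicAdd(&result[y].x, 1.0f);   // half the atomics of org_kernel
            }
        }
    }
}

// Kernel B (hypothetical name): derive the y component from the x component afterwards.
__global__ void finalize_kernel(float2 *result, const uint32_t output_size)
{
    uint32_t y = threadIdx.x + blockDim.x * blockIdx.x;
    if (y < output_size) result[y].y = result[y].x * 0.5f;
}

Whether the saved atomics outweigh the cost of the extra kernel launch is exactly the kind of question Nsight Compute and a simple bandwidth estimate can answer.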
Optimizing CUDA.jl performance for small array operations Hi, I’m trying to get better GPU performance with CUDA.jl for small array operations. So, I’ve started to port SymbolicRegression.jl to the GPU using CUDA.jl. It seems I’ve gotten the main evaluation part of the code to use the corresponding CUDA operations (which was REALLY straightforward by the way, great job!) , but it’s slower than I would like. Part of the problem is that during symbolic regression, you typically work on small amounts of data; maybe a matrix of size 5x1000. Without some clever fusion of tree evaluations, this means one needs to worry about the time it takes to launch kernels which makes things tricky. As a MWE, consider the following code: using CUDA, BenchmarkTools, Statistics for N in [1000, 10000] c1 = CUDA.ones(Float32, N) c2 = ones(Float32, N) res1 = @benchmark CUDA.@sync cos.($c1); res2 = @benchmark cos.($c2); println("Size $N: CUDA=$(median(res1.times)); CPU=$(median(res2.times))") end On my v100, this gives me (in microseconds): Size 1000: CUDA=26021.0; CPU=9086.0 Size 10000: CUDA=24419.5; CPU=87287.0 The GPU scales so well the array size is negligible. But the baseline time it takes to launch a kernel means I can’t exploit the full power of the GPU for evaluating trees on these small arrays. Is there something I can do to improve the kernel launch speed here? I also tried: res1 = @benchmark CUDA.@sync blocking=false cos.($c1); which was mentioned in the CUDA.jl docs to be better for profiling short executions. This lowers the the evaluation time to ~11000, but unfortunately this is still not enough. Thanks for any advice! Cheers, Miles This is the order I’d probably try things in: I’m not at all familiar with the algorithms, but could the fusion be easier than it appears? For one, you can always add “batch” dimensions easily like: c1a = CUDA.ones(Float32, N) c1b = CUDA.ones(Float32, N) c1 = cat(c1a, c1b, dims=2) res1a, res2a = eachslice(cos.(c1), dims=2) Also, you can “lazily” create broadcasted objects then evaluate them with a single “fused” kernel call like: using Base: broadcasted, materialize bc1 = broadcasted(cos, c1) bc2 = broadcasted(sin, c2) bc = broadcasted(+, bc1, bc2) res = materialize(bc) # one kernel call, w/o temp arrays You could use multiple threads. I think you may need to be on CUDA#master (might need to do some reading in Automatic task-based concurrency using local streams by maleadt · Pull Request #662 · JuliaGPU/CUDA.jl · GitHub and links therein too) but the latest versions each thread uses its own stream, which can then execute concurrently. In general it’d be good to do some profiling to make sure the GPU is doing what you think it is. You could combine more of your code into single kernels by writing custom kernels (still pure Julia). KernelAbstractions.jl offers a pretty nice interface to it, although even CUDA.@cuda isn’t too bad. Thanks @marius311. Replies to each point numbered below. I should mention first that the C++ CUDA kernel launch cost is only ~10 microseconds, so I definitely think the seeming ~25,000 microseconds launch cost here can be cut down. i.e., this seem to be a software problem rather than hardware or algorithm. But these other tricks to combine kernels might be enough to account for it. 
Best,
Miles

> I should mention first that the C++ CUDA kernel launch cost is only ~10 microseconds, so I definitely think the seeming ~25,000 microseconds launch cost here

Quick note but I think those benchmark times are in nanoseconds, so it's not 25,000 microseconds, it's 25 microseconds, on the order of the kernel launch.

Anyway, thanks for the info, that helps. Are the inputs to the ~1000 equations the same at least? As in, does this reduce to applying an array of functions to some input? If so, search these forums for “array of functions” and you'll get several links discussing this, some even mentioning GPU (this is probably a good one to follow links from, also, this may be the identical question).

This may be naive since I haven't really read those threads, but here's my solution messing around with it briefly:

using CUDA

fs = (sin, x->x^2, x->tan(x^2))
apply_funcs(fs, x) = ntuple(i->fs[i](x), length(fs))
apply_funcs.(Ref(fs), cu(rand(1000)))

I haven't done a super proper profile, but I'm pretty sure that final call results in a single kernel launch. Certainly, benchmarking it on a V100, it's faster than 3 consecutive ones:

julia> x = cu(rand(Float32, 1000));

julia> @btime CUDA.@sync apply_funcs.($(Ref(fs)), $x);
  22.183 μs (47 allocations: 1.55 KiB)

julia> @btime CUDA.@sync (fs[1].($x); fs[2].($x); fs[3].($x));
  39.731 μs (78 allocations: 1.84 KiB)

Something else which is cool is that if you look at e.g. @code_llvm apply_funcs(fs, 2.) you'll see that x^2 is only calculated once; I believe in this case the compiler is smart enough to eliminate the common sub-expression. I'd guess the CUDA compiler would do the same thing, although I'm not familiar enough with how to check exactly.

A limitation of this is that it stops working once your tuple of functions is longer than length 10 because of this. But maybe batching 10 functions together like this is enough to saturate the GPU, and if not, you can always write your own ntuple-like function and grow that if-statement a bit larger.

Thanks! This is very helpful.

Okay, good to know re: units. I'm not sure why I thought it was in microseconds… but yeah 25 us seems much more reasonable.

> Are the inputs to the ~1000 equations the same at least?

Only at the leaves of each equation. Two equations might both use the variable x1, for instance. But equations also have constants, and these will always be slightly different due to mutations. In the branch nodes, the inputs end up being completely different unless that particular subtree is identical, which is rare. I tried using memoization for small subtrees at one point, but this didn't help… there's just so many different subtrees possible in a typical search.

Re: apply_funcs, good idea. Will try this.

To be honest, thinking more about this, I wonder how doable it would be to completely batch evaluation of many equations… Actually, maybe this makes things easier for your point 1! Right now my recursive evaluation essentially looks like this:

function evalTree(X, tree)
    if tree.degree == 0
        if tree.constant
            return fill(tree.value, size(X, 2))
        end
        return X[tree.feature, :]
    elseif tree.degree == 1
        x = evalTree(X, tree.left)
        op = unary_operators[tree.op]
        return op.(x)
    else
        x = evalTree(X, tree.left)
        y = evalTree(X, tree.right)
        op = binary_operators[tree.op]
        return op.(x, y)
    end
end

(though I do some operator fusing for small subtrees, and also run things behind a function barrier, per your helpful advice in the other thread ).
I guess I am wondering if it would be efficient to: (1) pass a list of trees to evaluate, and (2) walk the binary tree for ALL trees up to the maximum depth, using an identity operator for i when tree[i]==nothing at the current depth, and converting all unary operators to binary via (x, y) -> op(x). What do you think? Although, maybe the fact that the tuple of operators changes every single time means that this would be re-compiling the kernel each launch… I guess one could instead pass an array of indices for the functions to call, and manage all of that inside the kernel assuming fixed operators?
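For what it's worth, the "array of operator indices" idea in the last post maps naturally onto a plain CUDA C++ kernel (sketched here rather than in Julia; all names and the operator set are hypothetical). The operator set is fixed at compile time and each launch only reads a different index array, so nothing needs to be recompiled per launch:

// Hypothetical sketch: evaluate one level of many flattened trees at once.
// op_codes[node] selects which fixed operator to apply; op 0 is the identity,
// used for missing nodes so every tree can be walked to the same maximum depth.
__global__ void eval_level(const float *left, const float *right,
                           const int *op_codes, float *out,
                           int n_nodes, int n_samples)
{
    int node = blockIdx.y;                                // one tree node per grid row
    int col  = blockIdx.x * blockDim.x + threadIdx.x;     // one data sample per thread
    if (node >= n_nodes || col >= n_samples) return;

    int idx = node * n_samples + col;
    float x = left[idx];
    float y = right[idx];
    float r;
    switch (op_codes[node]) {      // fixed operator set; only the indices change per launch
        case 1:  r = x + y;    break;
        case 2:  r = x * y;    break;
        case 3:  r = cosf(x);  break;  // unary operators ignore y
        default: r = x;        break;  // identity for missing nodes
    }
    out[idx] = r;
}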
How to speed up that simple CUDA kernel?

Hi! I have a kernel:

__global__ void filter_small(unsigned arr_size, float *arr, ResultAndPos *results,
                             float threshold, int *n_results) {
    const int itemsPerThread = 32;
    int begin = blockIdx.x * blockDim.x * itemsPerThread + threadIdx.x * itemsPerThread;
    int end = begin + itemsPerThread;
    if (end > arr_size) end = arr_size;
    for (int index = begin; index < end; index++) {
        if (arr[index] < threshold) {
            int oldIdx = atomicAdd(n_results, 1);
            results[oldIdx] = ResultAndPos{arr[index], index};
        }
    }
}

So basically it's a very simple filter that leaves only the elements smaller than the threshold. But the array itself is very large, ~5-15 million floats. And I launch it as follows:

int blockSize = 128;
int itemsPerThread = 32;
int itemsPerBlock = itemsPerThread * blockSize;
int numBlocks = (N + itemsPerBlock - 1) / itemsPerBlock;
filter_small<<<numBlocks, blockSize>>>(N, arr_ptr, results_ptr, threshold, d_n_results);

For ten million elements it executes in around 12 ms, which is quite slow. How can I speed it up? Thank you!

OS: linux ubuntu 18.04, cuda 11.1, nvidia 3060
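The thread is cut off above, but for context, one standard way to cut the atomic traffic in a filter like this is warp-aggregated atomics via cooperative groups. A sketch, assuming ResultAndPos is a plain { float value; int pos; } struct as implied by the post:

#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void filter_small_warp_agg(unsigned arr_size, const float *arr,
                                      ResultAndPos *results, float threshold,
                                      int *n_results)
{
    // Grid-stride loop: consecutive threads read consecutive elements (coalesced),
    // instead of each thread owning a private 32-element chunk.
    for (unsigned i = blockIdx.x * blockDim.x + threadIdx.x; i < arr_size;
         i += blockDim.x * gridDim.x) {
        if (arr[i] < threshold) {
            // The threads of this warp that took the branch reserve their output
            // slots with a single atomicAdd performed by the group leader.
            cg::coalesced_group g = cg::coalesced_threads();
            int base;
            if (g.thread_rank() == 0) base = atomicAdd(n_results, (int)g.size());
            base = g.shfl(base, 0);
            results[base + g.thread_rank()] = ResultAndPos{arr[i], (int)i};
        }
    }
}

The output order differs from the posted kernel's, but the original order was already nondeterministic because of the racing atomicAdd, and the coalesced reads usually matter as much as the reduced atomic count.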
This makes the project proportionally harder in my opinion because you need to be that much more efficient with moving data through the memory hierarchy. With tensor cores, to get anywhere close to cuBLAS, you need to start with something like the most efficient kernel in simon's article, and then do stuff like shared memory swizzling, async global memory copies, double buffering, and writing a really efficient kernel epilogue to accumulate the C matrix into the product.

I came across this article a while ago and it inspired me to take a stab at this^, and as of now I have gotten to ~80% of the cuBLAS tensor core performance where the kernel is mostly compute bound, and I am close to giving up on the last ~20%, because I think I may need to write the inner loop in SASS to make sure the instruction mix between shared memory loads, mma instructions, and synchronizations is perfectly balanced so that none of the hardware pipelines get overloaded (see link below), and I have enough compassion for myself to not spend my free time doing stuff like that :). There are also certain things implemented in CUTLASS that seem important (look up serpentine traversal) but NVIDIA engineers won't talk about the hardware details required to understand why this helps. Article on this is forthcoming.
https://github.com/NervanaSystems/maxas/wiki/SGEMM

I'd be so happy if SASS were documented and ptxas were open source, sometimes I spend entire days going through whitepapers and various sources of online documentation to get more hardware details…

My guess is that people nowadays are gradually moving away from raw CUDA programming and moving towards things like Triton etc, and you won't be focusing on pure GEMM since you tend to do some fusion. The Triton tutorial claims their performance is on par with cuBLAS.
https://triton-lang.org/main/getting-started/tutorials/03-ma...

Your guess is wrong. Besides the fact that there's much more to life than matmul (for which triton is just ok), the other obvious fact is that triton has exactly 1 frontend (python) and there's much more to life than that frontend. I find that basically in every thread about low-level work there's someone making some weird comment about how triton or mojo or XYZ supplants CUDA or assembly or whatever. I can't understand how this comes about because absolutely no one working in these areas thinks XYZ is going to supplant anything. So it's invariably outsiders making these claims and I cannot fathom why any outsider would be motivated to make claims from the outside.

As an outsider CUDA is so intimidating so the promise of Triton etc is very appealing and I wanted to get sold.

i have PRs in Triton - i'm well familiar with the fact that triton is an MLIR project.
> C++ straight using MLIR
that's like saying llvm ir is usable through C++ ... or hell that's like saying NVPTX is usable through C++. it's not just not a frontend it's the exact opposite: it's emitting IR using IR builders.

Knowing that reaching broad devex parity is very expensive I think the real win is figuring out what specific problem you have and building community and robust software support around that.

It's the fact that AMD doesn't prioritize the reliability of its hardware and software stack. If I run llama.cpp on Vulkan I get a reasonable speedup, but if I raise the batch size to 512, the GPU is starting to make strange noises and shuts the PC down midway. Very cool.

98% of zero is still zero.

In fact cuBLAS and CUDA are kinda orthogonal in that you're either calling a pre-built cuBLAS kernel or writing your own CUDA kernel but not really combining the two. I'd say CUDA shines more because of stability, documentation, community support + examples, and ability to use modern C++ features in GPU code.

Targeting nvidia GPUs? Or in general? For whom?

Building a performant BLAS library is hard but certainly not impossible. The tricks discussed in this post are hardly anything new either. Now, making a BLAS competitive with Nvidia's on its own GPUs is bound to be tough. But not technically unfeasible (after all, you can drop down to PTX if needed).

On average over 20 runs: CuBLAS (./sgemm 0) has 50.9 TFLOPS. My kernel has 61.8 TFLOPS, so it's actually +21% speedup in this benchmark. How do I collect my paycheck?

On a 4090 gpu, average of 20 runs of SGEMM_CUDA:

size     tflops_cublas  tflops_my  diff
4096²    50.8-50.9      61.8       +21%
8192²    56.3-56.4      67.1       +19%
16384²   53.6           66.7       +24%

I guess the right thing to do now would be to hire a B2B salesman and figure out which company needs it.

I have seen how those high-performance libraries are made and I'm still in awe at the quality and quantity of the staffing involved. Those were the smartest and most knowledgeable engineers I met in my career.

Generalizing from a micro benchmark is typically hubris. Then there are also numerics: being fast is not enough if your implementation accumulates a lot of rounding errors doing so. Floating point arithmetic can and will mess up your results in unexpected ways. -funsafe famously is neither fun nor safe.

Maybe tooling will catch up and make it easier. Think tinygrad with beamsearch, triton or halide.
Maharshi's blog

Learning CUDA by optimizing softmax: A worklog

04 Jan, 2025

The softmax operation is crucial. It is used extensively as a layer within deep learning models like transformers, where it normalizes raw scores (logits) into a probability distribution. This property makes it particularly useful in classification tasks, where each output neuron represents the likelihood of a specific class. Optimizing softmax, especially in the context of GPU programming with CUDA, presents many opportunities for learning.

In this worklog, we will start by benchmarking PyTorch's softmax operation and then iteratively optimize it in CUDA. The NVIDIA GPU used for this worklog is a GTX 1050Ti (that's all I have got right now).

The full code is available on my GitHub: Optimizing softmax in CUDA

Let's start.

The math

Before getting into it all, let's take a moment to understand the math behind the softmax operation. Softmax, for an input vector X having N elements, produces an output vector O with N elements, where the ith element in the output vector is defined as:

O_i = exp(x_i) / sum_{j=1..N} exp(x_j)

Note that the softmax operation depends on the current element x_i and also on the sum of exponentials of all the elements of the input vector X. We will call this sum the "normalization factor" (or, norm) henceforth.

Usually, instead of a single vector we deal with a matrix of shape (M, N) consisting of M rows where each row is a vector of N elements. Softmax is then performed along the last dimension, i.e. independently for each row of this matrix. The output here will be another matrix of the same shape. Throughout this worklog, we will be working with a matrix of shape (1024, 32768) i.e. 33,554,432 floating point numbers in total.

Example of the softmax output on a vector containing 5 elements:

import torch
import torch.nn.functional as F

vector = torch.randn(5, dtype=torch.float32)
print("Input vector:", vector)

# softmax along the last dimension
output = F.softmax(vector, dim=-1)
print("Output vector:", output)

Input vector: tensor([-1.3701, 0.7485, 0.1610, -2.0154, 1.0918])
Output vector: tensor([0.0382, 0.3176, 0.1765, 0.0200, 0.4477])

There is a problem though: if the values of x_i are very large (or very small), then the exponentials might cause overflow or underflow considering the precision limits of floating point numbers on a modern computer. We cannot represent and work with very large or very small numbers. This means that for extreme values, the above version of softmax is NOT numerically stable.

But... there is a fix! We can modify the above equation in such a way that the overall operation becomes numerically stable while being correct: we subtract the maximum value x_max of the vector (a constant) from each x_i before computing the exponential. This subtraction operation "shifts" the numbers to a range that works nicely with floating point numbers. The numerically stable softmax equation becomes:

O_i = exp(x_i - x_max) / sum_{j=1..N} exp(x_j - x_max)

How this "shifted" equation results in the correct softmax output is left as an exercise to the reader :)

How fast is PyTorch?

We can get a baseline metric on how fast PyTorch is for computing the softmax operation, along the last dimension, on a randomly initialized matrix.
Following the above example, we can get a quick measure for the execution time of the softmax function:

import time
import torch
import torch.nn.functional as F

# Initialize the matrix on device
matrix = torch.randn(1024, 32768, device='cuda', dtype=torch.float32)

# Warm up
_ = torch.nn.functional.softmax(matrix, dim=-1)

# Ensure all CUDA operations are finished
torch.cuda.synchronize()

total_time = 0
n_iters = 5
for i in range(n_iters):
    # Measure time
    torch.cuda.synchronize()  # Ensure all CUDA operations are finished
    start = time.time()
    _ = torch.nn.functional.softmax(matrix, dim=-1)
    torch.cuda.synchronize()  # Synchronize again
    end = time.time()
    total_time += (end - start) * 1000

print(total_time)
print(f"Softmax computation time (average): {(total_time/n_iters):.3f} ms")

Softmax computation time (average): 7.226 ms

From our quick test, PyTorch takes around 7.2 milliseconds to process and compute softmax on the entire matrix. Now, let's see how far we can go with implementing softmax in CUDA.

Kernel 1 - Naive softmax

In this kernel, we will assume that each thread in a block processes and computes one entire row of the input matrix. If the number of threads in one block is N_THREADS, then we need a total of ceil(M / N_THREADS) blocks to process the entire matrix. The figure below shows this. Note that row = blockDim.x * blockIdx.x + threadIdx.x is the row which each thread within some block will process.

The actual computation is quite intuitive here. Softmax is calculated in three passes over the input array:

Pass 1 - Calculation of the maximum: The whole input row is first traversed from left (index = 0) to right (index = N - 1) to find the maximum value x_max.

Pass 2 - Calculation of the norm: The whole input row is traversed from left to right again, but this time the normalization factor is computed using the x_max value from the first pass, for each element.

Pass 3 - Softmax computation: The whole input row is traversed again from left to right and for each element the exponential of (x − x_max) is divided by the norm calculated in the second pass.

Below is the specific code snippet that does this:

int row = blockDim.x * blockIdx.x + threadIdx.x;

if (row < M) {
    // maximum of this row
    float x_max = -INFINITY;
    // norm factor of this row
    float norm = 0.0f;

    // output in 3 passes
    for (int col = 0; col < N; col++) {
        int i = row * N + col;
        x_max = max(x_max, input[i]);
    }
    for (int col = 0; col < N; col++) {
        int i = row * N + col;
        norm += expf(input[i] - x_max);
    }
    for (int col = 0; col < N; col++) {
        int i = row * N + col;
        output[i] = expf(input[i] - x_max) / norm;
    }
}

Running this kernel results in:

>> GPU allocation time: 10.727424 ms
>> Host to device transfer time: 26.176161 ms
>> Kernel execution time: 124.102112 ms
>> Device to host transfer time: 37.320896 ms

The naive kernel takes around 124.10 milliseconds to execute. This is 17.24 times slower compared to PyTorch's 7.2 milliseconds. Can we improve it? Of course we can.

Kernel 2 - Online softmax

Three passes to compute softmax is not at all optimal. Maybe there's a way to "fuse" the first pass (calculating the maximum) and the second pass (calculating the norm) together. To do this, we will exploit the multiplication property of exponentials, i.e. exp(a + b) = exp(a) * exp(b).

To calculate x_max and the norm in just one pass, at each step we need to multiply the "current norm" with a "correction term". For example, consider the following input vector, for which we need to compute the maximum and the norm:

V = [3, 2, 5, 1]
We will now iterate through this input vector to see which correction term we need and when we need it. Assume that the variables max_i and norm_i represent the maximum and the norm up to and including the ith element.

Starting at i = 0: max_0 = 3 and norm_0 = exp(3 - 3) = 1. Note that after the first iteration, the values for maximum and norm are the correct values (but only up to the first index).

Next at i = 1: the element 2 is not larger than max_0, so max_1 = 3 and norm_1 = norm_0 + exp(2 - 3). We add the "previous norm" value to the "current norm" value at each iteration.

Now at i = 2: the element 5 is larger than max_1, so the maximum changes to max_2 = 5 and the running norm has to be corrected before we add the new term: norm_2 = norm_1 * exp(3 - 5) + exp(5 - 5).

Finally at i = 3: the element 1 is not larger than max_2, so max_3 = 5 and norm_3 = norm_2 + exp(1 - 5).

After the final iteration, we remain with:

max = 5

and,

norm = exp(3 - 5) + exp(2 - 5) + exp(5 - 5) + exp(1 - 5)

which is exactly the normalization factor of the numerically stable softmax. We just calculated both the maximum and the norm factor in only one pass by using a correction term and by exploiting the property of multiplying exponentials! The correction term is:

exp(max_{i-1} - max_i)

Now, to write this algorithm as a CUDA kernel, we simply use the naive kernel and "fuse" the first two loops into one:

int row = blockDim.x * blockIdx.x + threadIdx.x;

if (row < M) {
    float x_max = -INFINITY;
    float norm = 0.0f;

    // pass 1
    for (int col = 0; col < N; col++) {
        int i = row * N + col;
        float curr = input[i];
        if (curr > x_max) {
            // correct the global norm here
            norm = norm * expf(x_max - curr);
            x_max = curr;
        }
        norm += expf(curr - x_max);
    }

    // pass 2
    for (int col = 0; col < N; col++) {
        int i = row * N + col;
        input[i] = expf(input[i] - x_max) / norm;
    }
}

Running this kernel results in:

>> GPU allocation time: 10.431488 ms
>> Host to device transfer time: 25.897375 ms
>> Kernel execution time: 88.149567 ms
>> Device to host transfer time: 33.533314 ms

Using this simple trick (also called online softmax) we see that this kernel is 1.39 times (around 28.12%) faster than the naive kernel. That's a clever improvement, but we can do more. We need to dive deeper into how we can use the threads within one block to parallelize the computations even more by collaborating with each other.

Kernel 3 - Shared memory and reductions

The more you learn about GPU programming with CUDA, the more you will realize that memory is structured into hierarchies: from fastest to slowest, roughly registers, then shared memory, then the L1/L2 caches, and finally global memory. The kernels above use only global GPU memory. Reading from and writing to global memory is expensive and time consuming, so we need to somehow reduce the access and storing time.

The idea here is to have each block (thread block) process one row of the input matrix, and the threads within each block will process only a chunk of the entire row. Have a look at the figure below to understand which elements each thread will load. Here tid = threadIdx.x loads elements spaced by blockDim.x so that threads with different tids load consecutive elements from the input row. This helps in achieving memory coalescing, where accessing consecutive addresses from global memory is faster than accessing random addresses.

There is a problem though: to calculate the values of the maximum and the norm, we need to have access to all the elements of the input row. How will we do that if different threads have access to only a chunk of the input row? This is where reductions come into play. Bear with me on this one.

Let's assume each thread has its own private set of variables called local_max and local_norm, and also suppose that there are N_THREADS threads in total. Now, the thread with tid = i will compute the local max and local norm using the elements i, i + blockDim.x, i + 2*blockDim.x and so on. After all the threads in a block complete processing their respective chunks, we will be left with N_THREADS values for local_max and local_norm.
To calculate the global maximum value, we need to "reduce" these N_THREADS local maximum values to 1 global maximum value. The figure below will help you understand this. However, to perform this "block-level" reduction we will need to store the local maximum value in the shared memory of the block. Each thread will store its local maximum as: smem[tid] = local_max; __syncthreads(); Note we also add a sync barrier to ensure that each thread correctly stores its local maximum into the corresponding address in the shared memory and waits for other threads before moving on to the reduction step. We will now use the shared memory to reduce the N_THREADS local maximum values to 1 value and then store it in the first address (smem[0]) in the shared memory. The reduction step looks like: for (int stride = blockDim.x / 2; stride > 0; stride /= 2) { if (tid < stride) { smem[tid] = max(smem[tid], smem[tid + stride]); } // sync before next iteration __syncthreads(); } float global_max = smem[0]; __syncthreads(); This code block performs reduction in O(log(N)) time complexity which is faster than reducing linearly i.e. O(N) complexity. Let's see an example of this reduction with 8 threads where the shared memory will contain 8 maximum values in the start: Initially: smem = [3, 7, 2, 8, 6, 4, 5, 1] First Iteration (stride = 4): Each thread with tid < 4 compares smem[tid] with smem[tid + stride] and updates smem[tid] with the maximum. Comparisons: tid = 0: smem[0] = max(smem[0], smem[4]) = max(3, 6) = 6 tid = 1: smem[1] = max(smem[1], smem[5]) = max(7, 4) = 7 tid = 2: smem[2] = max(smem[2], smem[6]) = max(2, 5) = 5 tid = 3: smem[3] = max(smem[3], smem[7]) = max(8, 1) = 8 Updated smem: smem = [6, 7, 5, 8, 6, 4, 5, 1] Second Iteration (stride = 2): Each thread with tid < 2 compares smem[tid] with smem[tid + stride] and updates smem[tid]. Comparisons: tid = 0: smem[0] = max(smem[0], smem[2]) = max(6, 5) = 6 tid = 1: smem[1] = max(smem[1], smem[3]) = max(7, 8) = 8 Updated smem: smem = [6, 8, 5, 8, 6, 4, 5, 1] Third Iteration (stride = 1): Each thread with tid < 1 compares smem[tid] with smem[tid + stride] and updates smem[tid]. Comparison: tid = 0: smem[0] = max(smem[0], smem[1]) = max(6, 8) = 8 Updated smem: smem = [8, 8, 5, 8, 6, 4, 5, 1] Final State: After the reduction, the maximum value is stored in smem[0], which is: global_max = smem[0] = 8 This shows how in only 3 iterations, we performed the reduction and got access to the global maximum value from the 8 threads. We do the same reduction for local_norm as well to find the global norm value. The only difference for local norm value is that, instead of performing the max operation we perform the + operation. 
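Concretely, the norm reduction can look like the following sketch (the max-reduction part of the kernel is shown in the next snippet). The rescaling line is an assumption about how the per-thread norms, which were accumulated against each thread's own local maximum, get reconciled with the block-wide maximum:

// Each thread's local_norm was accumulated relative to its own local_max,
// so rescale it to the block-wide maximum before summing (assumed step).
smem[tid] = local_norm * expf(local_max - global_max);
__syncthreads();

for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
    if (tid < stride) {
        smem[tid] += smem[tid + stride];   // '+' instead of 'max'
    }
    __syncthreads();
}
float global_norm = smem[0];
__syncthreads();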
Here's how the kernel looks like for reduction of the maximum value: __shared__ float smem[1024]; int row = blockIdx.x; int tid = threadIdx.x; // edge condition (we don't process further) if (row >= M) return; float* input_row = xd + row * N; float* output_row = resd + row * N; float local_max = -INFINITY; float local_norm = 0.0f; for (int i = tid; i < N; i += blockDim.x) { float x = input_row[i]; if (x > local_max) { local_norm *= expf(local_max - x); local_max = x; } local_norm += expf(x - local_max); } __syncthreads(); smem[tid] = local_max; __syncthreads(); for (int stride = blockDim.x / 2; stride > 0; stride /= 2) { if (tid < stride) { smem[tid] = max(smem[tid], smem[tid + stride]); } __syncthreads(); } float global_max = smem[0]; __syncthreads(); The output from this kernel looks like: >> GPU allocation time: 10.464928 ms >> Host to device transfer time: 22.674080 ms >> Kernel execution time: 6.612160 ms >> Device to host transfer time: 41.318016 ms Right away we see that this kernel which uses shared memory and reductions is already around 8.33% (1.09 times) faster than PyTorch's implementation. Can we improve this even more? Let's see. Kernel 4 - Shuffle instructions This kernel will be largely similar to the previous one with one difference. If you notice carefully, in the reduction operations for local maximum value and local norm value we are accessing the shared memory and syncing the threads in every iteration. Even though accessing shared memory is fast, what if we could eliminate the usage of shared memory and syncing barriers while reducing the values? Before explaining how, we need to understand the concept of warps within thread blocks: Warps are a fundamental unit of execution within a thread block. A warp is a group of 32 threads in a thread block that execute the same instruction simultaneously (SIMD: Single Instruction, Multiple Data). All threads in a warp execute instructions in lockstep, meaning all 32 threads execute the same instruction at the same time on different data. If a thread block contains N threads, the number of warps is ceil(N / 32). Also, when threads in a warp follow different execution paths (e.g., due to conditional statements), it leads to warp divergence, reducing performance as the threads execute sequentially instead of in parallel. In our case, if we have blockDim.x = 1024 then each block is composed of 32 warps (each warp consisting of 32 threads). To limit the usage of shared memory, CUDA provides us with shuffle instructions which are specialized intrinsics that allow threads within a warp to directly exchange data without the overhead of shared memory. These are warp-level primitives and are highly efficient because they use registers to exchange data which is faster than using shared memory (according to the hierarchy). Suppose in one block we have N_THREADS threads in total. That means, we have NW = ceil(N_THREADS / warp_size) warps where warp_size is usually 32 threads. Now, instead of doing a block-level reduction using shared memory what if we first perform a warp-level reduction: From N_THREADS values, doing a warp-level reduction for every warp available will leave us with NW values across the block that needs to be reduced further. So, the first available warp can load the values from the remaining warps, and then perform a warp-level reduction again to get the final value. Let's consider an example to ease your mind: Suppose there are 16 threads that have already calculated their respective local maximum values. 
Also, assume that warp_size = 4, which means there are 4 warps in total. The values are [3, 7, 2, 9, 4, 1, 8, 5, 10, 6, 12, 11, 13, 14, 15, 16].

Step 1: Warp-level reduction

The warp size is 4, so there are 4 warps in the block (16 threads / 4 threads per warp). Each warp performs its own reduction.

Warp 0 (Threads 0 to 3: Values [3, 7, 2, 9]):
Offset = 2: thread 0 takes max(3, 2) = 3 and thread 1 takes max(7, 9) = 9.
Offset = 1: thread 0 takes max(3, 9) = 9.
Result for Warp 0: 9 (stored in Thread 0 of the warp).

Warp 1 (Threads 4 to 7: Values [4, 1, 8, 5]):
Offset = 2: thread 4 takes max(4, 8) = 8 and thread 5 takes max(1, 5) = 5.
Offset = 1: thread 4 takes max(8, 5) = 8.
Result for Warp 1: 8 (stored in Thread 4 of the warp).

Warp 2 (Threads 8 to 11: Values [10, 6, 12, 11]):
Offset = 2: thread 8 takes max(10, 12) = 12 and thread 9 takes max(6, 11) = 11.
Offset = 1: thread 8 takes max(12, 11) = 12.
Result for Warp 2: 12 (stored in Thread 8 of the warp).

Warp 3 (Threads 12 to 15: Values [13, 14, 15, 16]):
Offset = 2: thread 12 takes max(13, 15) = 15 and thread 13 takes max(14, 16) = 16.
Offset = 1: thread 12 takes max(15, 16) = 16.
Result for Warp 3: 16 (stored in Thread 12 of the warp).

Step 2 - Block-level reduction

At this point, the maximum values from each warp are stored in the first thread of each warp: [9, 8, 12, 16]. The block-level reduction begins.

Store Warp Results in Shared Memory: the first thread of each warp writes its value to shared memory, so smem = [9, 8, 12, 16].
Synchronize Threads: a __syncthreads() barrier makes sure every warp's result is visible to the whole block.
Perform Final Reduction Using First Warp:

First Warp Reduction (smem = [9, 8, 12, 16]):
Offset = 2: thread 0 takes max(9, 12) = 12 and thread 1 takes max(8, 16) = 16.
Offset = 1: thread 0 takes max(12, 16) = 16.
Global Block Maximum: 16 (stored in smem[0]).

At this point, we have the global maximum value for the entire block using warp-level reductions. How do we actually perform these warp-level reductions, though? CUDA provides us with shuffle instructions for that.

We will use the __shfl_down_sync instruction to perform the reduction. Here's how it works: it is a CUDA warp-level primitive that shifts data values down within a warp. Threads in the warp exchange data based on a specified offset, and threads that would read from out-of-range lanes simply keep their own value. The syntax for __shfl_down_sync is:

T __shfl_down_sync(unsigned mask, T var, int delta, int width=warpSize);

Here: mask selects which threads of the warp participate (0xffffffff for the full warp), var is the value each thread contributes, delta is how many lanes below the current thread the value is fetched from, and width is the logical warp size (32 by default).

Consider the following piece of code:

int val = threadIdx.x;
int shifted_val = __shfl_down_sync(0xFFFFFFFF, val, 1);

For delta = 1: each thread receives the value held by the thread whose lane index is one higher, so shifted_val equals threadIdx.x + 1 for every lane except the last one of the warp.

The reduction code for this kernel looks like:

float val = local_max;
for (int offset = warp_size / 2; offset > 0; offset /= 2) {
    val = fmaxf(val, __shfl_down_sync(0xffffffff, val, offset));
}

if (blockDim.x > warp_size) {
    if (tid % warp_size == 0) {
        // which warp are we at?
        // store the value in its first thread index
        smem[tid / warp_size] = val;
    }
    __syncthreads();

    if (tid < warp_size) {
        val = (tid < CEIL_DIV(blockDim.x, warp_size)) ? smem[tid] : -INFINITY;
        for (int offset = warp_size / 2; offset > 0; offset /= 2) {
            val = fmaxf(val, __shfl_down_sync(0xffffffff, val, offset));
        }
        if (tid == 0) smem[0] = val;
    }
} else {
    if (tid == 0) smem[0] = val;
}
__syncthreads();

float global_max = smem[0];
__syncthreads();

and the kernel outputs:

>> GPU allocation time: 10.542080 ms
>> Host to device transfer time: 25.580065 ms
>> Kernel execution time: 5.174400 ms
>> Device to host transfer time: 45.923008 ms

This kernel is around 1.29 times (or, 22.73%) faster than the shared memory kernel! Using shuffle instructions eliminated the need for __syncthreads barriers in each iteration of the reduction as well.

Conclusion

In this worklog, we iteratively optimized the softmax operation, starting from PyTorch and then writing a custom CUDA kernel for the same. With the above improvements, our custom softmax CUDA kernel became around 1.41 times (or, 29.17%) faster than PyTorch on a GTX 1050Ti.

Thank you for reading!

#CUDA #softmax
CS4402-9635: Optimizing CUDA code
Marc Moreno Maza
University of Western Ontario, London, Ontario (Canada)

Plan
1. Optimizing Matrix Transpose with CUDA
2. Performance Optimization
3. Parallel Reduction
4. Parallel Scan
5. Exercises
6. Exercises

Optimizing Matrix Transpose with CUDA

Matrix Transpose Characteristics (1/2)
∎ We optimize a transposition code for a matrix of floats. This operates out-of-place:
  - input and output matrices address separate memory locations.
∎ For simplicity, we consider an n × n matrix where 32 divides n.
∎ We focus on the device code:
  - the host code performs typical tasks: data allocation and transfer between host and device, the launching and timing of several kernels, result validation, and the deallocation of host and device memory.
∎ Benchmarks illustrate this section:
  - we compare our matrix transpose kernels against a matrix copy kernel,
  - for each kernel, we compute the effective bandwidth, calculated in GB/s as twice the size of the matrix (once for reading the matrix and once for writing) divided by the time of execution,
  - each operation is run NUM_REFS times (for normalizing the measurements),
  - this looping is performed once over the kernel and once within the kernel,
  - the difference between these two timings is kernel launch and synchronization overheads.

Matrix Transpose Characteristics (2/2)
∎ We present hereafter different kernels called from the host code, each addressing different performance issues.
∎ All kernels in this study launch thread blocks of dimension 32x8, where each block transposes (or copies) a tile of dimension 32x32.
∎ As such, the parameters TILE_DIM and BLOCK_ROWS are set to 32 and 8, respectively.
∎ Using a thread block with fewer threads than elements in a tile is advantageous for the matrix transpose:
  - each thread transposes several matrix elements, four in our case, and much of the cost of calculating the indices is amortized over these elements.
∎ This study is based on a technical report by Greg Ruetsch (NVIDIA) and Paulius Micikevicius (NVIDIA).
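As a concrete illustration of the configuration just described, here is a small host-side sketch (not from the original slides; d_idata and d_odata are hypothetical device pointers, and the transposeNaive kernel appears a few slides below) showing the 32x8 launch for an n × n matrix and the effective-bandwidth formula:

const int TILE_DIM = 32, BLOCK_ROWS = 8;   // values stated on the slide above

// one 32x8 thread block per 32x32 tile (recall that 32 divides n)
dim3 grid(n / TILE_DIM, n / TILE_DIM);
dim3 threads(TILE_DIM, BLOCK_ROWS);        // each thread handles TILE_DIM/BLOCK_ROWS = 4 elements

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start);
transposeNaive<<<grid, threads>>>(d_odata, d_idata, n, n, 1);
cudaEventRecord(stop);
cudaEventSynchronize(stop);
float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);

// Effective bandwidth: the matrix is read once and written once.
double gbytes = 2.0 * (double)n * n * sizeof(float) / 1e9;
printf("effective bandwidth: %.1f GB/s\n", gbytes / (ms / 1e3));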
A simple copy kernel (1/2)

__global__ void copy(float *odata, float *idata, int width, int height, int nreps)
{
  int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
  int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
  int index  = xIndex + width*yIndex;
  for (int r = 0; r < nreps; r++) {            // normalization outer loop
    for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
      odata[index+i*width] = idata[index+i*width];
    }
  }
}

A simple copy kernel (2/2)

- odata and idata are pointers to the input and output matrices,
- width and height are the matrix x and y dimensions,
- nreps determines how many times the loop over data movement between matrices is performed.
- In this kernel, xIndex and yIndex are global 2D matrix indices, used to calculate index, the 1D index used to access matrix elements.

// removing normalization
__global__ void copy(float *odata, float *idata, int width, int height, int nreps)
{
  int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
  int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
  int index  = xIndex + width*yIndex;
  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS)
    odata[index+i*width] = idata[index+i*width];
}
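All of the kernels in this section are launched with the same configuration; a minimal host-side sketch (the wrapper name launch_copy and the device pointers d_odata/d_idata are ours) looks like this:

// Sketch: host-side launch of the copy kernel (error checking omitted).
// TILE_DIM = 32, BLOCK_ROWS = 8: each 32x8 thread block handles one 32x32
// tile, so every thread processes TILE_DIM / BLOCK_ROWS = 4 elements.
void launch_copy(float *d_odata, float *d_idata, int width, int height)
{
    dim3 grid(width / TILE_DIM, height / TILE_DIM);   // one block per tile
    dim3 threads(TILE_DIM, BLOCK_ROWS);               // 32 x 8 = 256 threads
    copy<<<grid, threads>>>(d_odata, d_idata, width, height, 1);
    cudaDeviceSynchronize();                          // wait before timing/validation
}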
A naive transpose kernel

__global__ void transposeNaive(float *odata, float *idata, int width, int height, int nreps)
{
  int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
  int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
  int index_in  = xIndex + width * yIndex;
  int index_out = yIndex + height * xIndex;
  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    odata[index_out+i] = idata[index_in+i*width];
  }
}

Naive transpose kernel vs copy kernel

The performance of these two kernels on a 2048x2048 matrix using a GTX280 is given in a table on the original slide (not reproduced here). The minor differences in code between the copy and naive transpose kernels have a profound effect on performance.
Coalesced Transpose (1/11)

- Because device memory has a much higher latency and lower bandwidth than on-chip memory, special attention must be paid to how global memory accesses are performed.
- The simultaneous global memory accesses by each thread of a half-warp (16 threads on G80) during the execution of a single read or write instruction will be coalesced into a single access if:
  1. The size of the memory element accessed by each thread is either 4, 8, or 16 bytes.
  2. The address of the first element is aligned to 16 times the element's size.
  3. The elements form a contiguous block of memory.
  4. The i-th element is accessed by the i-th thread in the half-warp.
- Coalescing happens even if some threads do not access memory (divergent warp).

Coalesced Transpose (2/11)-(4/11): figures.

Coalesced Transpose (5/11)

- Allocating device memory through cudaMalloc() and choosing TILE_DIM to be a multiple of 16 ensures alignment with a segment of memory, therefore all loads from idata are coalesced.
- Coalescing behavior differs between the simple copy and naive transpose kernels when writing to odata.
- In the case of the naive transpose, for each iteration of the i-loop a half warp writes one half of a column of floats to different segments of memory:
  - resulting in 16 separate memory transactions,
  - regardless of the compute capability.
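To see why the transpose's writes are uncoalesced while the copy's are not, it helps to restate the index arithmetic of the two kernels; the comment-only sketch below does just that (no new code, only the expressions already used above):

// Index arithmetic of the two kernels above, for one write instruction
// of a half warp (adjacent threads differ only in threadIdx.x):
//
//   copy:            odata[index + i*width],   index     = xIndex + width*yIndex
//                    -> adjacent threads write adjacent floats:
//                       a single coalesced transaction.
//
//   transposeNaive:  odata[index_out + i],     index_out = yIndex + height*xIndex
//                    -> adjacent threads write floats that are 'height' elements
//                       apart (2048 floats = 8192 bytes here), i.e. 16 separate
//                       transactions per half warp.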
Coalesced Transpose (6/11)

- The way to avoid uncoalesced global memory access is
  1. to read the data into shared memory and,
  2. have each half warp access noncontiguous locations in shared memory in order to write contiguous data to odata.
- There is no performance penalty for noncontiguous access patterns in shared memory as there is in global memory.
- A __syncthreads() call is required to ensure that all reads from idata to shared memory have completed before writes from shared memory to odata commence.

Coalesced Transpose (7/11)

__global__ void transposeCoalesced(float *odata, float *idata, int width, int height)  // no nreps param
{
  __shared__ float tile[TILE_DIM][TILE_DIM];

  int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
  int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
  int index_in = xIndex + (yIndex)*width;

  xIndex = blockIdx.y * TILE_DIM + threadIdx.x;
  yIndex = blockIdx.x * TILE_DIM + threadIdx.y;
  int index_out = xIndex + (yIndex)*height;

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    tile[threadIdx.y+i][threadIdx.x] = idata[index_in+i*width];
  }

  __syncthreads();

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    odata[index_out+i*height] = tile[threadIdx.x][threadIdx.y+i];
  }
}
Coalesced Transpose (8/11)

1. The half warp writes four half rows of the idata matrix tile to the shared memory 32x32 array tile, indicated by the yellow line segments.
2. After a __syncthreads() call to ensure all writes to tile are completed,
3. the half warp writes four half columns of tile to four half rows of an odata matrix tile, indicated by the green line segments.

Coalesced Transpose (9/11)

While there is a dramatic increase in effective bandwidth of the coalesced transpose over the naive transpose, there still remains a large performance gap between the coalesced transpose and the copy:
- One possible cause of this performance gap could be the synchronization barrier required in the coalesced transpose.
- This can be easily assessed using the following copy kernel, which utilizes shared memory and contains a __syncthreads() call.

Coalesced Transpose (10/11)

__global__ void copySharedMem(float *odata, float *idata, int width, int height)  // no nreps param
{
  __shared__ float tile[TILE_DIM][TILE_DIM];

  int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
  int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
  int index  = xIndex + width*yIndex;

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    tile[threadIdx.y+i][threadIdx.x] = idata[index+i*width];
  }

  __syncthreads();

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    odata[index+i*width] = tile[threadIdx.y+i][threadIdx.x];
  }
}

Coalesced Transpose (11/11)

The shared memory copy results seem to suggest that the use of shared memory with a synchronization barrier has little effect on performance, certainly as far as the "loop in kernel" column indicates when comparing the simple copy and the shared memory copy.
Shared memory bank conflicts (1/6)

1. Shared memory is divided into 16 equally-sized memory modules, called banks, which are organized such that successive 32-bit words are assigned to successive banks.
2. These banks can be accessed simultaneously, and to achieve maximum bandwidth to and from shared memory the threads in a half warp should access shared memory associated with different banks.
3. The exception to this rule is when all threads in a half warp read the same shared memory address, which results in a broadcast where the data at that address is sent to all threads of the half warp in one transaction.
4. One can use the warp_serialize flag when profiling CUDA applications to determine whether shared memory bank conflicts occur in any kernel.

Shared memory bank conflicts (2/6)-(3/6): figures.
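The bank that an element falls into can be computed directly from its word index; the small sketch below (function names are ours) shows why a column of a 32-wide float array sits entirely in one bank, and anticipates the one-column padding introduced on the next slide:

// Sketch (ours): bank index of element tile[row][col] on G80-class
// hardware: 16 banks, successive 32-bit words in successive banks.
// For __shared__ float tile[32][32], the element occupies word 32*row + col.
int bank_unpadded(int row, int col) { return (32 * row + col) % 16; }  // == col % 16
// With one column of padding, __shared__ float tile[32][33], the row
// stride becomes 33 words, so the elements of a column spread over all banks.
int bank_padded(int row, int col)   { return (33 * row + col) % 16; }  // == (row + col) % 16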
Shared memory bank conflicts (4/6)

1. The coalesced transpose uses a 32 × 32 shared memory array of floats.
2. For this sized array, all data in columns k and k+16 are mapped to the same bank.
3. As a result, when writing partial columns from tile in shared memory to rows in odata, the half warp experiences a 16-way bank conflict and serializes the request.
4. A simple way to avoid this conflict is to pad the shared memory array by one column:
   __shared__ float tile[TILE_DIM][TILE_DIM+1];

Shared memory bank conflicts (5/6)

- The padding does not affect the shared memory bank access pattern when writing a half warp to shared memory, which remains conflict free,
- but by adding a single column, the access of a half warp of data in a column is now also conflict free.
- The performance of the kernel, now coalesced and memory bank conflict free, is added to our table on the next slide.

Shared memory bank conflicts (6/6)

- While padding the shared memory array did eliminate shared memory bank conflicts, as was confirmed by checking the warp_serialize flag with the CUDA profiler, it has little effect (when implemented at this stage) on performance.
- As a result, there is still a large performance gap between the coalesced, bank-conflict-free transpose and the shared memory copy.
Decomposing Transpose (1/6)

- To investigate further, we revisit the data flow for the transpose and compare it to that of the copy.
- There are essentially two differences between the copy code and the transpose:
  - transposing the data within a tile, and
  - writing data to the transposed tile.
- We can isolate the performance between each of these two components by implementing two kernels that individually perform just one of these components:
  - fine-grained transpose: this kernel transposes the data within a tile, but writes the tile to the same location (as the copy does).
  - coarse-grained transpose: this kernel writes the tile to the transposed location in the odata matrix, but does not transpose the data within the tile.

Decomposing Transpose (2/6)-(3/6): figures.

Decomposing Transpose (4/6)

__global__ void transposeFineGrained(float *odata, float *idata, int width, int height)
{
  __shared__ float block[TILE_DIM][TILE_DIM+1];

  int xIndex = blockIdx.x * TILE_DIM + threadIdx.x;
  int yIndex = blockIdx.y * TILE_DIM + threadIdx.y;
  int index  = xIndex + (yIndex)*width;

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    block[threadIdx.y+i][threadIdx.x] = idata[index+i*width];
  }

  __syncthreads();

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    odata[index+i*height] = block[threadIdx.x][threadIdx.y+i];
  }
}

Decomposing Transpose (5/6)

__global__ void transposeCoarseGrained(float *odata, float *idata, int width, int height)
{
  __shared__ float block[TILE_DIM][TILE_DIM+1];

  int xIndex = blockIdx.x * TILE_DIM + threadIdx.x;
  int yIndex = blockIdx.y * TILE_DIM + threadIdx.y;
  int index_in = xIndex + (yIndex)*width;

  xIndex = blockIdx.y * TILE_DIM + threadIdx.x;
  yIndex = blockIdx.x * TILE_DIM + threadIdx.y;
  int index_out = xIndex + (yIndex)*height;

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    block[threadIdx.y+i][threadIdx.x] = idata[index_in+i*width];
  }

  __syncthreads();

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    odata[index_out+i*height] = block[threadIdx.y+i][threadIdx.x];
  }
}
Decomposing Transpose (6/6)

- The fine-grained transpose has performance similar to the shared memory copy, whereas the coarse-grained transpose has roughly the performance of the coalesced transpose.
- Thus the performance bottleneck lies in writing data to the transposed location in global memory.

Partition Camping (1/4)

- Just as shared memory performance can be degraded via bank conflicts, an analogous performance degradation can occur with global memory access through partition camping.
- Global memory is divided into either 6 partitions (on 8- and 9-series GPUs) or 8 partitions (on 200- and 10-series GPUs) of 256-byte width.
- To use global memory effectively, concurrent accesses to global memory by all active warps should be divided evenly amongst partitions.
- Partition camping occurs when:
  - global memory accesses are directed through a subset of partitions,
  - causing requests to queue up at some partitions while other partitions go unused.
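By analogy with shared memory banks, the partition that a global memory address maps to can be modelled as follows (a simplified sketch of the mapping described above; the exact hardware mapping may differ):

// Sketch: which partition a global memory byte address maps to on a
// 200-series GPU (8 partitions, each 256 bytes wide).  The point is that
// addresses 8 * 256 = 2048 bytes apart land in the same partition.
int partition_of(size_t byte_address)
{
    return (int)((byte_address / 256) % 8);
}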
Partition Camping (2/4)

- Since partition camping concerns how active thread blocks behave, the issue of how thread blocks are scheduled on multiprocessors is important.
- When a kernel is launched, the order in which blocks are assigned to multiprocessors is determined by the one-dimensional block ID defined as
  bid = blockIdx.x + gridDim.x*blockIdx.y;
  which is a row-major ordering of the blocks in the grid.
- Once maximum occupancy is reached, additional blocks are assigned to multiprocessors as needed.
- How quickly and in which order blocks complete cannot be determined.
- So active blocks are initially contiguous but become less contiguous as execution of the kernel progresses.
Partition Camping (3/4)

- With 8 partitions of 256-byte width, all data in strides of 2048 bytes (or 512 floats) map to the same partition.
- Any float matrix with 512 × k columns, such as our 2048x2048 matrix, will contain columns whose elements map to a single partition.
- With tiles of 32 × 32 floats whose one-dimensional block IDs are shown in the figures, the mapping of idata and odata onto the partitions is depicted below.

Partition Camping (4/4)

- Concurrent blocks will be accessing tiles row-wise in idata, which will be roughly equally distributed amongst partitions.
- However, these blocks will access tiles column-wise in odata, which will typically access global memory through just a few partitions.
- Just as with shared memory, padding would be an option (potentially expensive), but there is a better one . . .

Diagonal block reordering (1/7): figure.

Diagonal block reordering (2/7)

- The key idea is to view the grid under a diagonal coordinate system.
- If blockIdx.x and blockIdx.y represent the diagonal coordinates, then (for block-square matrices) the corresponding cartesian coordinates are given by the following mapping:
  blockIdx_y = blockIdx.x;
  blockIdx_x = (blockIdx.x+blockIdx.y)%gridDim.x;
- One would simply include the previous two lines of code at the beginning of the kernel, and write the kernel assuming the cartesian interpretation of the blockIdx fields, except using blockIdx_x and blockIdx_y in place of blockIdx.x and blockIdx.y, respectively, throughout the kernel.
- This is precisely what is done in the transposeDiagonal kernel hereafter.
Diagonal block reordering (3/7)-(4/7)

__global__ void transposeDiagonal(float *odata, float *idata, int width, int height)
{
  __shared__ float tile[TILE_DIM][TILE_DIM+1];

  int blockIdx_x, blockIdx_y;

  // diagonal reordering
  if (width == height) {
    blockIdx_y = blockIdx.x;
    blockIdx_x = (blockIdx.x+blockIdx.y)%gridDim.x;
  } else {
    int bid = blockIdx.x + gridDim.x*blockIdx.y;
    blockIdx_y = bid%gridDim.y;
    blockIdx_x = ((bid/gridDim.y)+blockIdx_y)%gridDim.x;
  }

  int xIndex = blockIdx_x*TILE_DIM + threadIdx.x;
  int yIndex = blockIdx_y*TILE_DIM + threadIdx.y;
  int index_in = xIndex + (yIndex)*width;

  xIndex = blockIdx_y*TILE_DIM + threadIdx.x;
  yIndex = blockIdx_x*TILE_DIM + threadIdx.y;
  int index_out = xIndex + (yIndex)*height;

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    tile[threadIdx.y+i][threadIdx.x] = idata[index_in+i*width];
  }

  __syncthreads();

  for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS) {
    odata[index_out+i*height] = tile[threadIdx.x][threadIdx.y+i];
  }
}

Diagonal block reordering (5/7)-(6/7): figures.

Diagonal block reordering (7/7)

- The bandwidth measured when looping within the kernel over the reads and writes to global memory is within a few percent of the shared memory copy.
- When looping over the kernel, the performance degrades slightly, likely due to the additional computation involved in calculating blockIdx_x and blockIdx_y. However, even with this performance degradation the diagonal transpose has over four times the bandwidth of the other complete transposes.
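To make the reindexing concrete, the following small host program (ours, not from the slides) prints the cartesian block visited by each diagonal block ID for a square 4x4 grid; blocks with consecutive IDs now walk down a diagonal instead of along a row, so their odata tiles fall in different partitions:

// Sketch: cartesian coordinates visited by the diagonal ordering for a
// square grid of 4x4 blocks (gridDim.x == gridDim.y == 4).
#include <cstdio>

int main()
{
    const int gridDimX = 4;
    for (int by = 0; by < gridDimX; ++by) {         // "diagonal" blockIdx.y
        for (int bx = 0; bx < gridDimX; ++bx) {     // "diagonal" blockIdx.x
            int blockIdx_y = bx;                    // same mapping as the kernel
            int blockIdx_x = (bx + by) % gridDimX;
            printf("diag (%d,%d) -> cartesian (%d,%d)\n", bx, by, blockIdx_x, blockIdx_y);
        }
    }
    return 0;
}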
Outline

1. Optimizing Matrix Transpose with CUDA
2. Performance Optimization
3. Parallel Reduction
4. Parallel Scan
5. Exercises
6. Exercises

Four principles

- Expose as much parallelism as possible
- Optimize memory usage for maximum bandwidth
- Maximize occupancy to hide latency
- Optimize instruction usage for maximum throughput

Expose Parallelism

- Structure the algorithm to maximize independent parallelism
- If threads of the same block need to communicate, use shared memory and __syncthreads()
- If threads of different blocks need to communicate, use global memory and split the computation into multiple kernels
- Recall that there is no synchronization mechanism between blocks
- High parallelism is especially important to hide memory latency by overlapping memory accesses with computation
- Take advantage of asynchronous kernel launches by overlapping CPU computations with kernel execution
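A kernel launch returns control to the host immediately, so independent CPU work can proceed while the device computes; a minimal sketch (my_kernel, do_independent_cpu_work, d_data and n are placeholders):

// Sketch: overlap independent CPU work with an asynchronous kernel launch.
void run_overlapped(float *d_data, int n)
{
    dim3 grid(256), threads(256);
    my_kernel<<<grid, threads>>>(d_data, n);  // launch returns immediately
    do_independent_cpu_work();                // CPU work proceeds while the GPU computes
    cudaDeviceSynchronize();                  // block only when the results are needed
}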
Optimize Memory Usage: Basic Strategies

- Processing data is cheaper than moving it around:
  - especially for GPUs, as they devote many more transistors to ALUs than to memory.
- Basic strategies:
  - Maximize use of low-latency, high-bandwidth memory
  - Optimize memory access patterns to maximize bandwidth
  - Leverage parallelism to hide memory latency by overlapping memory accesses with computation as much as possible
  - Write kernels with high arithmetic intensity (ratio of arithmetic operations to memory transactions)
  - Sometimes recompute data rather than cache it
Minimize CPU <-> GPU Data Transfers

- CPU <-> GPU memory bandwidth is much lower than GPU memory bandwidth
- Minimize CPU <-> GPU data transfers by moving more code from the CPU to the GPU
  - even if sometimes that means running kernels with low-parallelism computations,
  - intermediate data structures can be allocated, operated on, and deallocated without ever copying them to CPU memory.
- Group data transfers: one large transfer is much better than many small ones.
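A sketch of the idea (all names are placeholders): chain two kernels on device-resident data and copy only the final result back, instead of round-tripping through the host between stages:

// Sketch: keep intermediate data on the GPU between kernels and copy
// only the final result back.  stage1 and stage2 are placeholder kernels.
void pipeline(const float *h_in, float *h_out, int n)
{
    float *d_in, *d_tmp, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_tmp, n * sizeof(float));   // intermediate: never copied to the CPU
    cudaMalloc(&d_out, n * sizeof(float));

    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);  // one grouped transfer
    stage1<<<(n + 255) / 256, 256>>>(d_tmp, d_in, n);
    stage2<<<(n + 255) / 256, 256>>>(d_out, d_tmp, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_in); cudaFree(d_tmp); cudaFree(d_out);
}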
Optimize Memory Access Patterns

- Effective bandwidth can vary by an order of magnitude depending on the access pattern:
  - Global memory is not cached on G8x.
  - Global memory accesses have high latency: 400-600 clock cycles.
  - Shared memory has low latency: a few clock cycles.
- Optimize access patterns to get:
  - coalesced global memory accesses,
  - shared memory accesses with no or few bank conflicts, and
  - to avoid partition camping.

A Common Programming Strategy

1. Partition data into subsets that fit into shared memory.
2. Handle each data subset with one thread block.
3. Load the subset from global memory to shared memory, using multiple threads to exploit memory-level parallelism.
4. Perform the computation on the subset from shared memory.
5. Copy the result from shared memory back to global memory.
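The coalesced transpose above is one instance of this pattern. As a generic illustration, a skeleton kernel following the five steps could look like the sketch below (TILE and process() are placeholders; a real kernel would typically combine several elements of the tile in step 4):

// Sketch: generic tiled kernel following the five-step strategy,
// launched with TILE x TILE threads per block.
#define TILE 32

__device__ float process(float x) { return 2.0f * x; }   // placeholder computation

__global__ void tiled_kernel(float *out, const float *in, int width)
{
    // Steps 1-2: this block owns one TILE x TILE subset of the data.
    __shared__ float tile[TILE][TILE];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;

    // Step 3: cooperative, coalesced load from global to shared memory.
    tile[threadIdx.y][threadIdx.x] = in[y * width + x];
    __syncthreads();

    // Step 4: compute on the subset held in shared memory.
    float result = process(tile[threadIdx.y][threadIdx.x]);

    // Step 5: write the result back to global memory.
    out[y * width + x] = result;
}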
A Common Programming Strategy

- Carefully partition data according to access patterns:
- if read only, use __constant__ memory (fast);
- for read/write access within a tile, use __shared__ memory (fast);
- for read/write scalar access within a thread, use registers (fast);
- for read/write inputs/results allocated with cudaMalloc, use global memory (slow).

Parallel reduction: presentation

- Common and important data-parallel primitive.
- Easy to implement in CUDA, but hard to get right.
- Serves as a great optimization example.
- This section is based on slides and technical reports by Mark Harris (NVIDIA).
Parallel reduction: challenges

- One needs to be able to use multiple thread blocks:
  - to process very large arrays,
  - to keep all multiprocessors on the GPU busy,
  - to have each thread block reduce a portion of the array.
- But how do we communicate partial results between thread blocks?

Parallel reduction: CUDA implementation strategy

- We decompose the computation into multiple kernel invocations.
- For this problem of parallel reduction, all kernels are in fact the same code.
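A host-side sketch of this strategy (names are ours; reduce_kernel stands for any of the reduction kernels in this section, such as reduce0 below): each pass produces one partial sum per block, and the same kernel is launched again on those partial sums until a single value remains.

// Sketch: multi-pass reduction driver.  d_a initially holds the n inputs,
// d_b is scratch space of the same size.  For simplicity, assumes n stays
// a multiple of 'threads' at every pass (reduce0 performs no bounds check).
int *reduce_on_device(int *d_a, int *d_b, int n, int threads)
{
    while (n > 1) {
        int blocks  = n / threads;
        size_t smem = threads * sizeof(int);           // dynamic shared memory for sdata[]
        reduce_kernel<<<blocks, threads, smem>>>(d_a, d_b);
        int *tmp = d_a; d_a = d_b; d_b = tmp;          // partial sums feed the next pass
        n = blocks;
    }
    return d_a;    // element 0 of the returned buffer holds the result
}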
ë 384 × 1800 / 8 = 86.4 GB/s.

Parallel reduction: interleaved addressing (1/2)
__global__ void reduce0(int *g_idata, int *g_odata) {
  extern __shared__ int sdata[];
  // each thread loads one element from global to shared mem
  unsigned int tid = threadIdx.x;
  unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
  sdata[tid] = g_idata[i];
  __syncthreads();
  // do reduction in shared mem
  for (unsigned int s = 1; s < blockDim.x; s *= 2) {
    if (tid % (2*s) == 0) {
      sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
  }
  // write result for this block to global mem
  if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}

Parallel reduction: interleaved addressing (2/2) (figure not recovered)
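The slides mention decomposing the reduction into multiple kernel invocations but do not show the host-side driver. A minimal sketch of such a driver, assumed rather than taken from the slides (it reuses reduce0 as defined above and supposes that both n and the block size are powers of two, so every round divides evenly):

// Hypothetical host loop: keep reducing the partial sums until one value remains.
void reduce_on_device(int *d_in, int *d_out, int n, int threads) {
  while (n > 1) {
    int t = (n < threads) ? n : threads;           // shrink the block for the last rounds
    int blocks = n / t;                            // exact because of the power-of-two assumption
    reduce0<<<blocks, t, t * sizeof(int)>>>(d_in, d_out);
    int *tmp = d_in; d_in = d_out; d_out = tmp;    // the partial sums feed the next round
    n = blocks;
  }
  // after the final swap, the total sum sits in d_in[0]
}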
Parallel reduction: branch divergence in interleaved addressing (1/2)
∎ The main performance concern with branching is divergence.
ë Branch divergence occurs when threads in the same warp take different paths upon a conditional branch.
ë Penalty: the different execution paths are likely to be serialized (at compile time).
∎ One should be careful when the branch condition is a function of the thread ID.
ë Below, the branch granularity is less than the warp size:
if (threadIdx.x > 2) { }
ë Below, the branch granularity is a whole multiple of the warp size:
if (threadIdx.x / WARP_SIZE > 2) { }

Parallel reduction: branch divergence in interleaved addressing (2/2) (figure not recovered)
Parallel reduction: non-divergent interleaved addressing (figure not recovered)
Parallel reduction: shared memory bank conflicts (figure not recovered)
Parallel reduction: sequential addressing (1/2) (code not recovered; a sketch follows the instruction-bottleneck slide below)
Parallel reduction: sequential addressing (2/2) (figure not recovered)
Parallel reduction: performance for 4M element reduction (figure not recovered)
Parallel reduction: idle threads (1/2) (figure not recovered)
Parallel reduction: idle threads (2/2) (figure not recovered)
Parallel reduction: instruction bottlenecks (1/2) (figure not recovered)

Parallel reduction: instruction bottlenecks (2/2)
∎ At 17 GB/s, we're far from bandwidth bound:
ë and we know reduction has low arithmetic intensity.
∎ Therefore a likely bottleneck is instruction overhead:
ë auxiliary instructions that are not loads, stores, or arithmetic for the core computation,
ë in other words: address arithmetic and loop overhead.
∎ Strategy: unroll loops.
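The code for the sequential-addressing slides above did not survive extraction. A minimal sketch of that variant, assuming the same setup as reduce0 (one element loaded per thread, block size a power of two); this is an assumed reconstruction, not the original slide code:

// Sequential addressing: each step halves a contiguous range of active threads,
// so whole warps drop out together (little divergence) and the accesses are
// free of shared-memory bank conflicts.
__global__ void reduce_sequential(int *g_idata, int *g_odata) {
  extern __shared__ int sdata[];
  unsigned int tid = threadIdx.x;
  unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
  sdata[tid] = g_idata[i];
  __syncthreads();
  for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
    if (tid < s)
      sdata[tid] += sdata[tid + s];
    __syncthreads();
  }
  if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}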
Parallel reduction: unrolling the last warp (1/3)
∎ As the reduction proceeds, the number of active threads decreases;
ë when s ≤ 32, we have only one warp left.
∎ Instructions are SIMD synchronous within a warp.
∎ That implies that when s ≤ 32:
ë we do not need to use __syncthreads(),
ë we do not need to perform the test if (tid < s), because it doesn't save any work.
∎ Let's unroll the last 6 iterations of the inner loop!

Parallel reduction: unrolling the last warp (2/3) (code not recovered; see the sketch below)
Parallel reduction: unrolling the last warp (3/3) (figure not recovered)
Parallel reduction: complete unrolling (1/2) (code not recovered)
Parallel reduction: complete unrolling (2/2) (figure not recovered)
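The unrolled code on slides (2/3) and (3/3) was not recovered. A minimal sketch of the idea, assuming a block size of at least 64; the volatile qualifier keeps the compiler from caching the shared-memory values in registers, which this warp-synchronous style relies on (on current GPUs one would use __syncwarp() or warp shuffles instead):

__device__ void warpReduce(volatile int *sdata, unsigned int tid) {
  // last warp: no __syncthreads() and no (tid < s) test needed
  sdata[tid] += sdata[tid + 32];
  sdata[tid] += sdata[tid + 16];
  sdata[tid] += sdata[tid + 8];
  sdata[tid] += sdata[tid + 4];
  sdata[tid] += sdata[tid + 2];
  sdata[tid] += sdata[tid + 1];
}

// corresponding inner loop of the reduction kernel, stopping at s = 32:
//   for (unsigned int s = blockDim.x / 2; s > 32; s >>= 1) {
//     if (tid < s) sdata[tid] += sdata[tid + s];
//     __syncthreads();
//   }
//   if (tid < 32) warpReduce(sdata, tid);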
Parallel reduction: coarsening the base case (1/6)
∎ The work and span of the whole reduction process are Θ(n) and Θ(log(n)), respectively.
∎ If we allocate Θ(n) threads (for each kernel call) we necessarily do Θ(n log(n)) work in total, that is, a significant overhead factor.
∎ Therefore, we should allocate Θ(n/log(n)) threads, with each thread doing Θ(log(n)) work.
∎ On the G80, the best performance is obtained with 64-256 blocks of 128 threads, and 1024-4096 elements per thread.

Parallel reduction: coarsening the base case (2/6) (figure not recovered)
Parallel reduction: coarsening the base case (3/6) (figure not recovered)
Parallel reduction: coarsening the base case (4/6) (figure not recovered)
Parallel reduction: coarsening the base case (5/6) (figure not recovered)
Parallel reduction: coarsening the base case (6/6) (figure not recovered)

Outline
1. Optimizing Matrix Transpose with CUDA
2. Performance Optimization
3. Parallel Reduction
4. Parallel Scan
5. Exercises
6. Exercises

Parallel scan: presentation
∎ Another common and important data parallel primitive.
∎ This problem seems inherently sequential, but there is an efficient parallel algorithm.
∎ Applications: sorting, lexical analysis, string comparison, polynomial evaluation, stream compaction, building histograms and data structures (graphs, trees, etc.) in parallel.
Parallel scan: definitions
∎ Let S be a set, and let + : S × S → S be an associative operation on S with 0 as identity. Let A[0⋯n−1] be an array of n elements of S.
∎ The all-prefixes-sum or inclusive scan of A computes the array B of n elements of S defined by B[i] = A[0] if i = 0, and B[i] = B[i−1] + A[i] if 0 < i < n.
∎ The exclusive scan of A computes the array C of n elements of S defined by C[i] = 0 if i = 0, and C[i] = C[i−1] + A[i−1] if 0 < i < n.
∎ For example, with + on integers and A = [3, 1, 7, 0, 4, 1, 6, 3], the inclusive scan is [3, 4, 11, 11, 15, 16, 22, 25] and the exclusive scan is [0, 3, 4, 11, 11, 15, 16, 22].
∎ An exclusive scan can be generated from an inclusive scan by shifting the resulting array right by one element and inserting the identity.
∎ Similarly, an inclusive scan can be generated from an exclusive scan.
∎ We shall focus on exclusive scan.

Parallel scan: sequential algorithm
void scan(float* output, float* input, int length) {
  output[0] = 0; // since this is a prescan (exclusive scan), not an inclusive scan
  for (int j = 1; j < length; ++j) {
    output[j] = input[j-1] + output[j-1];
  }
}

Parallel scan: naive parallel algorithm (1/4)
∎ This algorithm is not work-efficient, since its work is O(n log2(n)). We will fix this issue later.
∎ In addition, it is not suitable for a CUDA implementation as stated: it works in place, which is not feasible for a sufficiently large array requiring several thread blocks.

Parallel scan: naive parallel algorithm (2/4)
∎ In order to realize a CUDA implementation potentially using many thread blocks, one needs to use a double buffer.

Parallel scan: naive parallel algorithm (3/4)
∎ Computing a scan of an array of 8 elements using the naive scan algorithm (figure not recovered).
∎ The CUDA version (next slide) can handle arrays only as large as can be processed by a single thread block running on one GPU multiprocessor.
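The CUDA code on the (4/4) slide did not survive extraction. A minimal double-buffered sketch of the naive (Hillis-Steele) scan for a single block; this is an assumed reconstruction in the spirit of the slides, not the original code. It supposes n equals the block size and that 2*n floats of dynamic shared memory are provided at launch:

__global__ void scan_naive(float *g_odata, float *g_idata, int n) {
  extern __shared__ float temp[];       // 2*n floats: two buffers of n elements
  int thid = threadIdx.x;
  int pout = 0, pin = 1;
  // load input into shared memory, shifted right by one for an exclusive scan
  temp[pout*n + thid] = (thid > 0) ? g_idata[thid - 1] : 0;
  __syncthreads();
  for (int offset = 1; offset < n; offset *= 2) {
    pout = 1 - pout;                    // swap the double-buffer indices
    pin  = 1 - pout;
    if (thid >= offset)
      temp[pout*n + thid] = temp[pin*n + thid] + temp[pin*n + thid - offset];
    else
      temp[pout*n + thid] = temp[pin*n + thid];
    __syncthreads();
  }
  g_odata[thid] = temp[pout*n + thid];  // write results to device memory
}

// example launch for a single block of n threads:
//   scan_naive<<<1, n, 2*n*sizeof(float)>>>(d_out, d_in, n);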
Parallel scan: naive parallel algorithm (4/4) (slide code not recovered; see the sketch above)

Parallel scan: work-efficient parallel algorithm (1/6) (figure not recovered)
Parallel scan: work-efficient parallel algorithm (2/6) (figure not recovered)

Parallel scan: work-efficient parallel algorithm (3/6)
The down-sweep phase, after the up-sweep has stored partial sums in place:
x[n-1] := 0;
for d := log2(n) downto 1 do
    for k from 0 to n-1 by 2^d in parallel do {
        t := x[k + 2^(d-1) - 1];
        x[k + 2^(d-1) - 1] := x[k + 2^d - 1];
        x[k + 2^d - 1] := t + x[k + 2^d - 1];
    }

Parallel scan: work-efficient parallel algorithm (4/6) (figure not recovered)
Parallel scan: work-efficient parallel algorithm (5/6) (figure not recovered)
Parallel scan: work-efficient parallel algorithm (6/6) (figure not recovered)

Parallel scan: performance
∎ The performance chart (not recovered) compares the work-efficient, bank-conflict-free scan implemented in CUDA against a sequential scan implemented in C++.
∎ The CUDA scan was executed on an NVIDIA GeForce 8800 GTX GPU, the sequential scan on a single core of an Intel Core Duo Extreme at 2.93 GHz.

Outline
1. Optimizing Matrix Transpose with CUDA
2. Performance Optimization
3. Parallel Reduction
4. Parallel Scan
5. Exercises
6. Exercises

Exercise 1 (1/4)
(1) Write a C function incrementing a float array A of size N.
(2) Write a CUDA kernel incrementing a float array A of size N for a 1D grid, using 1D thread blocks, and assuming that each thread increments one element.
(3) Assuming that each thread block counts 64 threads, write the host code launching the kernel (including memory allocation on the device and host-device data transfers).

Exercise 1 (2/4)
(1) A C function incrementing a float array A of size N:
void increment_Array_On_Host(float* A, int N) {
  int i;
  for (i = 0; i < N; i++)
    A[i] = A[i] + 1.f;
}

Exercise 1 (3/4)
(2) A CUDA kernel incrementing a float array A of size N for a 1D grid, using 1D thread blocks, with each thread incrementing one element:
__global__ void increment_On_Device(float *A, int N) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx < N)
    A[idx] = A[idx] + 1.0f;
}

Exercise 1 (4/4)
(3) Host code launching the kernel with 64-thread blocks (including device memory allocation and host-device data transfers):
float *A_h;
float *A_d;
cudaMalloc((void **) &A_d, sizeof(float)*N);
// Allocate memory on the host for A and initialize A
..................................................
cudaMemcpy(A_d, A_h, sizeof(float)*N, cudaMemcpyHostToDevice);
int bSize = 64;
int nBlocks = N/bSize + (N % bSize == 0 ? 0 : 1);
increment_On_Device<<<nBlocks, bSize>>>(A_d, N);
cudaMemcpy(A_h, A_d, sizeof(float)*N, cudaMemcpyDeviceToHost);
free(A_h);
cudaFree(A_d);

Exercise 2 (1/4)
We recall below the Sieve of Eratosthenes (Python 2):

def eratosthenes_sieve(n):
    # Create a candidate list within which non-primes will be
    # marked as None; only candidates below sqrt(n) need be checked
    candidates = range(n+1)
    fin = int(n**0.5)
    # Loop over the candidates, marking out each multiple.
    for i in xrange(2, fin+1):
        if not candidates[i]:
            continue
        candidates[2*i::i] = [None] * (n//i - 1)
    # Filter out non-primes and return the list.
    return [i for i in candidates[2:] if i]

Write a CUDA kernel implementing the Sieve of Eratosthenes on an input n:
(1) Start with a naive single thread-block kernel not using shared memory;
(2) Then, use shared memory and multiple thread blocks.

Exercise 2 (2/4)
(1) A naive kernel not using shared memory:
__global__ static void Sieve(int *sieve, int sieve_size) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  if (idx > 1) {
    for (int i = idx + idx; i < sieve_size; i += idx)
      sieve[i] = 1;
  }
}
The launching code could be:
cudaMalloc((void**) &device_sieve, sizeof(int) * sieve_size);
Sieve<<<1, sqrt(sieve_size), 0>>>(device_sieve, sieve_size);
But this would be quite inefficient. Why?

Exercise 2 (3/4)
(2) A kernel using shared memory:
__global__ static void Sieve(int *sieve, int sieve_size) {
  int b_x = blockIdx.x;
  int b_w = blockDim.x;
  int t_x = threadIdx.x;
  int offset = b_x * b_w;
  int ix = offset + t_x;
  int t_y = threadIdx.y;
  // copy the segment (tile) to shared memory
  __shared__ int A[b_w];   // note: b_w is not a compile-time constant
  A[t_x] = sieve[ix];
  __syncthreads();
  knocker = t_y;  // this thread knocks down numbers that are multiples
                  // of knocker in the range [offset, offset + b_w)
}
This code is almost correct . . . Let's fix it!

Exercise 2 (4/4)
(2) The kernel using shared memory, continued:
  int knocker = t_y;
  // this thread knocks down numbers that are multiples of knocker
  // in the range [offset, offset + b_w)
  int start = (offset % knocker == 0) ? offset : (offset / knocker + 1) * knocker;
  for (int jx = start; jx < offset + b_w; jx += knocker)
    A[jx - offset] = 1;
  __syncthreads();
  sieve[ix] = A[t_x];
}
This code is almost correct . . . Let's fix it!

Exercise 3 (1/4)
Write a CUDA kernel (and the launching code) implementing the reversal of an input integer array of size n. This reversing process will be out-of-place. As in the previous exercise:
(1) start with a naive kernel not using shared memory;
(2) then develop a kernel using shared memory.
Exercise 3 (2/4)
__global__ void reverseArrayBlock(int *d_out, int *d_in) {
  int inOffset = blockDim.x * blockIdx.x;
  int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
  int in = inOffset + threadIdx.x;
  int out = outOffset + (blockDim.x - 1 - threadIdx.x);
  d_out[out] = d_in[in];
}

int numThreadsPerBlock = 256;
int numBlocks = dimA / numThreadsPerBlock;
dim3 dimGrid(numBlocks);
dim3 dimBlock(numThreadsPerBlock);
reverseArrayBlock<<<dimGrid, dimBlock>>>(d_b, d_a);

Exercise 3 (3/4)
__global__ void reverseArrayBlock(int *d_out, int *d_in) {
  extern __shared__ int s_data[];
  int inOffset = blockDim.x * blockIdx.x;
  int in = inOffset + threadIdx.x;
  // Load one element per thread from device memory and store it
  // *in reversed order* into temporary shared memory
  s_data[blockDim.x - 1 - threadIdx.x] = d_in[in];
  // Block until all threads in the block have written their data to shared mem
  __syncthreads();
  // Write the data from shared memory in forward order,
  // but to the reversed block offset as before
  int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
  int out = outOffset + threadIdx.x;
  d_out[out] = s_data[threadIdx.x];
}

Exercise 3 (4/4)
int numThreadsPerBlock = 256;
int numBlocks = dimA / numThreadsPerBlock;
int sharedMemSize = numThreadsPerBlock * sizeof(int);
// launch kernel
dim3 dimGrid(numBlocks);
dim3 dimBlock(numThreadsPerBlock);
reverseArrayBlock<<<dimGrid, dimBlock, sharedMemSize>>>(d_b, d_a);
AMS 148 Chapter 8: Optimization in CUDA, and Advanced Topics
Steven Reeves

1 Optimizing Data Transfers in CUDA C/C++

In this section we will discuss how to efficiently transfer data between the host and the device. The peak bandwidth between device memory and the GPU is much higher (720 GB/s for the Tesla P100 found in Hummingbird) than the peak bandwidth between host memory and device memory (8 GB/s on a PCIe x16 Generation 2 link). This disparity means that if your implementation requires many data transfers between the GPU and the host, or vice versa, it will greatly hinder your performance. Let us begin with a few general guidelines for host-device data transfers. We wish to:

• Minimize the amount of data transferred between host and device when possible, even if that means running kernels on the GPU that get little or no speed-up compared to running them on the host CPU.
• Use page-locked (or "pinned") memory, which allows higher bandwidth between the host and the device.
• Batch many small transfers into one larger transfer, which performs better because it eliminates most of the per-transfer overhead.
• Overlap data transfers between the host and device with kernel execution and other data transfers where possible.

First let us talk about how to measure time spent in data transfers without modifying source code.

1.1 Measuring Data Transfer Times with nvprof

To measure the time spent during each data transfer, we could record a CUDA event before and after each transfer and use cudaEventElapsedTime() as we have in the past. However, we can retrieve the elapsed transfer time without deploying CUDA events by using nvprof, a command-line CUDA profiler included with the CUDA Toolkit. Let's try the following code example:

Listing 1: Example Code for Profiling
int main() {
  const unsigned int N = 1048576;
  const unsigned int bytes = N * sizeof(int);
  int *h_a = (int*) malloc(bytes);
  int *d_a;
  cudaMalloc((void**)&d_a, bytes);

  memset(h_a, 0, bytes);
  cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
  cudaMemcpy(h_a, d_a, bytes, cudaMemcpyDeviceToHost);
  return 0;
}

To profile this code, we just need to compile it using nvcc, and then run nvprof with the program filename as an argument.

Listing 2: Running nvprof
$ nvcc profile.cu -o profile.exe
$ nvprof ./profile.exe

Using the Citrisdance (NVIDIA K20) server I receive the following output:

==17821== NVPROF is profiling process 17821, command: ./profile.exe
==17821== Profiling application: ./profile.exe
==17821== Profiling result:
Type             Time(%)  Time       Calls  Avg        Min        Max        Name
GPU activities:  51.35%   1.5589 ms  1      1.5589 ms  1.5589 ms  1.5589 ms  [CUDA memcpy DtoH]
                 48.65%   1.4772 ms  1      1.4772 ms  1.4772 ms  1.4772 ms  [CUDA memcpy HtoD]
API calls:       98.80%   480.32 ms  1      480.32 ms  480.32 ms  480.32 ms  cudaMalloc
                  0.93%   4.5109 ms  2      2.2554 ms  1.7877 ms  2.7232 ms  cudaMemcpy
                  0.21%   1.0257 ms  188    5.4550 us  220 ns     205.44 us  cuDeviceGetAttribute
                  0.04%   197.15 us  2      98.574 us  66.692 us  130.46 us  cuDeviceTotalMem
                  0.02%   93.452 us  2      46.726 us  44.398 us  49.054 us  cuDeviceGetName
                  0.00%   6.8620 us  4      1.7150 us  342 ns     5.2240 us  cuDeviceGet
                  0.00%   3.6260 us  3      1.2080 us  347 ns     2.4570 us  cuDeviceGetCount

We see that nvprof gives a full breakdown of the program, including the CUDA API calls and the GPU activities. The majority of the time is spent on memory allocation but, barring this API call, the next most time-consuming operations are the memory transfers. Memory transfers are much more common than allocation in most applications.
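If per-transfer timestamps are more useful than the summary view, nvprof can also print a trace of every GPU activity (standard nvprof usage, shown here against the same example executable):

$ nvprof --print-gpu-trace ./profile.exe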
1.2 Minimizing Data Transfers

We should not use only the GPU execution time of a kernel relative to the corresponding CPU function to decide whether to run the GPU or CPU version. We also need to consider the cost of moving data across the PCIe bus, especially when we are initially porting code to CUDA. Because of the heterogeneous programming model of CUDA (using both the CPU and GPU), code can be ported to the GPU one kernel at a time. In the initial stages of writing CUDA code, data transfers may dominate the overall execution time. It is worthwhile to monitor time spent on data transfer separately from time spent during computation within a kernel. It is easy to use the command-line profiler for this, as demonstrated above. As more of our code is ported to CUDA, we will remove intermediate transfers and decrease the overall execution time correspondingly.

1.3 Pinned Memory

Pageable memory is memory whose contents can be paged in and out between DRAM and a secondary storage device. Host memory allocations are generally pageable. The main motivation for using pinned memory is to perform asynchronous transfers of data from the host to the device. This is accomplished by using the CUDA primitive cudaMemcpyAsync and related functions. Additionally, certain performance benefits come from using pinned (or page-locked) memory even for synchronous transfers. In this section we will give a few examples of how to allocate pinned memory and investigate the features of this type of memory.

1.3.1 About pinned memory

The workings of pageable memory are best described by the post on http://devblogs.nvidia.com, which is paraphrased here. Data allocations on the CPU are pageable by default. The GPU cannot access data directly from pageable host memory, so when a data transfer from pageable host memory to the device is invoked, the CUDA driver first allocates a temporary page-locked, or "pinned", host array. The data is then copied into this pinned array for transfer to the device. An illustration of this process is provided by the NVIDIA blog post:

Figure 1: Regular Data Transfer vs Pinned Data Transfer; NVIDIA Developer Blog

As shown in Figure 1, pinned memory is used as a staging area for transfers between the device and the host. We can avoid the cost of the transfer between pageable and pinned host arrays by directly allocating the host arrays in pinned memory. Allocate pinned host memory in CUDA C/C++ using cudaMallocHost() or cudaHostAlloc(), and free the memory with cudaFreeHost(). It is possible for pinned memory allocation to fail, so we should check for errors using the cudaError_t type. The following code demonstrates allocation of pinned memory with error checking.

cudaError_t status = cudaMallocHost((void**)&h_aPinned, bytes);
if (status != cudaSuccess)
  printf("Error allocating pinned host memory\n");

Data transfers using host pinned memory use the same cudaMemcpy() syntax as transfers with pageable memory. We can use the following "bandwidth test" program to compare pageable and pinned transfer rates.

Listing 3: Bandwidth Test
#include <stdio.h>
#include <assert.h>

// Convenience function for checking CUDA runtime API results.
// Can be wrapped around any runtime API call. No-op in release builds.
inline cudaError_t checkCuda(cudaError_t result)
{
#if defined(DEBUG) || defined(_DEBUG)
  if (result != cudaSuccess) {
    fprintf(stderr, "CUDA Runtime Error: %s\n", cudaGetErrorString(result));
    assert(result == cudaSuccess);
  }
#endif
  return result;
}

void profileCopies(float *h_a, float *h_b, float *d, unsigned int n, const char *desc)
{
  printf("\n%s transfers\n", desc);
  unsigned int bytes = n * sizeof(float);

  // events for timing
  cudaEvent_t startEvent, stopEvent;
  checkCuda( cudaEventCreate(&startEvent) );
  checkCuda( cudaEventCreate(&stopEvent) );

  checkCuda( cudaEventRecord(startEvent, 0) );
  checkCuda( cudaMemcpy(d, h_a, bytes, cudaMemcpyHostToDevice) );
  checkCuda( cudaEventRecord(stopEvent, 0) );
  checkCuda( cudaEventSynchronize(stopEvent) );

  float time;
  checkCuda( cudaEventElapsedTime(&time, startEvent, stopEvent) );
  printf("  Host to Device bandwidth (GB/s): %f\n", bytes * 1e-6 / time);

  checkCuda( cudaEventRecord(startEvent, 0) );
  checkCuda( cudaMemcpy(h_b, d, bytes, cudaMemcpyDeviceToHost) );
  checkCuda( cudaEventRecord(stopEvent, 0) );
  checkCuda( cudaEventSynchronize(stopEvent) );

  checkCuda( cudaEventElapsedTime(&time, startEvent, stopEvent) );
  printf("  Device to Host bandwidth (GB/s): %f\n", bytes * 1e-6 / time);

  for (int i = 0; i < n; ++i) {
    if (h_a[i] != h_b[i]) {
      printf("*** %s transfers failed ***\n", desc);
      break;
    }
  }

  // clean up events
  checkCuda( cudaEventDestroy(startEvent) );
  checkCuda( cudaEventDestroy(stopEvent) );
}

int main()
{
  unsigned int nElements = 4*1024*1024;
  const unsigned int bytes = nElements * sizeof(float);

  // host arrays
  float *h_aPageable, *h_bPageable;
  float *h_aPinned, *h_bPinned;
  // device array
  float *d_a;

  // allocate and initialize
  h_aPageable = (float *) malloc(bytes);                      // host pageable
  h_bPageable = (float *) malloc(bytes);                      // host pageable
  checkCuda( cudaMallocHost((void**)&h_aPinned, bytes) );     // host pinned
  checkCuda( cudaMallocHost((void**)&h_bPinned, bytes) );     // host pinned
  checkCuda( cudaMalloc((void**)&d_a, bytes) );               // device

  for (int i = 0; i < nElements; ++i) h_aPageable[i] = i;
  memcpy(h_aPinned, h_aPageable, bytes);
  memset(h_bPageable, 0, bytes);
  memset(h_bPinned, 0, bytes);

  // output device info and transfer size
  cudaDeviceProp prop;
  checkCuda( cudaGetDeviceProperties(&prop, 0) );
  printf("\nDevice: %s\n", prop.name);
  printf("Transfer size (MB): %d\n", bytes / (1024 * 1024));

  // perform copies and report bandwidth
  profileCopies(h_aPageable, h_bPageable, d_a, nElements, "Pageable");
  profileCopies(h_aPinned, h_bPinned, d_a, nElements, "Pinned");

  printf("\n");

  // cleanup
  cudaFree(d_a);
  cudaFreeHost(h_aPinned);
  cudaFreeHost(h_bPinned);
  free(h_aPageable);
  free(h_bPageable);

  return 0;
}

The data transfer rate can depend on the type of host system (motherboard, CPU, and chipset) as well as the GPU. Running this program on Hummingbird we have the following output:

Listing 4: Output of Bandwidth Test
Device: Tesla P100-PCIE-16GB
Transfer size (MB): 16
Pageable transfers
  Host to Device bandwidth (GB/s): 2.990821
  Device to Host bandwidth (GB/s): 4.364375

Pinned transfers
  Host to Device bandwidth (GB/s): 12.017788
  Device to Host bandwidth (GB/s): 12.726673

On Hummingbird the CPU is an AMD Opteron processor and achieves a decent pageable transfer rate. However, the pinned transfer rate is much more impressive, offering more than 4 times the bandwidth host-to-device and 3 times the bandwidth device-to-host. Note that the pageable transfer rate depends on the speed of the CPU. Faster CPUs will offer better pageable bandwidths, but with modern GPUs the pinned bandwidth will exceed this capability.

A word of warning: you should not over-allocate pinned memory. Doing so can reduce overall system performance because it reduces the amount of physical memory available to the operating system and other programs. How much is too much is difficult to tell a priori, so as with most optimizations, test your code on the systems it will run on to find the optimal performance parameters.

1.3.2 Batching Small Transfers

Due to the overhead associated with each CPU-to-GPU memory transfer, it is better to batch many small transfers together into a single transfer. This is easy to do by using a temporary array, preferably pinned, and packing it with the data to be transferred.

2 CUDA Streams

In the previous section we discussed how to transfer data efficiently between the host and device. In this section, we discuss how to overlap data transfers with computation on the host, computation on the device, and other data transfers between the host and device. Through the use of CUDA streams, overlap between data transfers and other operations can be achieved.

A stream in CUDA is a sequence of operations that execute on the device in the order in which they are issued by the host code. While operations within a stream are guaranteed to execute in the prescribed order, operations in different streams can be interleaved and, when possible, they will run concurrently.

2.1 The default stream

All device operations (kernels and data transfers) in CUDA run in a stream. When no stream is specified, the default stream (also referred to as the "null stream") is used. The default stream is different from other streams because it is a synchronizing stream with respect to the device: no operation in the default stream will begin until all previously issued operations in any stream on the device have completed, and an operation in the default stream must complete before any other operation (in any stream on the device) will begin.

Let us check out a simple code snippet that uses the default stream:

Listing 5: Code Snippet Using the Default Stream
cudaMemcpy(d_a, a, numBytes, cudaMemcpyHostToDevice);
increment<<<1,N>>>(d_a);
cudaMemcpy(a, d_a, numBytes, cudaMemcpyDeviceToHost);

In this snippet, from the perspective of the GPU, all three operations are issued to the same (default) stream and will execute in the order in which they were issued. From the perspective of the CPU, the implicit data transfers are blocking or synchronous transfers, while the kernel launch is asynchronous. Since the host-to-device transfer on the first line is synchronous, the CPU thread will not reach the kernel call on the second line until the host-to-device transfer is complete. Once the kernel is issued, the CPU thread moves to the third line, but the transfer on that line cannot begin until the kernel has completed, due to the device-side order of execution.

The asynchronous behavior of kernel launches from the CPU's perspective makes overlapping device and host computation very easy. Take the following snippet for example:

cudaMemcpy(d_a, a, numBytes, cudaMemcpyHostToDevice);
increment<<<1,N>>>(d_a);
myCpuFunction(b);
cudaMemcpy(a, d_a, numBytes, cudaMemcpyDeviceToHost);

In this code, as soon as the kernel is launched on the device the CPU thread executes myCpuFunction(), overlapping the CPU function with the kernel execution on the GPU.
Whether the host function or the device kernel completes first doesn't affect the subsequent device-to-host memory transfer, which begins only after the kernel has completed. From the perspective of the GPU, nothing has changed from the previous example; the device is completely unaware of myCpuFunction().

2.2 Non-default streams

Streams other than the null stream are declared, created, and destroyed in host code, as in this example:

Listing 6: Streams
cudaStream_t stream1;
cudaError_t result;
result = cudaStreamCreate(&stream1);
result = cudaStreamDestroy(stream1);

In order to issue a data transfer to a non-default CUDA stream, we use the cudaMemcpyAsync() function, which is similar to the cudaMemcpy() function we have been using, but takes the stream as an additional argument:

result = cudaMemcpyAsync(d_a, a, N, cudaMemcpyHostToDevice, stream1);

This function is asynchronous with respect to the host, so control returns to the host thread immediately after the transfer is issued. There are 2D and 3D extensions of this function as well. To issue a kernel to a non-default stream, we specify the stream in question as the fourth argument of the kernel launch configuration:

increment<<<1, N, 0, stream1>>>(d_a);

2.3 Synchronization with streams

Since all operations in non-default streams are asynchronous with respect to the host code, there will be situations that require you to synchronize the host with operations in a stream. The brute-force way to do this is cudaDeviceSynchronize(), which blocks the host thread until all previously issued GPU operations are complete. In most cases this is overkill, and it can hurt performance by stalling both the device and the host thread.

The CUDA stream API has several less severe methods of synchronization. The function cudaStreamSynchronize(stream) blocks the host from proceeding until all previously issued operations on the specified stream have completed. The function cudaStreamQuery(stream) tests whether all operations issued to the specified stream have completed, without blocking host execution. The functions cudaEventSynchronize(event) and cudaEventQuery(event) are similar, except that their result is based on whether a particular event has been recorded. You can also make operations within a single stream wait on a specific event using cudaStreamWaitEvent(stream, event).

2.4 Overlapping Kernel Execution and Data Transfer

Previously we demonstrated how to overlap kernel execution in the default stream with work on the CPU. But our goal in this section is to overlap kernel execution with data transfers. There are several requirements for this to happen:

• The device must be capable of "concurrent copy and execution".
• The kernel execution and the data transfer to be overlapped must both occur in different, non-default streams.
• The host memory involved in the data transfer must be pinned memory.

So we are going to modify the simple code from the previous section (the full code is available on GitHub). In this modified snippet, we break up the array of size N into chunks of streamSize elements. Since the kernel operates independently on all elements, each of the chunks can be processed independently. The number of (non-default) streams used is nStreams = N/streamSize. There are multiple ways to implement the domain decomposition of the data and processing; one way is to loop over all the operations for each chunk.
for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  cudaMemcpyAsync(&d_a[offset], &a[offset], streamBytes, cudaMemcpyHostToDevice, stream[i]);
  kernel<<<streamSize/blockSize, blockSize, 0, stream[i]>>>(d_a, offset);
  cudaMemcpyAsync(&a[offset], &d_a[offset], streamBytes, cudaMemcpyDeviceToHost, stream[i]);
}

Another approach is to batch similar operations together, issuing all the host-to-device transfers first, followed by the kernel launches, and lastly the device-to-host transfers.

for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  cudaMemcpyAsync(&d_a[offset], &a[offset], streamBytes, cudaMemcpyHostToDevice, stream[i]);
}
for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  kernel<<<streamSize/blockSize, blockSize, 0, stream[i]>>>(d_a, offset);
}
for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  cudaMemcpyAsync(&a[offset], &d_a[offset], streamBytes, cudaMemcpyDeviceToHost, stream[i]);
}

Both asynchronous methods shown above yield the correct results, and in both cases dependent operations are issued to the same stream in the order in which they need to be executed. The results can vary depending on the GPU architecture. On citrisdance, a Kepler-based server, I get the following results:

Device : Tesla K20c
Time for sequential transfer and execute (ms): 7.590112
  max error: 1.192093e-07
Time for asynchronous V1 transfer and execute (ms): 3.995456
  max error: 1.192093e-07
Time for asynchronous V2 transfer and execute (ms): 3.975712
  max error: 1.192093e-07

However, on Hummingbird the performance is better:

Device : Tesla P100-PCIE-16GB
Time for sequential transfer and execute (ms): 3.154144
  max error: 1.192093e-07
Time for asynchronous V1 transfer and execute (ms): 1.971200
  max error: 1.192093e-07
Time for asynchronous V2 transfer and execute (ms): 1.959584
  max error: 1.192093e-07

This is mostly due to hardware advances over the years, but also due to changes in compute capability. For devices of compute capability 3.5 or higher, the Hyper-Q feature eliminates the need to tailor the launch order, so either approach yields similar results. However, on older devices, such as the Tesla C1060 (a compute capability 1.3 device), the results are different:

Device : Tesla C1060
Time for sequential transfer and execute (ms): 12.92381
  max error: 2.3841858E-07
Time for asynchronous V1 transfer and execute (ms): 13.63690
  max error: 2.3841858E-07
Time for asynchronous V2 transfer and execute (ms): 8.84588
  max error: 2.3841858E-07

We see that on this device the version-one transfer scheme actually runs slower than the sequentially transferred method. A good way to understand this is that the C1060 has only one copy engine and one kernel engine. The following diagram illustrates the mode of operation on the C1060:

Figure 2: Single Engine, NVIDIA Developer Blogs

whereas on a newer device, say a C2050, which contains two copy engines and one kernel engine, the timelines look more like this:

Figure 3: Multiple Engines, NVIDIA Developer Blogs

The number of engines in these older models dictated the behavior of asynchronous operations. The Hyper-Q firmware allows for more effective grid management. The following illustrations show the profiling of an application without and with Hyper-Q:

Figure 4: Profiling of older multi-engine GPUs without Hyper-Q
Figure 5: Multi-engine GPUs with Hyper-Q

GPUs with Hyper-Q allow the hardware to compact asynchronous launches using either method,
allowing the developer not to worry (as much) about the hardware implementation.

3 Optimizing Calculations

In the previous sections we looked into optimizing memory transactions; in this section we will look into optimizing calculations within kernels. As an example we will follow Mark Harris' Optimizing Parallel Reduction in CUDA presentation and discuss 7 different versions of the reduction kernel. Using these versions we will look at several important optimization strategies, with which we strive to approach the GPU's peak performance.

We need to choose the right metric: FLOP/s for compute-bound kernels, and bandwidth for memory-bound kernels. Reductions have very low arithmetic intensity (1 flop per element loaded), so bandwidth is the appropriate metric. The following code is tested on an NVIDIA Titan XP GPU, which has a theoretical peak bandwidth of 547.6 GB/s. In what follows we will try to design an algorithm that gets close to this theoretical bandwidth. A true measurement is the effective bandwidth

BW_effective = (R_B + W_B) / (t × 10^9),

where R_B is the number of bytes read by the kernel and W_B is the number of bytes written; with t in seconds, dividing by 10^9 gives GB/s.

The reduction that we have seen before uses a method called interleaved addressing.

Listing 7: First Reduction
template <class T>
__global__ void reduce0(T *g_idata, T *g_odata) {
  extern __shared__ T sdata[];
  // each thread loads one element from global to shared mem
  unsigned int tid = threadIdx.x;
  unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
  sdata[tid] = g_idata[i];
  __syncthreads();
  // do reduction in shared mem
  for (unsigned int s = 1; s < blockDim.x; s *= 2) {
    if (tid % (2*s) == 0) {
      sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
  }
  // write result for this block to global mem
  if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}

The if statement in this kernel is highly divergent: threads within a warp take different branches, many launched threads sit idle, and the result can be very poor performance. We will test this code, and the kernels that follow, using 1 million integers.

Upon execution of this first reduction, the elapsed time is 0.132096 ms, with an effective bandwidth of 31.814 GB/s. This is far from the theoretical bandwidth. Let us change this kernel to remove the divergent branching.

Listing 8: Reduction without divergent branching
template <class T>
__global__ void reduce1(T *d_out, const T *d_in) {
  // sdata is allocated in the kernel call via dynamic shared memory
  extern __shared__ T sdata[];
  int myId = threadIdx.x + blockDim.x*blockIdx.x;
  int tid = threadIdx.x;
  // load shared mem from global mem
  sdata[tid] = d_in[myId];
  __syncthreads();            // always sync before using sdata
  // do reduction over shared memory
  for (int s = 1; s < blockDim.x; s *= 2) {
    int index = 2*s*tid;      // strided indexing!
    if (index < blockDim.x) {
      sdata[index] += sdata[index + s];
    }
    __syncthreads();          // make sure all additions are finished
  }
  // only tid 0 writes out the result!
  if (tid == 0) {
    d_out[blockIdx.x] = sdata[0];
  }
}

All we changed in this code was the divergent inner loop: we moved to a strided indexing scheme to create non-divergent branches. Upon execution of this kernel, we find that the elapsed time is now 0.071264 ms with an effective bandwidth of 58.9709 GB/s, a 1.85x speed-up over the original. There is an additional problem here, though: the strided indexing induces shared memory bank conflicts.
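To see why, here is a small worked example (assuming the usual organization of 32 banks of 4-byte words, so bank = index mod 32): with s = 2, thread tid accesses sdata[4*tid] and sdata[4*tid + 2], so threads 0 and 8 both hit bank 0 (indices 0 and 32), threads 1 and 9 both hit bank 4, and so on, giving multi-way conflicts within a warp; by s = 16 every active thread accesses an index of the form 32*tid, and all of them land in bank 0.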
We will change our looping scheme to use sequential addressing, which eliminates the shared memory bank conflicts. To do this we change the inner loop to the following, all else staying the same:

Listing 9: Reduce with sequential addressing
for (unsigned int s = blockDim.x/2; s > 0; s >>= 1) {
  if (tid < s) {
    sdata[tid] += sdata[tid + s];
  }
  __syncthreads();   // make sure all additions are finished
}

Executing this kernel gives the following report: elapsed time 0.062816 ms, effective bandwidth 66.9017 GB/s, a 1.13x speed-up over the previous kernel. The problem with this implementation is that there are many idle threads: we use only half of the threads in the thread block on the first iteration of the loop. To address this, we halve the number of blocks used and change the way we load into shared memory: each thread performs two loads and does the first addition while loading into shared memory. So we change the start of the kernel to:

Listing 10: First Add During Load
int myId = threadIdx.x + (blockDim.x*2)*blockIdx.x;
int tid = threadIdx.x;
// load shared mem from global mem, adding two elements on the way in
sdata[tid] = d_in[myId] + d_in[myId + blockDim.x];
__syncthreads();   // always sync before using sdata

Upon execution, we reduce the elapsed time to 0.034528 ms and increase the bandwidth to 121.713 GB/s, a speed-up of 1.82x.

At 121 GB/s we are still far from the bandwidth upper bound, so there is likely a bottleneck in instruction overhead: ancillary instructions that are not loads, stores, or arithmetic for the core computation, in this case address arithmetic and loop overhead. Our strategy for mitigating this will be to unroll loops.

In the next kernel we unroll the last warp. That is, we stop the loop at s = 32 (running it only while s > 32). Note that this saves useless work in every warp, not just the last one: without the unroll, every warp executes all iterations of the for loop and the if statement.

Listing 11: Unrolling the last warp
for (unsigned int s = blockDim.x/2; s > 32; s >>= 1) {
  if (tid < s) {
    sdata[tid] += sdata[tid + s];
  }
  __syncthreads();   // make sure all additions are finished
}
if (tid < 32) {
  sdata[tid] += sdata[tid + 32];
  sdata[tid] += sdata[tid + 16];
  sdata[tid] += sdata[tid + 8];
  sdata[tid] += sdata[tid + 4];
  sdata[tid] += sdata[tid + 2];
  sdata[tid] += sdata[tid + 1];
}

(For this warp-synchronous code to be safe, sdata must be accessed through a volatile pointer, as in Harris' original version; on newer GPUs one would use __syncwarp() or warp shuffles instead.)

The effect of unrolling the last warp is fairly noticeable: the kernel time drops to 0.023936 ms with a bandwidth of 175.572 GB/s, a speed-up of 1.44x over the last kernel.

Now, if we know the number of iterations at compile time, we can completely unroll the reduction. Luckily, the block size is limited by the GPU to 1024 threads, and we are sticking to powers of 2 for block sizes. So we can easily unroll for a fixed block size, using a preprocessor directive to define blockSize. Using this value, all if statements involving blockSize will be evaluated at compile time.
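For instance (an assumed sketch, not taken from the notes; the kernel name reduce5 is hypothetical), the block size can be fixed with a preprocessor definition so that the comparisons in the next listing disappear at compile time:

#define blockSize 1024   // fixed, power-of-two block size known to the compiler

// launch with the matching configuration and dynamic shared memory:
// reduce5<<<numBlocks, blockSize, blockSize * sizeof(int)>>>(d_out, d_in);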
We can therefore replace the loop from Listing 11 with the following:

Listing 12: Unrolling all the loops
if (blockSize >= 512) {
  if (tid < 256) { sdata[tid] += sdata[tid + 256]; }
  __syncthreads();
}
if (blockSize >= 256) {
  if (tid < 128) { sdata[tid] += sdata[tid + 128]; }
  __syncthreads();
}
if (blockSize >= 128) {
  if (tid < 64) { sdata[tid] += sdata[tid + 64]; }
  __syncthreads();
}
if (tid < 32) {
  sdata[tid] += sdata[tid + 32];
  sdata[tid] += sdata[tid + 16];
  sdata[tid] += sdata[tid + 8];
  sdata[tid] += sdata[tid + 4];
  sdata[tid] += sdata[tid + 2];
  sdata[tid] += sdata[tid + 1];
}

Using this method we shave the time down to 0.02192 ms and raise the bandwidth to 191.72 GB/s, a moderate speed-up of about 1.1x.

Before getting to the last optimization, let's consider the complexity of the reduction algorithm. There are log2(N) parallel steps, where step S performs N/2^S independent operations. For N = 2^D, the reduction algorithm performs

$\sum_{S=1}^{D} 2^{D-S} = N - 1$

operations, making it work-efficient. With P threads operating in parallel, the time complexity of the reduction is O(N/P + log2(N)). Now we need to think about cost: the cost of a parallel algorithm is the number of processors times the time complexity. If we allocate N processors, the cost is O(N log N), which is not cost-efficient. Brent's theorem suggests that we use O(N/log(N)) threads, each doing O(log(N)) sequential work. All O(N/log(N)) threads then cooperate for O(log N) steps, resulting in O(N) cost. This is called algorithm cascading, and in practice it can lead to significant speed-ups.

To cascade the reduction, we combine sequential and parallel reduction: each thread loads and sums multiple elements into shared memory, and then we perform the tree-based reduction in shared memory. Brent's theorem suggests that each thread should sum O(log(N)) elements, but it can be beneficial to push this further and hide latency with even more work per thread. We will use 32 elements per thread. The changes in this last reduction are almost entirely in the load / serial-reduce phase:

Listing 13: Algorithm Cascading
int myId = threadIdx.x + (blockDim.x*2)*blockIdx.x;
int tid = threadIdx.x;
int gridSize = blockDim.x*2*gridDim.x;
sdata[tid] = 0;
// load shared mem from global mem, striding over the whole input
while (myId < n) {
  sdata[tid] += d_in[myId] + d_in[myId + blockDim.x];
  myId += gridSize;
}
__syncthreads();

Then the kernel launch becomes:

reduce6<<<dimGrid6, dimBlock, 1024*sizeof(int)>>>(reduced, array, 32);

Using algorithm cascading we reduce the elapsed time to 0.008192 ms, offering 513 GB/s of bandwidth and a 2.68x speed-up.

Kernel     Time (2^20 integers)   Bandwidth        Step Speed-Up   Cumulative Speed-Up
Kernel 1   0.132096 ms            31.814 GB/s      -               -
Kernel 2   0.071264 ms            58.8198 GB/s     1.853x          1.853x
Kernel 3   0.062816 ms            66.9017 GB/s     1.134x          2.102x
Kernel 4   0.034528 ms            121.713 GB/s     1.819x          3.826x
Kernel 5   0.023936 ms            175.572 GB/s     1.442x          5.519x
Kernel 6   0.021920 ms            191.720 GB/s     1.092x          6.026x
Kernel 7   0.008192 ms            513.001 GB/s     2.676x          16.13x
Table 1: Performance of the reduction at each optimization step

So, putting it all together, we achieve a speed-up of over 16 times. Here is an interesting observation:

• Algorithmic optimizations: by changing the addressing and using algorithm cascading we achieved a 10.23x speed-up collectively.
• Coding optimizations: by unrolling the loops we only achieved a 1.58x speed-up collectively.
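As a sanity check on the bandwidth numbers in Table 1 (assuming the same 2^20-integer input): kernel 7 reads about 2^20 × 4 B ≈ 4.19 MB and writes only a handful of per-block partial sums, so BW_effective ≈ 4.19 × 10^6 / (8.192 × 10^-6 × 10^9) ≈ 512 GB/s, which matches the ~513 GB/s reported in the table.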
So a good rule of thumb is to optimize your algorithm first, and then optimize your code, for example by unrolling loops.

In conclusion: to fully optimize CUDA code, you should understand CUDA performance characteristics, chiefly memory coalescing, divergent branching, bank conflicts, and latency hiding. Consider the algorithm that you are programming and ascertain whether it is compute-limited or bandwidth-limited. Using parallel algorithm complexity theory, we found how to cascade the algorithm, allowing for quite a substantial optimization. Identify the bottlenecks in your algorithm, as we did with the memory and instruction overhead. Finally, be sure to optimize your algorithm first, and then optimize your code.
Fundamental CUDA Optimization
NVIDIA Corporation

Outline
- Fermi/Kepler architecture
- Kernel optimizations: launch configuration, global memory throughput, shared memory access, instruction throughput / control flow
- Optimization of CPU-GPU interaction: maximizing PCIe throughput, overlapping kernel execution with memory copies
- Most concepts in this presentation apply to any language or API on NVIDIA GPUs

20-Series Architecture (Fermi)
- 512 Scalar Processor (SP) cores execute parallel thread instructions
- 16 Streaming Multiprocessors (SMs), each containing 32 scalar processors
- 32 fp32 / int32 ops per clock, 16 fp64 ops per clock, 4 Special Function Units (SFUs) per SM
- Shared register file (128 KB); 48 KB / 16 KB shared memory; 16 KB / 48 KB L1 data cache

Kepler cc 3.5 SM (GK110)
- "SMX" (enhanced SM): 192 SP units ("cores"), 64 DP units, LD/ST units
- 4 warp schedulers, each dual-issue capable
- K20: 13 SMXs, 5 GB; K20X: 14 SMXs, 6 GB; K40: 15 SMXs, 12 GB

Execution Model (software to hardware)
- Threads are executed by scalar processors
- Thread blocks are executed on multiprocessors; thread blocks do not migrate
- Several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources (shared memory and register file)
- A kernel is launched as a grid of thread blocks

Warps
- A thread block consists of 32-thread warps
- A warp is executed physically in parallel (SIMD) on a multiprocessor

Memory Architecture
- (diagram not recovered) Host side: CPU, chipset, DRAM. Device side: DRAM holding global, constant, texture, and local memory; multiprocessors with registers and shared memory; constant and texture caches; L1 / L2 cache

Launch Configuration
- Key to understanding: instructions are issued in order; a thread stalls when one of its operands isn't ready; a memory read by itself doesn't stall execution
- Latency is hidden by switching threads: GMEM latency is 400-800 cycles, arithmetic latency is 18-22 cycles
- How many threads/threadblocks to launch? Conclusion: need enough threads to hide latency
- Hiding arithmetic latency: need ~18 warps (576 threads) per SM
- Alternatively, latency can be hidden with independent instructions from the same warp; for example, if an instruction never depends on the output of the preceding instruction, only 9 warps are needed, etc.
- Maximizing global memory throughput depends on the access pattern and word size: you need enough memory transactions in flight to saturate the bus (independent loads and stores from the same thread, loads and stores from different threads); larger word sizes can also help (float2 is twice the transactions of float, for example)

Maximizing Memory Throughput
- Experiment: increment of an array of 64M elements; two accesses per thread (load then store); the two accesses are dependent, so really 1 access per thread at a time
- Tesla C2050, ECC on, theoretical bandwidth: ~120 GB/s
- Several independent smaller accesses have the same effect as one larger one.
For example: Four 32-bit ~= one 128-bit Launch Configuration: Summary Need enough total threads to keep GPU busy Typically, you’d like 512+ threads per SM More if processing one fp32 element per thread Of course, exceptions exist Threadblock configuration Threads per block should be a multiple of warp size (32) SM can concurrently execute up to 8 thread blocks Really small thread blocks prevent achieving good occupancy Really large thread blocks are less flexible I generally use 128-256 threads/block, but use whatever is best for the application For more details: Vasily Volkov’s GTC2010 talk “Better Performance at Lower Occupancy” (http://www.gputechconf.com/page/gtc-on-demand.html#session2238) Global Memory Throughput Memory Hierarchy Review Local storage Each thread has own local storage Mostly registers (managed by the compiler) Shared memory / L1 Program configurable: 16KB shared / 48 KB L1 OR 48KB shared / 16KB L1 Shared memory is accessible by the threads in the same threadblock Very low latency Very high throughput: 1+ TB/s aggregate L2 All accesses to global memory go through L2, including copies to/from CPU host Global memory Accessible by all threads as well as host (CPU) High latency (400-800 cycles) Throughput: up to 177 GB/s Memory Hierarchy Review L2 Global Memory Registers L1 SM-N SM-N SMEM Registers L1 SM-0 SM-0 SMEM Registers L1 SM-1 SM-1 SMEM GMEM Operations Two types of loads: Caching Default mode Attempts to hit in L1, then L2, then GMEM Load granularity is 128-byte line Non-caching Compile with –Xptxas –dlcm=cg option to nvcc Attempts to hit in L2, then GMEM – Do not hit in L1, invalidate the line if it’s in L1 already Load granularity is 32-bytes Stores: Invalidate L1, write-back for L2 Load Operation Memory operations are issued per warp (32 threads) Just like all other instructions Operation: Threads in a warp provide memory addresses Determine which lines/segments are needed Request the needed lines/segments Caching Load Warp requests 32 aligned, consecutive 4-byte words Addresses fall within 1 cache-line Warp needs 128 bytes 128 bytes move across the bus on a miss Bus utilization: 100% ... addresses from a warp 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses 0 Non-caching Load Warp requests 32 aligned, consecutive 4-byte words Addresses fall within 4 segments Warp needs 128 bytes 128 bytes move across the bus on a miss Bus utilization: 100% ... addresses from a warp 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses 0 Caching Load Warp requests 32 aligned, permuted 4-byte words Addresses fall within 1 cache-line Warp needs 128 bytes 128 bytes move across the bus on a miss Bus utilization: 100% ... 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses addresses from a warp 0 Non-caching Load Warp requests 32 aligned, permuted 4-byte words Addresses fall within 4 segments Warp needs 128 bytes 128 bytes move across the bus on a miss Bus utilization: 100% ... 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses addresses from a warp 0 Caching Load Warp requests 32 misaligned, consecutive 4-byte words Addresses fall within 2 cache-lines Warp needs 128 bytes 256 bytes move across the bus on misses Bus utilization: 50% 96 192 128 160 224 288 256 ... 
addresses from a warp 32 64 0 352 320 384 448 416 Memory addresses Non-caching Load Warp requests 32 misaligned, consecutive 4-byte words Addresses fall within at most 5 segments Warp needs 128 bytes 160 bytes move across the bus on misses Bus utilization: at least 80% Some misaligned patterns will fall within 4 segments, so 100% utilization 96 192 128 160 224 288 256 ... addresses from a warp 32 64 0 352 320 384 448 416 Memory addresses Caching Load All threads in a warp request the same 4-byte word Addresses fall within a single cache-line Warp needs 4 bytes 128 bytes move across the bus on a miss Bus utilization: 3.125% ... addresses from a warp 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses 0 Non-caching Load All threads in a warp request the same 4-byte word Addresses fall within a single segment Warp needs 4 bytes 32 bytes move across the bus on a miss Bus utilization: 12.5% ... addresses from a warp 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses 0 Caching Load Warp requests 32 scattered 4-byte words Addresses fall within N cache-lines Warp needs 128 bytes N*128 bytes move across the bus on a miss Bus utilization: 128 / (N*128) ... addresses from a warp 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses 0 Non-caching Load Warp requests 32 scattered 4-byte words Addresses fall within N segments Warp needs 128 bytes N*32 bytes move across the bus on a miss Bus utilization: 128 / (N*32) ... addresses from a warp 96 192 128 160 224 288 256 32 64 352 320 384 448 416 Memory addresses 0 Impact of Address Alignment Warps should access aligned regions for maximum memory throughput L1 can help for misaligned loads if several warps are accessing a contiguous region ECC further significantly reduces misaligned store throughput Experiment: – Copy 16MB of floats – 256 threads/block Greatest throughput drop: – CA loads: 15% – CG loads: 32% GMEM Optimization Guidelines Strive for perfect coalescing Align starting address (may require padding) A warp should access within a contiguous region Have enough concurrent accesses to saturate the bus Process several elements per thread Multiple loads get pipelined Indexing calculations can often be reused Launch enough threads to maximize throughput Latency is hidden by switching threads (warps) Try L1 and caching configurations to see which one works best Caching vs non-caching loads (compiler option) 16KB vs 48KB L1 (CUDA call) Shared Memory Shared Memory Uses: Inter-thread communication within a block Cache data to reduce redundant global memory accesses Use it to improve global memory access patterns Organization: 32 banks, 4-byte wide banks Successive 4-byte words belong to different banks Performance: 4 bytes per bank per 2 clocks per multiprocessor smem accesses are issued per 32 threads (warp) serialization: if N threads of 32 access different 4-byte words in the same bank, N accesses are executed serially multicast: N threads access the same word in one fetch Could be different bytes within the same word Bank Addressing Examples No Bank Conflicts No Bank Conflicts Bank 31 Bank 7 Bank 6 Bank 5 Bank 4 Bank 3 Bank 2 Bank 1 Bank 0 Thread 31 Thread 7 Thread 6 Thread 5 Thread 4 Thread 3 Thread 2 Thread 1 Thread 0 Bank 31 Bank 7 Bank 6 Bank 5 Bank 4 Bank 3 Bank 2 Bank 1 Bank 0 Thread 31 Thread 7 Thread 6 Thread 5 Thread 4 Thread 3 Thread 2 Thread 1 Thread 0 Bank Addressing Examples 2-way Bank Conflicts 8-way Bank Conflicts Thread 31 Thread 30 Thread 29 Thread 28 Thread 4 Thread 3 Thread 2 
Thread 1 Thread 0 Bank 31 Bank 7 Bank 6 Bank 5 Bank 4 Bank 3 Bank 2 Bank 1 Bank 0 Thread 31 Thread 7 Thread 6 Thread 5 Thread 4 Thread 3 Thread 2 Thread 1 Thread 0 Bank 9 Bank 8 Bank 31 Bank 7 Bank 2 Bank 1 Bank 0 x8 x8 Shared Memory: Avoiding Bank Conflicts 32x32 SMEM array Warp accesses a column: 32-way bank conflicts (threads in a warp access the same bank) 31 2 1 0 31 2 1 0 31 2 1 0 warps: 0 1 2 31 Bank 0 Bank 1 … Bank 31 2 0 1 31 Shared Memory: Avoiding Bank Conflicts Add a column for padding: 32x33 SMEM array Warp accesses a column: 32 different banks, no bank conflicts 31 2 1 0 31 2 1 0 31 2 1 0 warps: 0 1 2 31 padding Bank 0 Bank 1 … Bank 31 31 2 0 1 Instruction Throughput & Control Flow Runtime Math Library and Intrinsics Two types of runtime math library functions __func(): many map directly to hardware ISA Fast but lower accuracy (see CUDA Programming Guide for full details) Examples: __sinf(x), __expf(x), __powf(x, y) func(): compile to multiple instructions Slower but higher accuracy (5 ulp or less) Examples: sin(x), exp(x), pow(x, y) A number of additional intrinsics: __sincosf(), __frcp_rz(), ... Explicit IEEE rounding modes (rz,rn,ru,rd) Control Flow Instructions are issued per 32 threads (warp) Divergent branches: Threads within a single warp take different paths if-else, ... Different execution paths within a warp are serialized Different warps can execute different code with no impact on performance Avoid diverging within a warp Example with divergence: if (threadIdx.x > 2) {...} else {...} Branch granularity < warp size Example without divergence: if (threadIdx.x / WARP_SIZE > 2) {...} else {...} Branch granularity is a whole multiple of warp size CPU-GPU Interaction Pinned (non-pageable) memory Pinned memory enables: faster PCIe copies memcopies asynchronous with CPU memcopies asynchronous with GPU Usage cudaHostAlloc / cudaFreeHost instead of malloc / free cudaHostRegister / cudaHostUnregister pin regular memory after allocation Implication: pinned memory is essentially removed from host virtual memory Streams and Async API Default API: Kernel launches are asynchronous with CPU Memcopies (D2H, H2D) block CPU thread CUDA calls are serialized by the driver Streams and async functions provide: Memcopies (D2H, H2D) asynchronous with CPU Ability to concurrently execute a kernel and a memcopy Stream = sequence of operations that execute in issue-order on GPU Operations from different streams may be interleaved A kernel and memcopy from different streams can be overlapped Overlap kernel and memory copy Requirements: D2H or H2D memcopy from pinned memory Kernel and memcopy in different, non-0 streams Code: cudaStream_t stream1, stream2; cudaStreamCreate(&stream1); cudaStreamCreate(&stream2); cudaMemcpyAsync( dst, src, size, dir, stream1 ); kernel<<<grid, block, 0, stream2>>>(…); potentially overlapped Call Sequencing for Optimal Overlap CUDA calls are dispatched to the hw in the sequence they were issued Fermi can concurrently execute: Up to 16 kernels Up to 2 memcopies, as long as they are in different directions (D2H and H2D) A call is dispatched if both are true: Resources are available Preceding calls in the same stream have completed Scheduling: Kernels are executed in the order in which they were issued Threadblocks for a given kernel are scheduled if all threadblocks for preceding kernels have been scheduled and there still are SM resources available Note that if a call blocks, it blocks all other calls of the same type behind it, even in other streams Type is one of { 
kernel, memcopy}.
Stream Examples (current HW): a timeline figure compares five issue orders (K1,M1,K2,M2; K1,K2,M1,M2; K1,M1,M2; K1,M2,M1; K1,M2,M2) and shows how much overlap each achieves. K: kernel, M: memcopy, integer: stream ID.
More on Dual Copy: Fermi is capable of duplex communication with the host, since the PCIe bus is duplex. The two memcopies must be in different streams and different directions. Not all current host systems can saturate duplex PCIe bandwidth; there are likely issues with IOH chips. If this is important to you, test your host system.
Duplex Copy: Experimental Results. (Two host-system diagrams, one with an X58 IOH and one with a D36 IOH, report measured duplex-copy rates of 10.8 GB/s and 7.5 GB/s versus 10.8 GB/s and 11 GB/s; links shown are QPI at 6.4 GT/s / 25.6 GB/s, 3x DDR3-1066 at 25.8 GB/s, and PCIe x16 at 16 GB/s.)
Unified Virtual Addressing: without UVA there are multiple memory spaces (system memory plus each GPU's memory, each with its own address range); with UVA, system memory and the GPU memories share a single address space, which is easier to program.
Summary. Kernel launch configuration: launch enough threads per SM to hide latency, and enough threadblocks to load the GPU. Global memory: maximize throughput (the GPU has lots of bandwidth, use it effectively), and use shared memory when applicable (over 1 TB/s of bandwidth). GPU-CPU interaction: minimize CPU/GPU idling and maximize PCIe throughput. Use analysis and profiling when optimizing ("Analysis-driven Optimization", the part of the tutorial that follows).
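One technique from the shared-memory slides above, padding a 32x32 tile to 32x33 to avoid 32-way bank conflicts on column accesses, is easy to show in code. The sketch below places it in a tile-transpose kernel, which is not from the slides but is the canonical situation where a warp reads a column of a shared tile; assume a dim3(32, 32) block and a grid covering the n x n matrix.

#define TILE 32

// The +1 column of padding shifts successive rows to different banks, so the
// column read in the second phase is conflict-free instead of 32-way serialized.
__global__ void transpose_tile(const float *in, float *out, int n) {
    __shared__ float tile[TILE][TILE + 1];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x < n && y < n)
        tile[threadIdx.y][threadIdx.x] = in[y * n + x];    // coalesced row load
    __syncthreads();

    int tx = blockIdx.y * TILE + threadIdx.x;              // transposed block origin
    int ty = blockIdx.x * TILE + threadIdx.y;
    if (tx < n && ty < n)
        out[ty * n + tx] = tile[threadIdx.x][threadIdx.y]; // column read from SMEM
}

Removing the + 1 reproduces the 32-way conflict described on the slides; adding it back restores full shared-memory throughput without changing the amount of data moved.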
Abstract
Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel Data Layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication, a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT), this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique's broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
Keywords: Data Layout Optimization, CUDA Performance Optimization, GPU Memory Optimization, Dynamic Programming, Matrix Multiplication, Memory Access Pattern Optimization in CUDA
1. Introduction
Graphics Processing Units (GPUs) have become ubiquitous accelerators in modern computing systems, offering tremendous parallel processing capabilities. Today, a single GPU provides thousands of compute cores capable of delivering teraflops of computational power, making GPU accelerator cards increasingly deployed in everything from mobile devices to cloud servers for a wide range of applications including artificial intelligence, scientific computing, and graphics [1] [2]. Despite these advancements, designing performant and efficient GPU code is fraught with programming complexities and architectural constraints, particularly in threading, memory access, and parallelism management. A critical challenge in leveraging GPU capabilities is managing data movement and organization on systems where there are processing-speed mismatches across components. For GPUs, which feature wide vector units and high memory bandwidth, disorganized or sparse data access patterns can incur high latency, leading to inefficiencies as memory controllers struggle to keep pace [3]. Optimizing data layout and access patterns is thus essential for feeding compute units efficiently and maximizing floating-point throughput. Another significant concern is Amdahl's law, which posits that the speedup potential from parallel hardware is limited by the serial portions of code; various overheads such as kernel launch latency, host-to-device transfers, and synchronization delays can therefore undermine the benefits of extensive parallelism. The efficient execution of matrix multiplication operations is a cornerstone in various computational domains, including deep learning, scientific simulations, and big data analytics. Optimizing these operations on GPUs can unlock significant performance gains, enabling faster training of neural networks, more accurate simulations, and accelerated data processing pipelines.
In this paper, we present a comprehensive approach aimed at addressing the aforementioned challenges, highlighting the efficacy of our proposed data layout optimization mechanism through the application of matrix multiplication. By focusing on the chained matrix multiplication (CMM) problem, recognized for its importance in various fields such as machine learning, physics simulations, and graphics, we develop and implement a dynamic programming algorithm optimized for CMM, utilizing CUDA code. The integration of a data layout transformation demonstrates a substantial improvement in memory locality and coalescing during parallel processing, underscoring the effectiveness of our approach [4] . Additionally, to further enhance performance, we explore additional parallelization techniques at the data level, including parallel diagonal computation and 2D thread block mapping, to maximize fine-grained concurrency. At the task level, we leverage streams and events to enable concurrent execution of multiple problem instances, optimizing the overlap between data transfers and computational processes, thereby further refining our optimization strategy. This paper’s principles and techniques serve as a case study for unlocking the full potential of modern parallel accelerators. As the adoption of heterogeneous and GPU-based high-performance computing continues to grow rapidly, the programming practices and optimization strategies we discuss will be essential for harnessing the benefits of this technology [5] . Our work addresses key optimization challenges around parallelism management, data organization, and orchestration strategy [6] , offering insights that can be applied to adapt and implement various complex workloads on massively parallel processors. 2. Related Work Prior work has developed optimizations for irregular data access [7] , data layout selection [8] , and communication reducing techniques when mapping algorithms onto GPU systems. While these existing approaches have demonstrated varying degrees of success, they often overlook the intricate interplay between memory access patterns, data layout, and the underlying GPU architecture. Additionally, many techniques are tailored to specific application domains or workloads, limiting their broader applicability. Our work aims to address these limitations by proposing a novel Data Layout strategy that restructures memory data arrangement to enhance locality and coalescing, thereby optimizing performance across a wide range of GPU-accelerated applications. Li et al. [9] proposed a simple yet effective data layout arbitration framework that automatically picks up the beneficial data layout for different DNNs under different pruning schemes. The proposed framework is built upon a formulated cache estimation model. Experimental results indicate that their approach is always able to select the most beneficial data layout and achieves the average training performance improvement with 14.3% and 3.1% compared to uniformly using two popular data layouts. Zhenkun et al. [10] proposed a system dubbed Distributed Sampling and Pipelining (DSP) for multi-GPU GNN training. DSP adopts a tailored data layout to utilize the fast NVLink connections among the GPUs, which stores the graph topology and popular node features in GPU memory. For efficient graph sampling with multiple GPUs, they introduced a collective sampling primitive (CSP), which pushes the sampling tasks to data to reduce communication. 
They also designed a producer-consumer-based pipeline, which allows tasks from different mini-batches to run concurrently to improve GPU utilization. They compare DSP with state-of-the-art GNN training frameworks, and the results show that DSP consistently outperforms the baselines under different datasets, GNN models, and GPU counts. The speedup of DSP can be up to 26x and is over 2x in most cases. Wan et al. [11] introduced two online data layout reorganization approaches for achieving good tradeoffs between read and write performance. They demonstrated the benefits of using the two approaches for the ECP particle-in-cell simulation WarpX, which serves as a motif for a large class of important Exascale applications. They showed that by understanding application I/O patterns and carefully designing data layouts they increased read performance by more than 80%. Stoltzfus and Emani [12] proposed a machine learning-based approach to build a classifier that determines the class of GPU memory that will minimize GPU kernel execution time. This approach uses a set of performance counters obtained from profiling runs, along with hardware features, to generate the trained model. They evaluated their approach on several generations of NVIDIA GPUs, including Kepler, Maxwell, Pascal, and Volta, on a set of benchmarks. Their results showed that the trained model achieves a prediction accuracy of over 90%. Zhong and He [13] introduce a new graph format with a data layout that supports coalesced access. Despite these promising results, existing optimization techniques for CUDA programs have inherent limitations. Memory bandwidth constraints, latency, and the non-uniform memory access (NUMA) architecture of GPUs may limit the applicability or performance benefits of these techniques in some scenarios. Furthermore, the dynamic nature of data access patterns in certain applications could reduce the effectiveness of static data layout optimizations.
3. Data Layout Technique
To optimize data access for parallel I/O, a data layout technique has been proposed and developed. This technique is successful in reducing the execution time of CUDA kernels and reducing their memory consumption. The type of data layout technique implemented in this article changes the arrangement of data in memory to improve memory access patterns and locality. The underlying principle of our Data Layout strategy is to restructure the memory layout of data structures, such as matrices, to align with the access patterns of the target algorithm. By storing elements that are accessed consecutively in contiguous memory locations, we can enhance spatial locality and leverage hardware caching mechanisms more effectively. This approach reduces cache misses and improves coalesced memory accesses, leading to more efficient utilization of the GPU's memory subsystem. Consider the case of matrix multiplication, a critical operation in various domains. Traditionally, matrices are stored in row-major or column-major order, which may not be optimal for certain access patterns. Our Data Layout technique explores alternative storage formats, such as diagonal-based or blocked layouts, to improve memory access locality for the specific algorithm being executed. Here, we apply this technique to a matrix of numbers. Before we apply the data layout to the matrix, it is important to understand the various data layout patterns.
In a matrix, there are different ways in which the data can be laid out in memory, including row-based and column-based data layouts:
3.1. Row-Based Storage
In row-based storage, the data for a single row of a matrix is stored consecutively in memory. This means that all the columns of a given row are stored together, which can make it efficient for operations that need to access an entire row of data at once.
3.2. Column-Based Storage
In column-based storage, each column of a matrix is stored consecutively in memory. This can be more efficient for operations that only need to access a subset of the columns in a matrix.
4. Case Study: Chained Matrix Multiplication Problem
We address the problem of chained matrix multiplication (CMM), a cornerstone in computing, with a dynamic programming algorithm. This algorithm optimizes the order of matrix multiplication operations, a task crucial for minimizing computational workloads. The goal is to develop an algorithm that determines the optimal order for multiplying n matrices, as the optimal order depends only on the matrix dimensions. Consider the multiplication of the following n matrices:
A1 × A2 × A3 × ⋯ × An
The number of multiplications required to multiply two matrices An×m × Bm×k is n × m × k. Matrix multiplication is associative, so the final product does not depend on the order in which we multiply; the number of elementary multiplications, however, does depend on that order. For example, for the four-matrix product A20×2 × B2×30 × C30×12 × D12×8 (n = 4), there are five different orders in which we can multiply the four matrices, each possibly resulting in a different number of elementary multiplications:
• A(B(CD)): 30 × 12 × 8 + 2 × 30 × 8 + 20 × 2 × 8 = 3680
• (AB)(CD): 20 × 2 × 30 + 30 × 12 × 8 + 20 × 30 × 8 = 8880
• A((BC)D): 2 × 30 × 12 + 2 × 12 × 8 + 20 × 2 × 8 = 1232
• ((AB)C)D: 20 × 2 × 30 + 20 × 30 × 12 + 20 × 12 × 8 = 10320
• (A(BC))D: 2 × 30 × 12 + 20 × 2 × 12 + 20 × 12 × 8 = 3120
The third order is the optimal one for multiplying the four matrices. In this problem, the goal is to develop an algorithm that determines the optimal order for multiplying n matrices. The optimal order depends only on the dimensions of the matrices; therefore, besides n, these dimensions are the only input to the algorithm.
5. Serial Algorithm by Dynamic Programming Method
In this section, the dynamic programming solution for the problem of chained multiplication of matrices is described. We first present the serial dynamic programming solution for the CMM problem [14], which avoids redundant calculations by breaking the problem down into subproblems and storing the results in a matrix M. The provided code is a serial implementation, without taking advantage of parallel processing.
Input: int n, int dim[0 ⋯ n]. Here, n is the number of matrices and dim contains the dimensions of the matrices. For instance, for A20×2 × B2×30 × C30×12 × D12×8, the inputs are n = 4 and dim = [20, 2, 30, 12, 8].
Output: int A[n + 1][n + 1], int M[n + 1][n + 1].
Here, M[i][j] is the minimum number of multiplications needed to compute the product of the ith through jth matrices. Also, if A[i][j] = k (with i ≤ k < j), then the optimal order for multiplying the ith through jth matrices is (Ai × ⋯ × Ak) × (Ak+1 × ⋯ × Aj), and the optimal number of multiplications for the whole chain is M[1][n].
Algorithm: Inside each pair of parentheses, the multiplications are performed according to the optimal order for the matrices inside those parentheses. Of these factorizations, the one that yields the minimum number of multiplications must be the optimal one. The number of multiplications for the kth factorization is the minimum number needed to obtain each factor plus the number needed to multiply the two factors. This means that:
M[1][n] = min over 1 ≤ k < n of ( M[1][k] + M[k+1][n] + dim[0] · dim[k] · dim[n] )
The intermediate values are calculated with the same recurrence applied to subchains:
M[i][j] = min over i ≤ k < j of ( M[i][k] + M[k+1][j] + dim[i−1] · dim[k] · dim[j] )
with M[i][i] = 0 for i = 1 ⋯ n. Calculations are performed diagonally, starting from the main diagonal, as shown in Figure 1.
Figure 1. The order of calculations in the algorithm.
5.1. Serial Algorithm by Dynamic Programming Method for CMM Problem
This is an implementation of the dynamic programming solution for the Chained Matrix Multiplication (CMM) problem [14]. The problem involves finding the most efficient way to multiply a sequence of matrices together. The dynamic programming approach avoids redundant calculations by breaking the problem down into subproblems and storing the results of those subproblems in the matrix M. The provided code is a serial implementation, meaning it doesn't take advantage of parallel processing. The CMM problem and its dynamic programming solution are commonly used in algorithmic optimization for matrix chain multiplication scenarios. The serial code for this algorithm is presented in Listing 1.
Listing 1. Serial version by dynamic programming method for the CMM problem.
5.2. Parallel Version in CUDA C++
Because dynamic programming algorithms store their intermediate results in arrays, they offer very good opportunities for parallelization. Each element of a diagonal of the matrix is calculated by one thread. In the first step, n threads set the values of the main diagonal to zero. At each subsequent step, one thread is retired, so that in the last round only one thread computes the value of M[1][n]. Because the values on each diagonal depend on the values of the previous diagonals, all threads must be synchronized at the end of each round. This is done with the __syncthreads() instruction. This instruction only synchronizes the threads within a block, so we are only able to use one block in the calculations. Global synchronization between all blocks of a kernel has not been implemented in the CUDA programming model, and no instructions have been published for it by NVIDIA. Therefore, the kernel configuration in this program is <<<1, n>>>. In the CUDA version, the matrix is converted to a one-dimensional array. The CUDA programming guide [15] recommends using a one-dimensional array instead of a matrix in the kernel, so M[i][j] in the matrix is mapped to M[(n + 1) × i + j] in the one-dimensional array. The kernel launch in this program is: CMM_CUDA_kernel<<<1, n>>>(dev_dim, dev_m, dev_result, n). dev_dim, which is passed as an argument to the kernel, is a one-dimensional array of the matrix dimensions.
dev_m is also the matrix M that is used for calculations and has the same function as the serial version. Before kernel launching, the data must be transferred from the main memory of the CPU to the global memory of the GPU. After the execution of the kernel, the results should be transferred in the reverse direction. 6. Tuning of Algorithm by Data Layout Technique In the serial algorithm, the computation in the matrix M is done diagonally. However, in the C++ language, matrices are stored as rows in memory. Therefore, thread accesses to memory are not consecutive, reducing locality and increasing cache misses and uncoalesced accesses to global memory, which decreases performance. To address this issue, we apply the Data Layout technique by storing the elements of each diagonal together in memory, as described in Section 3. This change requires modifications to the indices accessed in the algorithm. By restructuring the data layout, we increase locality and improve the probability of cache hits and coalesced accesses, leading to significant performance improvements. The proposed Data Layout strategy is based on the principle of organizing data in a way that aligns with the access patterns of the program. By storing related data elements consecutively in memory, we can increase the likelihood of cache hits and coalesced memory accesses, reducing memory bandwidth bottlenecks and improving overall throughput. The use of this technique requires changes in the codes and the indices accessed in the algorithm must be changed. We explain this technique with an example. For instance, consider the data of a matrix as shown in Figure 2. Figure 2. Initial Matrix data. We only describe an upper triangular matrix because this type of matrix is also used in the chained multiplication problem. Figure 3 illustrates the typical way to store this matrix in the memory. Figure 3. Data layout in memory. Figure 4 exhibits how our proposed data layout technique that we used in this problem stores the data of the above matrix in the memory. Figure 4. Proposed data layout technique in memory. Therefore, Locality increases strongly and increases the possibility of cache hit and coalesced accesses. We applied this technique to the algorithm of chained multiplication of matrices and obtained promising results which are reported in the following section. 7. Additional Parallelization Techniques 7.1. Data Level Parallelism To further exploit parallelism in the chain matrix multiplication algorithm, we apply techniques to partition the data at a finer granularity. Parallelizing Diagonal Computations. Our existing implementation maps one GPU thread to compute each element along the diagonal. We modify this to use multiple threads to compute each element by decomposing the computation in dimensions as shown in Listing 2. Listing 2. Parallelizing diagonal computation. We use a (16 × 16) thread block, enabling (256) threads to cooperate in computing each matrix element. This adds finer-grained parallelism within each diagonal. On our GPU with (128) CUDA cores per SM, this enables each SM to process 2 diagonal elements in parallel. The (16 × 16) block also improves memory access patterns. Compared to the one thread per element approach, this parallel diagonal computation reduces kernel time by 41% 2D Thread Block Decomposition. We also decompose the total computation into 2D thread blocks, assigning each diagonal across multiple blocks as specified in Listing 3. Listing 3. Block decomposition. 
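Listings 2 and 3 themselves are not reproduced in this excerpt. As a rough illustration of how the pieces fit together, the following sketch (a hypothetical reconstruction, not the authors' code) combines the diagonal-major layout of Section 6 with one kernel launch per diagonal so that several blocks can work on the same diagonal; the names diagIndex and cmm_diagonal_kernel are invented, and 1D blocks are used for brevity where the paper uses 16 x 16 blocks.

#include <cuda_runtime.h>
#include <cfloat>

// Index of element (i, j), 1-based, on diagonal d = j - i, when the diagonals
// of the upper-triangular table are stored back to back: diagonal 0 first
// (n entries), then diagonal 1 (n - 1 entries), and so on.
__host__ __device__ inline int diagIndex(int i, int j, int n) {
    int d = j - i;
    return d * n - d * (d - 1) / 2 + (i - 1);
}

// One thread computes one M[i][j] on diagonal d; neighbouring threads touch
// neighbouring addresses because the diagonal is contiguous in memory.
__global__ void cmm_diagonal_kernel(const int *dim, float *M, int n, int d) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;   // 0 .. n - d - 1
    if (t >= n - d) return;
    int i = t + 1, j = i + d;
    float best = FLT_MAX;
    for (int k = i; k < j; ++k) {
        float cost = M[diagIndex(i, k, n)] + M[diagIndex(k + 1, j, n)]
                   + (float)dim[i - 1] * dim[k] * dim[j];
        best = fminf(best, cost);
    }
    M[diagIndex(i, j, n)] = best;
}

// Host side: one launch per diagonal provides the inter-diagonal
// synchronization that __syncthreads() cannot give across blocks.
void cmm_gpu(const int *d_dim, float *d_M, int n) {
    // diagonal 0 (M[i][i] = 0) is assumed to have been cudaMemset to zero
    for (int d = 1; d < n; ++d) {
        int elems = n - d;
        int block = 256;
        int grid  = (elems + block - 1) / block;
        cmm_diagonal_kernel<<<grid, block>>>(d_dim, d_M, n, d);
    }
}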
This spreads the work of a diagonal over more GPU cores for greater parallelism. This allows more SMs and CUDA cores to operate on a diagonal in parallel. With N/16 blocks, more SMs participate, and overall parallelism improves. Using 2D thread blocks gives a 23% kernel speedup over the parallel diagonal method alone. 7.2. Task Level Parallelism To overlap computation and transfers between the CPU and GPU, we leverage streams, events, and concurrency [16] as illustrated in Listing 4. Listing 4. Task level parallelism. We also parallelize across multiple independent problem instances by allocating separate streams and CUDA contexts for each instance. This enables entirely concurrent execution. The streams and asynchronous calls prevent these operations from blocking each other. This improves GPU utilization and end-to-end runtime. With 2 streams per instance, we get up to 4× speedup with 4 problems run in parallel. 8. Evaluation We conducted a series of experiments to evaluate the performance of our proposed Data Layout strategy compared to existing optimization techniques, focusing on the chained matrix multiplication (CMM) problem. The experiments were performed on a system equipped with an NVIDIA Tesla V100 GPU, utilizing the CUDA programming model. We implemented our optimization technique and compared it against conventional memory layouts such as row-major and column-major order, as well as other state-of-the-art optimization strategies proposed in prior literature. We leverage cuda Event tool for profiling the execution time of programs, which provides very good accuracy compared to the clock() function. To use this tool for recording the execution time, we use the solution shown in Listing 5. Listing 5. Recording execution time. Table 1 presents the execution times for the serial CPU dynamic programming algorithm, the baseline CUDA GPU parallel implementation, and the CUDA version optimized with the Data Layout technique, across different numbers of input matrices (n = 1016 to 1024). Note that it is not possible to run the program for n > 1024, as the maximum number of threads per block is 1024. All times are measured in milliseconds. As shown in Figure 5, the CUDA implementation demonstrates more than 2× speedup compared to the serial algorithm across all matrix sizes. For n = 1024 matrices, the serial algorithm takes 1287 ms, while the CUDA implementation requires only 542 ms, achieving a 2.4× runtime improvement by harnessing the parallel processing power of the GPU. Table 1. Execution time comparison. Figure 6 provides deeper insight into the performance trends for smaller input sizes in the matrix chain multiplication problem. We plot execution times for a range of n = 10 to 400 matrices to examine why the serial CPU implementation outperforms the CUDA GPU code at very small n. The breakeven point where CUDA becomes faster occurs around 218 matrices. Below this threshold, the parallel CUDA overheads of copying memory between host and device as well as launching computational kernels overwhelm the relatively minor parallelism benefits for small inputs. Figure 5. Execution time comparison between serial and CUDA version. However, beyond n = 218 matrices, the runtime of the serial algorithm grows super linearly due to its algorithmic complexity of O(n3). In contrast, CUDA runtime grows roughly linearly thanks to exploiting parallel hardware. Ultimately, this allows the CUDA performance curve to cross below the serial line. 
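The execution times and launch overheads quoted here were collected with the cudaEvent mechanism mentioned above (Listing 5, not reproduced in this excerpt); the standard idiom looks roughly like the following, where my_kernel and its launch configuration are placeholders for whatever is being measured.

#include <cuda_runtime.h>

__global__ void my_kernel() { /* placeholder for the kernel being timed */ }

float time_kernel_ms(dim3 grid, dim3 block) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    my_kernel<<<grid, block>>>();
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);              // wait until the kernel has finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed GPU time in milliseconds
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

Because the events are recorded on the GPU timeline, this measures device execution time and, as the text notes, is considerably more precise than wrapping the launch in clock() on the host.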
Profiling shows kernel launch overheads are relatively fixed at around 0.4 ms, while serial algorithm runtime scales worse than linearly. This highlights why CUDA provides increasing returns as the problem size grows—parallel hardware continues delivering a fixed amount of extra throughput, surpassing serial execution. Figure 6. Execution time comparison between serial and CUDA version. Figure 7 compares the execution time between our baseline naive CUDA implementation and the version optimized with the data layout transformation technique. The optimized CUDA code accesses matrix data with significantly improved locality and coalescing, providing up to a 2× faster runtime, with an average speedup exceeding 1.8×. Performance gains are consistent across all input sizes, demonstrating the effectiveness of our Data Layout technique in accelerating memory access patterns. Compared to previous optimization approaches that relied on compiler auto-vectorization or manual code transformations, our Data Layout strategy offers a more systematic and architecture-aware solution. By explicitly restructuring the memory layout, we can ensure optimal data access patterns tailored to the GPU’s memory hierarchy, leading to superior performance gains. This runtime comparison shows the clear performance benefits of optimizing memory access patterns on the GPU using our data layout transformation. Rather than relying on the default row-major matrix storage, we rearrange elements to store diagonals consecutively in memory. This matches the access pattern of our dynamic programming algorithm, which iterates diagonally through the matrix. Laying out data to improve locality directly accelerates these memory reads and writes. We see execution time reduced by over 50%, with the optimized CUDA implementation running over 1.8× faster across all input sizes. At 1024 matrices, runtime drops from 542ms in the baseline CUDA code down to just 272 ms with data layout improvements. By enhancing memory coalescing and exploiting caching, the GPU no longer wastes cycles waiting on scattered uncoalesced memory accesses. This demonstrates data layout changes can unlock substantial performance gains by alleviating bottlenecks related to noncontiguous data access. The optimization builds on earlier CUDA speedups for a combined 4× total improvement over the original serial algorithm. Figure 7. Comparison of CUDA version and Optimized CUDA version. Figure 8 depicts the speedup attained by our optimized CUDA GPU algorithm with the data layout strategy, relative to the performance of the baseline naive CUDA implementation. We observe an average 1.9× speedup, reaching as high as 2.04× for some matrix input sizes. This demonstrates the effectiveness of improved memory locality in unlocking performance, cutting execution time by up to 50%. Figure 8. Comparison of CUDA version and Optimized CUDA version. 9. Discussion and Future Work The remarkable performance improvements achieved by our Data Layout strategy highlight its potential for unlocking the true computational power of GPUs across a wide range of applications. While our case study focused on chained matrix multiplication, the underlying principles of our approach are applicable to various algorithms and data structures that exhibit non-contiguous or irregular memory access patterns. Despite the promising results, certain inherent limitations of our Data Layout technique warrant consideration. 
Issues such as memory bandwidth constraints, latency, and the non-uniform memory access (NUMA) architecture of GPUs may limit the applicability or performance benefits in some scenarios. Furthermore, the dynamic nature of data access patterns in certain applications could reduce the effectiveness of static data layout optimizations. Looking ahead, our future work will focus on exploring the scalability and effectiveness of our Data Layout strategy in large-scale GPU clusters and cloud environments. By conducting extensive experiments across distributed systems, we aim to provide deeper insights into the potential challenges and opportunities of our approach in these advanced computing paradigms. Additionally, we plan to investigate the integration of our technique with other optimization strategies, such as dynamic data layout transformations and adaptive memory management, to further enhance performance and mitigate the limitations mentioned above. 10. Conclusions In this study, we presented a novel Data Layout strategy for optimizing CUDA programs, particularly focusing on dynamic programming algorithms for chained matrix multiplication. By restructuring memory data arrangement to improve locality and coalescing, our approach achieved significant performance enhancements, reducing execution time by up to 50% and memory consumption compared to the baseline implementation. The effectiveness of our technique underscores the importance of architecture-aware optimizations in unlocking the full potential of GPU-accelerated applications. As the adoption of heterogeneous and GPU-based computing continues to grow rapidly across various domains, the principles and strategies discussed in this work will be instrumental in harnessing the benefits of these powerful parallel architectures. While our findings are promising, we acknowledge the limitations and challenges associated with our approach and emphasize the need for further research to extend its applicability and address potential scalability concerns. By continuing to explore innovative optimization techniques and leveraging the synergies between hardware and software, we can pave the way for more efficient and high-performance GPU computing solutions. It’s important to acknowledge that the scope of our experiments was limited by time constraints, restricting our ability to conduct a more extensive investigation. Future studies will aim to address this limitation by allocating more time for rigorous experimentation and analysis. Acknowledgements This work is supported by NSF award #2348330. Conflicts of Interest The authors declare no conflicts of interest regarding the publication of this paper. References [1] Harris, M. (2007) Optimizing Parallel Reduction in CUDA. Nvidia Developer Technology, 2, 70. [2] Hong, S. and Hyesoon, K. (2010) An Integrated GPU Power and Performance Model. ACM SIGARCH Computer Architecture News, 38, 280-289. https://doi.org/10.1145/1816038.1815998 [3] Volkov, V. and Demmel, J.W. (2008) Benchmarking GPUs to Tune Dense Linear Algebra. Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, Austin, 15-21 November 2008. https://doi.org/10.1109/SC.2008.5214359 [4] Wu, Y.N., Tsai, P.-A., Muralidharan, S., Parashar, A., Sze, V. and Emer, J. (2023) HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity. Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, Association for Computing Machinery, New York, NY, USA, 1106-1120. 
https://doi.org/10.1145/3613424.3623786
[5] Liu, G., et al. (2018) A Scalable Parallel Method for Large-Scale Matrix Computations. The Journal of Supercomputing, 74, 6641-6656.
[6] Peled, L., Mannor, S., Weiser, U. and Etsion, Y. (2015) Semantic Locality and Context-Based Prefetching Using Reinforcement Learning. ACM SIGARCH Computer Architecture News, 43, 285-297. https://doi.org/10.1145/2872887.2749473
[7] Aldinucci, M., Drocco, M., Mastrostefano, F. and Vanneschi, M. (2018) Hardware-Conscious Autonomic Management of Distributed Workflows. International Conference on Algorithms and Architectures for Parallel Processing, Springer, Cham, 27-31 August 2018, 343-359.
[8] Ballard, G., Zheng, G., Demmel, J. and Yelick, K. (2017) An Efficient and Generic Event-Based Profiling Framework for GPU Architectures. IEEE Transactions on Parallel and Distributed Systems, 29, 169-182.
[9] Li, B.Y., et al. (2022) Optimizing Data Layout for Training Deep Neural Networks. Companion Proceedings of the Web Conference 2022, New York, April 2022, 548-554. https://doi.org/10.1145/3487553.3524856
[10] Cai, Z., Hu, L., Shi, B., Chen, Y., Hu, C. and Tang, J. (2023) DSP: Efficient GNN Training with Multiple GPUs. Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming, Montreal, February 2023, 392-404. https://doi.org/10.1145/3572848.3577528
[11] Wan, L.P., et al. (2022) Improving I/O Performance for Exascale Applications through Online Data Layout Reorganization. IEEE Transactions on Parallel and Distributed Systems, 33, 878-890. https://doi.org/10.1109/TPDS.2021.3100784
[12] Stoltzfus, L., et al. (2018) Data Placement Optimization in GPU Memory Hierarchy Using Predictive Modeling. Proceedings of the Workshop on Memory Centric High Performance Computing, Dallas, November 2018, 45-49. https://doi.org/10.1145/3286475.3286482
[13] Zhong, J.L. and He, B.S. (2014) Medusa: Simplified Graph Processing on GPUs. IEEE Transactions on Parallel and Distributed Systems, 25, 1543-1552. https://doi.org/10.1109/TPDS.2013.111
[14] Neapolitan, R. and Naimipour, K. (2008) Foundations of Algorithms Using C Pseudocode. 3rd Edition, Jones and Bartlett Publishers, Inc., Sudbury, USA.
[15] NVIDIA (2023) CUDA C Programming Guide (Version 12.2). https://docs.nvidia.com/cuda/archive/12.2.0/pdf/CUDA_C_Best_Practices_Guide.pdf
[16] Segura, A., Arnau, J.-M. and González, A. (2019) SCU: A GPU Stream Compaction Unit for Graph Processing. Proceedings of the 46th International Symposium on Computer Architecture, Phoenix, Arizona, 22-26 June 2019, 424-435. https://doi.org/10.1145/3307650.3322254
GPU MODE Lecture 6: Optimizing Optimizers in PyTorch
Christian Mills, September 2, 2024
Introduction
Optimization Analogy: Towing Cars
Optimizer Basics and Optimization Levels
parameter = parameter - learning_rate * gradient
Implementation: Processes parameters one by one in a for loop.
Simplified Example:
for param in params:
    # Retrieve necessary data for the current parameter
    # Perform operations (add, multiply, lerp, etc.)
    # Update the parameter
Visualization: Each parameter update is a sequence of operations (gray circles), represented as a column. M operations per parameter, N parameters total, resulting in M x N operations.
Implementation: PyTorch's current default; operates on entire parameter lists at once using vectorized operations.
Simplified Example:
# Add a constant to all parameters in the list
# Multiply all parameters by a constant
# ... other operations
Visualization: Each operation (blue circles) is performed on all parameters simultaneously. M operations total.
Multi-Tensor Apply: The Powerhouse
Standard Add (Simplified): __device__ void add_kernel(float* self, float* other, float* res, float alpha=1);
ForEach Add (Challenge): How would you design a CUDA kernel signature to handle tensor lists?
Attempt 1: Passing Standard Vector (Failed)
Attempt 2: Passing Pointers to Pointers (Failed)
Attempt 3: Passing by Chonky Boy (Partially Successful)
Idea: Pass tensor data pointers by value using a struct.
Implementation:
Outcome: Works initially, but encounters issues with the kernel argument space limit.
Kernel Argument Space Limit: The kernel argument space has a maximum size of 4 kilobytes.
Problem: If the struct containing tensor pointers exceeds 4 kilobytes, only a portion of the struct gets passed to the kernel, leading to illegal memory access when accessing pointers beyond the limit.
Repro Example:
params = [torch.rand(2,3, device="cuda") for _ in range(N)]
torch._foreach_norm(params, ord=1)
torch.cuda.synchronize()
Observation: Illegal memory access occurs when the number of tensors exceeds 423.
Conclusion: The struct approach works as long as the number of tensor pointers does not exceed the 4 kilobyte limit.
Solution 1: Batching (Current Implementation)
Solution 2: Revisiting Pointers to Pointers with Memcpy
Solution 3: Unified Memory (Future Exploration)
Fused Optimizers and Multi-Tensor Apply
Torch Compile and the Future of Fused Optimizers
Torch Compile (Inductor): PyTorch's compiler that excels at vertical fusion of operations.
Potential: Automate the vertical fusion of optimizer operations, eliminating the need for handwritten CUDA kernels.
Benefits:
Current Status:
Example:
optimizer = torch.optim.AdamW(model.parameters())
@torch.compile(fullgraph=False)
def compiled_step():
    optimizer.step()
# ... training loop
compiled_step()
Limitations:
Future Directions:
Q&A Session
Obtaining Triton Kernel Code
Visualizing Kernel Graphs
Compile Time Dependency on Number of Tensors
Caching Compiled Results
Memory Management with Structs Containing Pointers
Memcpy Size Limit
Memcpy Direction Argument
Device-to-Device Copy
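To make the "pass a struct by value" idea above concrete, here is a small illustrative sketch. It is not the PyTorch multi_tensor_apply implementation; the struct, the batch size, and the scaling kernel are invented, and the only point is that the struct travels through the kernel-argument space, which is capped at 4 KB, so the number of tensors per launch has to be batched accordingly.

#include <cuda_runtime.h>

constexpr int kMaxTensors = 110;   // keeps sizeof(TensorPtrs) comfortably under 4 KB

struct TensorPtrs {                // passed by value through kernel arguments
    float *data[kMaxTensors];
    int    numel[kMaxTensors];
    int    count;
};

__global__ void foreach_scale(TensorPtrs ptrs, float alpha) {
    int tensor = blockIdx.y;                          // one row of blocks per tensor
    if (tensor >= ptrs.count) return;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < ptrs.numel[tensor])
        ptrs.data[tensor][i] *= alpha;                // same op applied to every tensor
}

// launch sketch: foreach_scale<<<dim3(blocks_per_tensor, ptrs.count), 256>>>(ptrs, 0.5f);

If the tensor list is longer than the struct can hold, the host slices it into batches and launches once per batch, which mirrors the batching solution described in the notes above.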
CUDA Convolution - GPGPU Programming - Dec, 2008
Sangyoon Lee (sjames @ evl.uic.edu)
Electronic Visualization Laboratory, University of Illinois at Chicago
* This project is a part of the CS525 GPU Programming class instructed by Andy Johnson.
1. Concept and Brief
The given final exam is to explore CUDA optimization with the convolution filter application from NVIDIA's CUDA 2.0 SDK. There are three types of convolution filter in the SDK. I mainly used the convolutionTexture and convolutionSeparable applications.
- Dataset (Images)
The images used in the final are provided by Andy (see the class website). I used 1k-by-1k, 2k-by-2k and 4k-by-4k images for performance testing. For some reason, the 8k-by-8k image does not work well on my system.
- Development platform
Mac OSX 10.5, MacBook Pro 2.5GHz, GeForce 8600M GT 512MB, NVIDIA CUDA SDK 2.0. Intermediate and final versions of the application are available to download and test. See more details in section 7 below.
2. Starting
Since a well-optimized application is already available in the SDK, I started by looking at its code. First of all, the original code uses random data instead of real images. Each data pixel of the image is represented as a single float in these applications. To make the application work with real images, I implemented an image loader and writer for the RAW format. This code is a slightly modified version of the module we used in project 2. Each pixel is composed of three color components, RGB. There are two ways to apply a convolution filter to an image. The first is simply to map each component to a single float and run the convolution filter three times, once per channel. The second approach is to modify the original code to use uchar4 or int as the data type so that we can compute the separate channel values within the CUDA kernel. I implemented both ways in convolutionTexture and convolutionSeparable, but later on I only used the first method since it makes the kernel code much simpler.
The first thing I tried was a top-down approach: I took the NVIDIA application and started to remove some of its optimization techniques. I then realized that it is not easy to strip it all the way down to the naive approach. So I restarted the implementation from the most naive version and optimized it until it came close to NVIDIA's application. The later sections explain these steps. If you are not familiar with convolution filters, please take a look at the Wikipedia entries for Gaussian blur or Convolution. The class lecture note (week 4, convolution) is also useful.
3. Step 0: the most naive approach
From the idea of the convolution filter itself, the most naive approach is to use global memory to send data to the device and have each thread access it to compute the convolution kernel. Our convolution kernel has radius 8 (a total of 17x17 multiplications for a single pixel value). In the image border area, the referenced value is set to 0 during computation. This naive approach includes many conditional statements, and this causes very slow execution. There are no idle threads since the total number of threads invoked is the same as the total number of pixels. The CUDA kernel block size is 16x16. The execution time below is a mean value over 10 executions. As you can see, it is extremely slow here.
Figure 1. Memory access pattern in the naive approach: each thread in a block accesses global memory 17x17 times.
Below is the CUDA kernel code.
__global__ void convolutionGPU( ...............................float *d_Result, ...............................float *d_Data, ...............................int dataW, ...............................int dataH ) { ....////////////////////////////////////////////////////////////////////// ....// most slowest way to compute convolution ....////////////////////////////////////////////////////////////////////// ....// global mem address for this thread ....const int gLoc = threadIdx.x + .....................blockIdx.x * blockDim.x + .....................threadIdx.y * dataW + .....................blockIdx.y * blockDim.y * dataW; ....float sum = 0; ....float value = 0; ....for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++) // row wise ........for (int j = -KERNEL_RADIUS; j <= KERNEL_RADIUS; j++) // col wise ........{ ............// check row first ............if (blockIdx.x == 0 && (threadIdx.x + i) < 0) // left apron ................value = 0; ............else if ( blockIdx.x == (gridDim.x - 1) && ........................(threadIdx.x + i) > blockDim.x-1 ) // right apron ................value = 0; ............else ............{ ................// check col next ................if (blockIdx.y == 0 && (threadIdx.y + j) < 0) // top apron ....................value = 0; ................else if ( blockIdx.y == (gridDim.y - 1) && ............................(threadIdx.y + j) > blockDim.y-1 ) // bottom apron ....................value = 0; ................else // safe case ....................value = d_Data[gLoc + i + j * dataW]; ............} ............sum += value * d_Kernel[KERNEL_RADIUS + i] * d_Kernel[KERNEL_RADIUS + j]; ........} ........d_Result[gLoc] = sum; } 4. Step 1: Shared Memory We all experienced the importance of shared memory throughout project 2. Now, it is time to incorporate with this feature from naive code. When I read nvidia convolution document, I thought that it is OK to invoke many of threads and each thread load data from global mem to shared mem. Then, let some of threads idle. Those are thread loaded apron pixels and do not compute convolution. The first attempt was to keep active thread size as same as previous and increase block size for apron pixels. This did not work since convolution kernel radius is 8 and it make block size to 32 x 32 (1024). This is bigger than G80 hardware limit (512 threads max per block). Therefore, I changes scheme as all threads are active and each thread loads four pixels and keep the block size 16x16. Shared Memory size used is 32x32 (this includes all necessary apron pixel values for 16x16 active pixels). Below shows quite a bit of performance improve. This is almost x2.8 speed up over naive approach (in 2048 resolution). Figure 2. Shared Memory Model for naive approach: each threads in block load 4 values from global memory. Threfore, total shared memory size is 4 times bigger than active convolution pixels to include apron area (kernel radius 8 and block size 16x16. active pixels 256 float vs. shared memory size is 1024 float). Below codes illustrate the convolution kernel. __global__ void convolutionGPU( ................................float *d_Result, ................................float *d_Data, ................................int dataW, ................................int dataH ................................) 
{ ....// Data cache: threadIdx.x , threadIdx.y ....__shared__ float data[TILE_W + KERNEL_RADIUS * 2][TILE_W + KERNEL_RADIUS * 2]; ....// global mem address of this thread ....const int gLoc = threadIdx.x + ........................IMUL(blockIdx.x, blockDim.x) + ........................IMUL(threadIdx.y, dataW) + ........................IMUL(blockIdx.y, blockDim.y) * dataW; ....// load cache (32x32 shared memory, 16x16 threads blocks) ....// each threads loads four values from global memory into shared mem ....// if in image area, get value in global mem, else 0 ....int x, y; // image based coordinate ....// original image based coordinate ....const int x0 = threadIdx.x + IMUL(blockIdx.x, blockDim.x); ....const int y0 = threadIdx.y + IMUL(blockIdx.y, blockDim.y); ....// case1: upper left ....x = x0 - KERNEL_RADIUS; ....y = y0 - KERNEL_RADIUS; ....if ( x < 0 || y < 0 ) ........data[threadIdx.x][threadIdx.y] = 0; ....else ........data[threadIdx.x][threadIdx.y] = d_Data[ gLoc - KERNEL_RADIUS - IMUL(dataW, KERNEL_RADIUS)]; ....// case2: upper right ....x = x0 + KERNEL_RADIUS; ....y = y0 - KERNEL_RADIUS; ....if ( x > dataW-1 || y < 0 ) ........data[threadIdx.x + blockDim.x][threadIdx.y] = 0; ....else ........data[threadIdx.x + blockDim.x][threadIdx.y] = d_Data[gLoc + KERNEL_RADIUS - IMUL(dataW, KERNEL_RADIUS)]; ....// case3: lower left ....x = x0 - KERNEL_RADIUS; ....y = y0 + KERNEL_RADIUS; ....if (x < 0 || y > dataH-1) ........data[threadIdx.x][threadIdx.y + blockDim.y] = 0; ....else ........data[threadIdx.x][threadIdx.y + blockDim.y] = d_Data[gLoc - KERNEL_RADIUS + IMUL(dataW, KERNEL_RADIUS)]; ....// case4: lower right ....x = x0 + KERNEL_RADIUS; ....y = y0 + KERNEL_RADIUS; ....if ( x > dataW-1 || y > dataH-1) ........data[threadIdx.x + blockDim.x][threadIdx.y + blockDim.y] = 0; ....else ........data[threadIdx.x + blockDim.x][threadIdx.y + blockDim.y] = d_Data[gLoc + KERNEL_RADIUS + IMUL(dataW, KERNEL_RADIUS)]; ....__syncthreads(); ....// convolution ....float sum = 0; ....x = KERNEL_RADIUS + threadIdx.x; ....y = KERNEL_RADIUS + threadIdx.y; ....for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++) ........for (int j = -KERNEL_RADIUS; j <= KERNEL_RADIUS; j++) ............sum += data[x + i][y + j] * d_Kernel[KERNEL_RADIUS + j] * d_Kernel[KERNEL_RADIUS + i]; ....d_Result[gLoc] = sum; } One more optimization tested here. The use of faster integer multiplication instruction (above code already has this change), __mul24. After replacing all integer multiplication with __mul24, I got slight better performance. 5. Step 2: Filter Separation Here very important aspect of convolution filter is that it can be separated by row and column. This will reduce computation complexity from m*m to m+m. Basically we apply two separate convolution. The first one is row-wise and the second one is column-wise from the first result data (apply column convolution over row-wise filtered data). This also reduces some of conditional statement and total number of apron pixel data in each path since we do not need to consider vertical apron in row-convolution kernel and horizontal apron in column-convolution kernel. This gives me a great improvement over the last shared memory optimized version. This is almost x6.2 speed-up from the last version (in resolution 2048). Code already includes __mul24 instruction. Figure 3. Shared Memory for separate filter: this time only twice bigger memory is necessary for each filter Unfortunately compiler directive for loopunroll (#pragma unroll 17) does not give any significat improvement. 
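Before the kernels themselves, the separability claim (17x17 multiplications per pixel collapsing to 17+17) can be sanity-checked with a few lines of host code. This little program is not part of the original write-up; it just verifies, with zero-padded borders, that a row pass followed by a column pass matches the full 2D kernel K[i][j] = k[i]*k[j].

#include <cstdio>

const int R = 2, W = 8, H = 8;            // small radius and image for the check
float k1d[2 * R + 1] = {1, 4, 6, 4, 1};   // a 1D binomial kernel

float at(const float *img, int x, int y) {            // zero padding at the borders
    return (x < 0 || y < 0 || x >= W || y >= H) ? 0.f : img[y * W + x];
}

int main() {
    float img[W * H], full[W * H], rows[W * H], sep[W * H];
    for (int i = 0; i < W * H; ++i) img[i] = (float)(i % 7);

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float s2d = 0, srow = 0;
            for (int j = -R; j <= R; ++j)              // full 2D pass: (2R+1)^2 terms
                for (int i = -R; i <= R; ++i)
                    s2d += at(img, x + i, y + j) * k1d[R + i] * k1d[R + j];
            for (int i = -R; i <= R; ++i)              // row pass: 2R+1 terms
                srow += at(img, x + i, y) * k1d[R + i];
            full[y * W + x] = s2d;
            rows[y * W + x] = srow;
        }
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float s = 0;
            for (int j = -R; j <= R; ++j)              // column pass over the row result
                s += at(rows, x, y + j) * k1d[R + j];
            sep[y * W + x] = s;
        }

    float maxdiff = 0;
    for (int i = 0; i < W * H; ++i) {
        float d = full[i] - sep[i];
        if (d < 0) d = -d;
        if (d > maxdiff) maxdiff = d;
    }
    printf("max difference: %g\n", maxdiff);           // expect 0 up to float rounding
    return 0;
}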
Below is the kernel code for the separable convolution filter. The application is also modified to run the kernel twice, once for the row convolution and once for the column convolution (a host-side sketch of the two passes follows after the kernels).

////////////////////////////////////////////////////////////////////////////////
// Row convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionRowGPU(float *d_Result, float *d_Data,
                                  int dataW, int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[TILE_W + KERNEL_RADIUS * 2][TILE_H];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int x;   // image based coordinate

    // original image based coordinate
    const int x0 = threadIdx.x + IMUL(blockIdx.x, blockDim.x);

    // case1: left
    x = x0 - KERNEL_RADIUS;
    if (x < 0)
        data[threadIdx.x][threadIdx.y] = 0;
    else
        data[threadIdx.x][threadIdx.y] = d_Data[gLoc - KERNEL_RADIUS];

    // case2: right
    x = x0 + KERNEL_RADIUS;
    if (x > dataW - 1)
        data[threadIdx.x + blockDim.x][threadIdx.y] = 0;
    else
        data[threadIdx.x + blockDim.x][threadIdx.y] = d_Data[gLoc + KERNEL_RADIUS];

    __syncthreads();

    // convolution
    float sum = 0;
    x = KERNEL_RADIUS + threadIdx.x;
    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)
        sum += data[x + i][threadIdx.y] * d_Kernel[KERNEL_RADIUS + i];

    d_Result[gLoc] = sum;
}

////////////////////////////////////////////////////////////////////////////////
// Column convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionColGPU(float *d_Result, float *d_Data,
                                  int dataW, int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[TILE_W][TILE_H + KERNEL_RADIUS * 2];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int y;   // image based coordinate

    // original image based coordinate
    const int y0 = threadIdx.y + IMUL(blockIdx.y, blockDim.y);

    // case1: upper
    y = y0 - KERNEL_RADIUS;
    if (y < 0)
        data[threadIdx.x][threadIdx.y] = 0;
    else
        data[threadIdx.x][threadIdx.y] = d_Data[gLoc - IMUL(dataW, KERNEL_RADIUS)];

    // case2: lower
    y = y0 + KERNEL_RADIUS;
    if (y > dataH - 1)
        data[threadIdx.x][threadIdx.y + blockDim.y] = 0;
    else
        data[threadIdx.x][threadIdx.y + blockDim.y] = d_Data[gLoc + IMUL(dataW, KERNEL_RADIUS)];

    __syncthreads();

    // convolution
    float sum = 0;
    y = KERNEL_RADIUS + threadIdx.y;
    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)
        sum += data[threadIdx.x][y + i] * d_Kernel[KERNEL_RADIUS + i];

    d_Result[gLoc] = sum;
}
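On the host side the two passes simply run back to back, with an intermediate buffer holding the row-filtered image. The sketch below is mine, not from the original application: the buffer name d_Temp is hypothetical, and it assumes dataW and dataH are multiples of the tile size.

    // Two-pass separable convolution: row filter into a temporary image, then
    // column filter from that image into the final result.
    dim3 block(TILE_W, TILE_H);
    dim3 grid(dataW / TILE_W, dataH / TILE_H);   // assumes tile-aligned image sizes

    convolutionRowGPU<<<grid, block>>>(d_Temp,   d_Data, dataW, dataH);
    convolutionColGPU<<<grid, block>>>(d_Result, d_Temp, dataW, dataH);
    cudaThreadSynchronize();   // CUDA 2.x-era call; cudaDeviceSynchronize() on modern CUDA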
6. Step 3: Reorganize Shared Memory

Until step 2, I used a 2D array of shared memory to keep the indexing a bit simpler. Inside the computation loop there is a possibility of shared-memory bank conflicts within a warp, because the threads access the array along its first (column-major) dimension at the same time. So let's rearrange this shared memory into a 1D array, so that the threads access the data consecutively and make better use of the memory bus. This only requires changes to the indexing in the kernel code. The table below shows the performance after this rearrangement.

Figure 4. 1D shared memory access pattern for the row filter: the first four iterations of the convolution computation; the red area indicates the values accessed by the threads of a half warp.

Even though we obtained a fair amount of speed-up with this rearrangement, the memory access pattern is still not aligned well enough to meet the half-warp alignment requirement for optimal performance. As the table data shows, this achieved about a 3.2x speed-up over the first separable convolution implementation (at 2048 resolution). From step 0 to this point, we have made a 57x speed-up overall (at 2048 resolution). The kernel code for this optimization follows.

////////////////////////////////////////////////////////////////////////////////
// Row convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionRowGPU(float *d_Result, float *d_Data,
                                  int dataW, int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[ TILE_H * (TILE_W + KERNEL_RADIUS * 2) ];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int x;   // image based coordinate

    // original image based coordinate
    const int x0 = threadIdx.x + IMUL(blockIdx.x, blockDim.x);
    const int shift = threadIdx.y * (TILE_W + KERNEL_RADIUS * 2);

    // case1: left
    x = x0 - KERNEL_RADIUS;
    if (x < 0)
        data[threadIdx.x + shift] = 0;
    else
        data[threadIdx.x + shift] = d_Data[gLoc - KERNEL_RADIUS];

    // case2: right
    x = x0 + KERNEL_RADIUS;
    if (x > dataW - 1)
        data[threadIdx.x + blockDim.x + shift] = 0;
    else
        data[threadIdx.x + blockDim.x + shift] = d_Data[gLoc + KERNEL_RADIUS];

    __syncthreads();

    // convolution
    float sum = 0;
    x = KERNEL_RADIUS + threadIdx.x;
    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)
        sum += data[x + i + shift] * d_Kernel[KERNEL_RADIUS + i];

    d_Result[gLoc] = sum;
}

////////////////////////////////////////////////////////////////////////////////
// Column convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionColGPU(float *d_Result, float *d_Data,
                                  int dataW, int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[TILE_W * (TILE_H + KERNEL_RADIUS * 2)];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int y;   // image based coordinate

    // original image based coordinate
    const int y0 = threadIdx.y + IMUL(blockIdx.y, blockDim.y);
    const int shift = threadIdx.y * (TILE_W);

    // case1: upper
    y = y0 - KERNEL_RADIUS;
    if (y < 0)
        data[threadIdx.x + shift] = 0;
    else
        data[threadIdx.x + shift] = d_Data[gLoc - IMUL(dataW, KERNEL_RADIUS)];

    // case2: lower
    y = y0 + KERNEL_RADIUS;
    const int shift1 = shift + IMUL(blockDim.y, TILE_W);
    if (y > dataH - 1)
        data[threadIdx.x + shift1] = 0;
    else
        data[threadIdx.x + shift1] = d_Data[gLoc + IMUL(dataW, KERNEL_RADIUS)];

    __syncthreads();

    // convolution
    float sum = 0;
    for (int i = 0; i <= KERNEL_RADIUS * 2; i++)
        sum += data[threadIdx.x + (threadIdx.y + i) * TILE_W] * d_Kernel[i];

    d_Result[gLoc] = sum;
}

7. Step 4: nvidia convolution app

In step 3 we made many optimizations and improved performance greatly. There is still a bit of further optimization available: maximizing memory bandwidth by changing the block organization, as described in NVIDIA's convolution document. Instead of changing my code from step 3, I modified NVIDIA's original code to use image data, to see the difference in performance. As I explained at the very beginning, there are two versions of the convolution app from NVIDIA. The following table shows the performance of those two applications (modified versions). Compared to the result from step 3, the convolutionSeparable optimization shows a 2x speed-up (at 2048 resolution). This application's kernel code is the same as the original one from NVIDIA; only the application-side code is modified. The next table shows a couple of experiments I ran at the beginning of the top-down approach with this code (the original NVIDIA convolutionSeparable app). As we can see there, loop unrolling does not impact performance much, but the __mul24 instruction gives a 1.3x speed-up. Here is a brief performance chart from step 1 to step 4 (step 0 is excluded because of its huge numbers).

8. Application

Here are three different versions of the convolution application. The first one is my own implementation up to step 3, and the other two are the NVIDIA applications I modified to use a texture image instead of random data (see details in sections 2 & 7).

Image data: hubble.tar.gz (34MB; only includes the 1k-by-1k, 2k-by-2k, and 4k-by-4k raw images from Andy's distribution). Copy these image files into a directory named hubble at the same level as each convolution application directory (if you copied an application into xxx/convolution, the image directory must be xxx/hubble).

Download application source & executables:
convolution.tar.gz (version of step 3)
convolutionTexture.tar.gz (version of step 4, texture)
convolutionSeparable.tar.gz (version of step 4, separable)

Execution (inside bin/darwin/release):
./convolution [-i=image resolution] [-n=number of total run]
./convolutionTexture [-i=image resolution] [-n=number of total run]
./convolutionSeparable [-i=image resolution] [-n=number of total run]
The default image resolution is 1024 and the default number of runs is 10.

Compile: in each application directory, type 'make'.
9. References

[1] NVIDIA CUDA Programming Guide 2.0, http://www.nvidia.com/object/cuda_develop.html
[2] Victor Podlozhnyuk, Image Convolution with CUDA, NVIDIA CUDA 2.0 SDK convolutionSeparable document
Optimize CUDA Host/Device Transfers

justin.p.mckennon · This post is Topic #2 (part 2) in our series Parallel Code: Maximizing your Performance Potential.

In my previous post, CUDA Host/Device Transfers and Data Movement, I provided an introduction to the bottlenecks associated with host/device transfers and data movement. This post will delve a bit further into the subject and provide a few nifty ways to mitigate these very costly operations.

In every single CUDA application (well, any useful one, that is) there is at the very least one host-to-device transfer and one device-to-host transfer. More complicated applications often have many transfers between the host and device. In CUDA programming, this is one of the most expensive operations in terms of timing.

So, if these host/device data transfers are so costly, how do you avoid them? Well, you can't. But what you can do is minimize the number of transfers between host and device in your application, and mask their impact on the performance of your application. First, any intermediate data structures that are used within your kernel should be allocated and destroyed solely on the device. This removes the need to map these structures to host memory and removes the need to transfer this data between the host and device. If your application has multiple host/device transfers, every effort should be made to batch these transfers into one large transfer. I like to think of this as if you were carrying groceries: why make multiple trips out to the car when you can load up your arms and do it all at once? Most GPUs support transfer speeds between 5GB/sec and 11GB/sec.

For situations where there is no way around transferring data between host and device, more advanced techniques can be employed to lessen the impact on your application: pinned (also known as page-locked) memory and asynchronous transfers.

Pinned Memory

The cudaHostAlloc() function allows you to allocate host memory that the device can read from and write to directly. This allocated memory is called pinned memory. Pinned memory transfers attain the highest bandwidth between the host and device. During execution, a block that requires host data only needs to wait for a small portion of the data to be transferred (when operating through pinned memory). Typical host-to-device copies make all blocks wait until all of the data associated with the copy operation is transferred. Keep in mind, however, that pinning too much memory can degrade overall system performance by reducing the amount of memory available to the system for paging operations. How much memory you can safely pin differs from system to system, so definitely experiment to find the optimal amount.

Asynchronous Transfers

Standard host/device transfers are known as blocking transfers: control of the main thread is returned only after the data transfer is complete. The cudaMemcpyAsync() function is effectively a non-blocking version of the standard cudaMemcpy(). When executing an asynchronous transfer via cudaMemcpyAsync(), control is returned immediately to the main thread. If you're not jumping up and down with excitement after hearing that, you should be! Asynchronous transfers require pinned memory and make use of CUDA streams. In CUDA, streams are essentially sequences of operations that are performed in order on the device. Creating multiple streams is a bit more of an advanced CUDA technique, but one that must be learned if you want the most bang for your buck.
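Before moving on to streams, here is a minimal sketch of the pinned-memory plus asynchronous-copy combination described above. It is illustrative only: the array names and sizes are placeholders, not taken from the post.

    #include <cuda_runtime.h>

    int main() {
        const size_t N = 1 << 20;
        float *hostArray = 0, *deviceArray = 0;

        // Pinned (page-locked) host allocation: enables full-bandwidth,
        // truly asynchronous transfers.
        cudaHostAlloc((void **)&hostArray, N * sizeof(float), cudaHostAllocDefault);
        cudaMalloc((void **)&deviceArray, N * sizeof(float));

        // ... fill hostArray here ...

        // Non-blocking copy on the default stream; control returns immediately.
        cudaMemcpyAsync(deviceArray, hostArray, N * sizeof(float),
                        cudaMemcpyHostToDevice, 0);

        // Independent CPU work can run here while the copy is in flight.

        cudaStreamSynchronize(0);   // wait for the copy before depending on its result

        cudaFree(deviceArray);
        cudaFreeHost(hostArray);    // pinned memory is freed with cudaFreeHost
        return 0;
    }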
With multiple streams in a single application, operations within separate streams can be overlapped, providing a great way to mask the host/device transfer time. Let's look at an example of how using multiple streams can benefit you and your application:

    cudaMemcpyAsync(deviceArray, hostArray, size, cudaMemcpyHostToDevice, 0);
    kernel<<<gridSize, blockSize>>>(deviceArray);
    //your code

Here, both the transfer and the kernel are using the default stream, 0. During execution, the kernel will not begin until the entire copy operation is complete, because both the kernel and the memory copy are part of the same stream. Now, let's look at the code using multiple streams:

    cudaStream_t mystream1, mystream2;
    cudaStreamCreate(&mystream1);
    cudaStreamCreate(&mystream2);
    cudaMemcpyAsync(deviceArray, hostArray, size, cudaMemcpyHostToDevice, mystream1);
    kernel<<<gridSize, blockSize, 0, mystream2>>>(otherDataArray);
    //your code

By defining two new streams, we are able to make use of concurrent copy and compute. The memory copy is executing in one stream while the kernel is off in another stream, asynchronous from one another. An important note: make sure that your device supports concurrent copy and execute before you put this in all of your code. This can be checked via the deviceOverlap field of the cudaDeviceProp structure.

While this is an advanced technique, if your data can be broken into chunks and transferred in various stages, you can launch multiple kernel instances to operate on each chunk of data as it arrives on the device. Doing so will almost completely mask the transfer time between the host and device (a sketch of this pattern follows at the end of this post).

So, armed with the knowledge of streams, asynchronous transfers, and pinned memory, you now have some insight on how to squeeze out some more performance from your application. My next post will discuss how to efficiently make use of the available memory types accessible to you within your GPU application.
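As a concrete illustration of the chunking idea mentioned above, here is a hedged sketch (not from the original post; the kernel, sizes, and stream count are placeholders) that splits one large array into chunks and gives each chunk its own stream, so that copies in one stream can overlap with kernels in another.

    #include <cuda_runtime.h>

    __global__ void kernel(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;              // stand-in for real work
    }

    int main() {
        const int N = 1 << 22, nStreams = 4, chunk = N / nStreams;
        float *hostArray, *deviceArray;
        cudaHostAlloc((void **)&hostArray, N * sizeof(float), cudaHostAllocDefault);  // pinned
        cudaMalloc((void **)&deviceArray, N * sizeof(float));

        cudaStream_t streams[nStreams];
        for (int s = 0; s < nStreams; ++s) cudaStreamCreate(&streams[s]);

        // Each stream gets its own H2D copy, kernel, and D2H copy; work in
        // different streams can overlap if the device supports it (deviceOverlap).
        for (int s = 0; s < nStreams; ++s) {
            int offset = s * chunk;
            cudaMemcpyAsync(deviceArray + offset, hostArray + offset,
                            chunk * sizeof(float), cudaMemcpyHostToDevice, streams[s]);
            kernel<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(deviceArray + offset, chunk);
            cudaMemcpyAsync(hostArray + offset, deviceArray + offset,
                            chunk * sizeof(float), cudaMemcpyDeviceToHost, streams[s]);
        }
        cudaDeviceSynchronize();   // wait for all streams to drain

        for (int s = 0; s < nStreams; ++s) cudaStreamDestroy(streams[s]);
        cudaFree(deviceArray);
        cudaFreeHost(hostArray);
        return 0;
    }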
Conv2d and Tensor Cores

Hi there, I am running into an issue where a conv2d layer is not using Tensor Cores for some configurations of dilations/padding. For certain input sizes the layer uses a Tensor Core cuDNN implementation, but not for others. Are there known limitations/rules that should be followed to guarantee TCUs are used every time for 2D convolution, regardless of input size and dilation/padding parameters?

o Linux distro and version: Debian 9
o GPU type: RTX 2080 Ti
o Nvidia driver version: 410.93
o CUDA version: 10.0
o CUDNN version: 7.6.5
o Python version [if using python]: Using C++
o Tensorflow and PyTorch version:
o TensorRT version: 7.0.0.11

Thanks in advance for your help!

Hi,
For best practices, please refer to the link below:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-best-practices/index.html#optimize-layer
If possible, please share your model and configuration settings so that we can help better.
Thanks

Thanks. I have three parallel 2D conv nodes. They all have the same input size and the same output size. The difference is that one has (6,6) padding, another (12,12), and another (18,18). The kernel shape is (3,3) for all of them. The one with (6,6) always runs on TCUs regardless of input size. The one with (12,12) runs on TCUs for some input sizes. The one with (18,18) never runs on TCUs. My guess is that there is some rule that only triggers TCU usage if the input/work/output size is a multiple of 8 or something like that, but I would like to clarify what the rules are instead of trying to guess. Thanks for your help.

For best practices, please refer to the link below:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-best-practices/index.html#optimize-layer
As mentioned in the above link: tensor dimensions (or the number of input and output channels for a FullyConnected layer) that are multiples of 32 tend to have the best performance for FP16 and INT8 inference because of the utilization of Tensor Cores, if the hardware supports them. Tensor Core kernels for FP16 data require the striding between data rows to be a multiple of 8 data elements. For example, a MatrixMultiply that is M x K times K x N requires M, K, and N to be multiples of 8 to use Tensor Core optimized kernels.
Can you share more details regarding stride, dilation, and in/out channels? Also, if possible, please share the model & script file to reproduce the issue.
Thanks

Sorry, I am not allowed to share the model, but it is DeepLab with an atrous classifier. The convolutions that are giving me trouble are the atrous convolutions in the classifier.

In channels: 2048, Out channels: 256, Kernel: 3,3, Stride: 1,1, Pads: 6,6, Dilation: 6,6
In channels: 2048, Out channels: 256, Kernel: 3,3, Stride: 1,1, Pads: 12,12, Dilation: 12,12
In channels: 2048, Out channels: 256, Kernel: 3,3, Stride: 1,1, Pads: 18,18, Dilation: 18,18

The problem is that the optimizer is not able to find an optimized implementation and resorts to implicit_convolve_sgemm instead of some turing_h1688 implementation, like the other 'regular' convolutions do.

[Screenshot of the profiler layer listing omitted.]

Thanks

SunilJB: Hi @ricardo10silva, there are some additional fixes pushed in the latest release. Could you please try the latest TRT and let us know if the issue persists?
Thanks
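As an aside, the multiple-of-8 rule quoted in the reply above is straightforward to enforce when you control the network definition. The helper below is purely illustrative (it is not part of the TensorRT API or of this thread); it just rounds a dimension such as a channel count up to the next multiple of 8.

    // Round a dimension up to the next multiple of m (e.g., m = 8 for FP16 Tensor Cores).
    static int roundUp(int x, int m) {
        return ((x + m - 1) / m) * m;
    }

    // Example: roundUp(250, 8) == 256, so a hypothetical 250-channel layer would be
    // padded to 256 channels before building the engine.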
Train With Mixed Precision

Abstract

Mixed precision methods combine the use of different numerical formats in one computational workload. This document describes the application of mixed precision to deep neural network training.

1. Introduction

There are numerous benefits to using numerical formats with lower precision than 32-bit floating point. First, they require less memory, enabling the training and deployment of larger neural networks. Second, they require less memory bandwidth, which speeds up data transfer operations. Third, math operations run much faster in reduced precision, especially on GPUs with Tensor Core support for that precision. Mixed precision training achieves all these benefits while ensuring that no task-specific accuracy is lost compared to full precision training. It does so by identifying the steps that require full precision and using 32-bit floating point for only those steps, while using 16-bit floating point everywhere else.

2. Mixed Precision Training

Mixed precision training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in the Volta and Turing architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps: porting the model to use the FP16 data type where appropriate, and adding loss scaling to preserve small gradient values.

The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in CUDA 8 in the NVIDIA Deep Learning SDK.

Mixed precision is the combined use of different numerical precisions in a computational method. Half precision (also known as FP16) data, compared to higher precision FP32 or FP64, reduces the memory usage of the neural network, allowing training and deployment of larger networks, and FP16 data transfers take less time than FP32 or FP64 transfers. Single precision (also known as 32-bit) is a common floating point format (float in C-derived programming languages), and 64-bit is known as double precision (double).

Deep Neural Networks (DNNs) have led to breakthroughs in a number of areas. DNN complexity has been increasing to achieve these results, which in turn has increased the computational resources required to train these networks. One way to lower the required resources is to use lower-precision arithmetic, which has the benefits described above.

Figure 1. Training curves for the bigLSTM English language model show the benefits of the mixed-precision training techniques. The Y-axis is training loss. Mixed precision without loss scaling (grey) diverges after a while, whereas mixed precision with loss scaling (green) matches the single precision model (black).

Since DNN training has traditionally relied on the IEEE single-precision format, this guide will focus on how to train with half precision while maintaining the network accuracy achieved with single precision (as shown in Figure 1). This technique is called mixed-precision training since it uses both single- and half-precision representations.

2.1. Half Precision Format

The IEEE 754 standard defines the following 16-bit half-precision floating point format: 1 sign bit, 5 exponent bits, and 10 fractional bits. The exponent is encoded with 15 as the bias, resulting in a [-14, 15] exponent range (two exponent values, 0 and 31, are reserved for special values).
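To make the format limits concrete, they can be inspected quickly with NumPy (a small sketch, not part of the original guide; it assumes NumPy is installed):

import numpy as np

fp16 = np.finfo(np.float16)
print(fp16.max)               # 65504.0 -- largest representable FP16 magnitude
print(fp16.tiny)              # ~6.10e-05 = 2**-14, smallest normalized FP16 value
print(fp16.eps)               # ~9.77e-04 = 2**-10, spacing at 1.0 (10 fractional bits)
print(np.float16(2.0 ** -24)) # ~5.96e-08, smallest denormal; anything much smaller rounds to 0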
An implicit lead bit 1 is assumed for normalized values, just as in other IEEE floating point formats. The half precision format leads to the following dynamic range and precision: half precision dynamic range, including denormals, is 40 powers of 2. For comparison, single precision dynamic range including denormals is 264 powers of 2.

2.2. Tensor Core Math

The Volta generation of GPUs introduces Tensor Cores, which provide 8x more throughput than the single precision math pipelines. Each Tensor Core performs D = A x B + C, where A, B, C, and D are matrices. A and B are half precision 4x4 matrices, whereas D and C can be either half or single precision 4x4 matrices. In other words, Tensor Core math can accumulate half precision products into either single or half precision outputs.

In practice, higher performance is achieved when the A and B dimensions are multiples of 8. cuDNN v7 and cuBLAS 9 include some functions that invoke Tensor Core operations; for performance reasons, these require that input and output feature map sizes are multiples of 8. For more information, see the NVIDIA cuDNN Developer Guide.

The reason half precision is so attractive is that the V100 GPU has 640 Tensor Cores, which can all perform 4x4 multiplications at the same time. The theoretical peak performance of the Tensor Cores on the V100 is approximately 120 TFLOPS. This is about an order of magnitude (10x) faster than double precision (FP64) and about four times faster than single precision (FP32).

Matrix multiplies are at the core of Convolutional Neural Networks (CNNs), which are very common in deep learning. Beginning in CUDA 9 and cuDNN 7, the convolution operations are done using Tensor Cores whenever possible. This can greatly improve the training speed as well as the inference speed of CNNs or models that contain convolutions.

2.3. Considering When Training With Mixed Precision

Assuming the framework supports Tensor Core math, simply enabling the Tensor Core path in the framework trains many networks faster. You can choose the FP16 format for tensors and/or convolution/fully-connected layers and keep all the hyperparameters of the FP32 training session. For more details, refer to Frameworks.

However, some networks require their gradient values to be shifted into the FP16 representable range to match the accuracy of FP32 training sessions. The figure below illustrates one such case.

Figure 2. Histogram of activation gradient magnitudes throughout FP32 training of the Multibox SSD network. The x-axis is logarithmic, except for the zero entry. For example, 66.8% of values were 0 and 4% had magnitude in the (2^-32, 2^-30) range.

However, this isn't always the case. You may have to do some scaling and normalization to use FP16 during training.

Figure 3. Histogram of activation gradient magnitudes throughout FP32 training of the Multibox SSD network. Both x- and y-axes are logarithmic.

Consider the histogram of activation gradient values (shown with linear and log y-scales above), collected across all layers during FP32 training of the Multibox SSD detector network (VGG-D backbone). When converted to FP16, 31% of these values become zeros, leaving only 5.3% as nonzeros, which for this network leads to divergence during training. Much of the FP16 representable range was left unused by the gradient values. Therefore, if we shift the gradient values to occupy more of that range, we can preserve many values that are otherwise lost to 0s.
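As a quick illustration of this effect (a NumPy sketch with made-up gradient magnitudes, not data from the guide), small values that underflow to zero in FP16 survive once they are scaled up by a few powers of two before the cast:

import numpy as np

# Hypothetical small gradient magnitudes from the left tail of a histogram like the ones above.
grads = np.array([1e-8, 2e-8, 1e-7, 1e-6], dtype=np.float32)

print(grads.astype(np.float16))        # the two smallest entries underflow to 0 in FP16
print((grads * 8).astype(np.float16))  # scaled by 2^3 first, all four survive as nonzero FP16 values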
For this particular network, shifting by three exponent values (multiplying by 8) was sufficient to match the accuracy achieved with FP32 training by recovering the relevant values lost to 0. Shifting by 15 exponent values (multiplying by 32K) would recover all but 0.1% of the values lost to 0 when converting to FP16 and still avoid overflow. In other words, the FP16 dynamic range is sufficient for training, but gradients may have to be scaled to move them into that range to keep them from becoming zeros in FP16.

2.3.1. Loss Scaling To Preserve Small Gradient Magnitudes

As was shown in the previous section, successfully training some networks requires gradient value scaling to keep the gradients from becoming zeros in FP16. This can be achieved with a single multiplication: you can scale the loss value computed in the forward pass before starting backpropagation. By the chain rule, backpropagation then ensures that all the gradient values are scaled by the same amount. This requires no extra operations during backpropagation and keeps the relevant gradient values from becoming zeros and losing that gradient information.

Weight gradients must be unscaled before the weight update, to keep the magnitude of the updates the same as in FP32 training. It is simplest to perform this unscaling right after the backward pass but before gradient clipping or any other gradient-related computations. This ensures that no hyperparameters (such as the gradient clipping threshold, weight decay, etc.) have to be adjusted.

While many networks match FP32 training results when all tensors are stored in FP16, some require updating an FP32 copy of the weights. Furthermore, values computed by large reductions should be left in FP32. Examples of this include the statistics (mean and variance) computed by batch normalization and SoftMax. Batch normalization can still take FP16 inputs and outputs, saving half the bandwidth compared to FP32; it's just that the statistics and value adjustment should be done in FP32.

This leads to the following high-level procedure for training: maintain a primary copy of the weights in FP32; for each iteration, make an FP16 copy of the weights, run the forward pass with FP16 weights and activations, multiply the resulting loss by the scaling factor S, run the backward pass with FP16 weights, activations, and their gradients, multiply the weight gradients by 1/S, and complete the weight update (including gradient clipping, weight decay, etc.) on the FP32 primary weights.

2.3.2. Choosing A Scaling Factor

The procedure described in the previous section requires you to pick a loss scaling factor to adjust the gradient magnitudes. You can choose a large scaling factor as long as it doesn't cause overflow during backpropagation; overflow would lead to weight gradients containing infinities or NaNs, which in turn would irreversibly damage the weights during the update. These overflows can be easily and efficiently detected by inspecting the computed weight gradients, for example, when the weight gradients are multiplied by 1/S as in the procedure above.

There are several options for choosing the loss scaling factor. The simplest one is to pick a constant scaling factor. We trained a number of feed-forward and recurrent networks with Tensor Core math for various tasks. Their scaling factors ranged from 8 to 32K (many networks did not require a scaling factor at all), and they matched the accuracy achieved by FP32 training. However, since the minimum required scaling factor can depend on the network, framework, minibatch size, etc., some trial and error may be required when picking a scaling value. A constant scaling factor can be chosen more directly if gradient statistics are available: choose a value so that its product with the maximum absolute gradient value is below 65,504 (the maximum value representable in FP16). A more robust approach is to choose the loss scaling factor dynamically.
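Before moving on to dynamic scaling, here is a minimal sketch of the constant-scale recipe above, written in PyTorch (not from the guide; it assumes a CUDA device, and the toy model, data, and S = 128 are illustrative placeholders):

import torch

model = torch.nn.Linear(16, 4).cuda().half()          # FP16 weights/activations for forward and backward
master = [p.detach().float().clone().requires_grad_() for p in model.parameters()]  # FP32 primary weights
opt = torch.optim.SGD(master, lr=1e-3)
scale = 128.0                                          # constant loss scale S (illustrative choice)

x = torch.randn(8, 16, device="cuda", dtype=torch.float16)
y = torch.randn(8, 4, device="cuda", dtype=torch.float16)

loss = torch.nn.functional.mse_loss(model(x), y)
(loss * scale).backward()                              # scale the loss -> all gradients are scaled by S

opt.zero_grad()
for p, m in zip(model.parameters(), master):
    m.grad = p.grad.float() / scale                    # unscale by 1/S in FP32 before the update
opt.step()                                             # update the FP32 primary weights

for p, m in zip(model.parameters(), master):
    p.data.copy_(m.data)                               # copy updated FP32 weights back into the FP16 model
# In a full training loop you would also zero the model's FP16 gradients before the next iteration.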
The basic idea is to start with a large scaling factor and then reconsider it in each training iteration. If no overflow occurs for a chosen number of iterations N, increase the scaling factor. If an overflow occurs, skip the weight update and decrease the scaling factor. We found that as long as updates are skipped infrequently, the training schedule does not have to be adjusted to reach the same accuracy as FP32 training. Note that N effectively limits how frequently we may overflow and skip updates. The rate of scaling factor updates can be adjusted by picking the increase/decrease multipliers as well as N, the number of non-overflow iterations before the increase. We successfully trained networks with N = 2000, increasing the scaling factor by 2x and decreasing it by 0.5x; many other settings are valid as well. The dynamic loss-scaling approach leads to the following high-level training procedure: maintain a primary copy of the weights in FP32 and initialize S to a large value; for each iteration, make an FP16 copy of the weights, run the forward and backward passes with the loss multiplied by S, and check the weight gradients for infinities or NaNs; if any are found, reduce S and skip the update; otherwise multiply the weight gradients by 1/S, complete the update, and increase S if no overflow has occurred in the last N iterations.

3. Automatic Mixed Precision

Using mixed precision training requires three steps: converting the model to use the float16 data type where possible, keeping float32 primary weights to accumulate per-iteration weight updates, and using loss scaling to preserve small gradient values. Frameworks that support fully automated mixed precision training also handle the loss scaling and FP32 primary weights automatically. In those frameworks with automatic support, using mixed precision can be as simple as adding one line of code or enabling a single environment variable. Currently, the frameworks with support for automatic mixed precision are TensorFlow, PyTorch, and MXNet. Refer to NVIDIA Automatic Mixed Precision for Deep Learning for more information, along with the Frameworks section below.

4. Optimizing For Tensor Cores

NVIDIA Tensor Cores provide hardware acceleration for mixed precision training. On a V100 GPU, Tensor Cores can speed up matrix multiply and convolution operations by up to 8x in float16 over their float32 equivalents. Taking full advantage of Tensor Cores may require changes to model code. This section describes three steps you can take to maximize the benefit that Tensor Cores provide: satisfying Tensor Core shape constraints, increasing arithmetic intensity, and decreasing the fraction of work that is not accelerated by Tensor Cores. The steps are ordered by increasing complexity, and in particular, the first step (satisfying shape constraints) usually provides most of the benefit for little effort.

4.1. Satisfying Tensor Core Shape Constraints

Due to their design, Tensor Cores have shape constraints on their inputs. For matrix multiplication on FP16 inputs, M, N, and K (the dimensions of an M x K by K x N multiply) should be multiples of 8. For convolution on FP16 inputs, the numbers of input and output channels should be multiples of 8. In practice, for mixed precision training, we recommend choosing the relevant parameters -- batch size, the number of inputs and outputs of fully-connected layers, and the channels in and out of convolutional layers -- to be multiples of 8 for FP16 (and multiples of 16 for INT8).

4.2. Increasing Arithmetic Intensity

Arithmetic intensity is a measure of how much computational work is performed in a kernel per input byte. For example, a V100 GPU has 125 TFLOPs of math throughput and 900 GB/s of memory bandwidth. Taking the ratio of the two, we see that any kernel with fewer than ~140 FLOPs per input byte will be memory-bound; that is, Tensor Cores cannot run at full throughput because memory bandwidth will be the limiting factor. A kernel with sufficient arithmetic intensity to allow full Tensor Core throughput is compute-bound. It is possible to increase arithmetic intensity both in the model implementation and in the model architecture.

4.3. Decreasing Non-Tensor Core Work

Many operations in deep neural networks are not accelerated by Tensor Cores, and it is important to understand the effect this has on end-to-end speed-ups. For example, suppose that a model spends one half of the total training time in Tensor Core-accelerated operations (matrix multiplication and convolution). If Tensor Cores provide a 5x speed-up for those operations, then the total speedup will be 1. / (0.5 + (0.5 / 5.)) = 1.67x.
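The end-to-end estimate above is just Amdahl's law; a small helper (an illustrative sketch, not from the guide) makes it easy to test other splits:

def mixed_precision_speedup(tc_fraction, tc_speedup):
    """Overall speedup when only a fraction of the runtime is accelerated (Amdahl's law)."""
    return 1.0 / ((1.0 - tc_fraction) + tc_fraction / tc_speedup)

print(round(mixed_precision_speedup(0.5, 5.0), 2))  # 1.67, matching the example above
print(round(mixed_precision_speedup(0.9, 5.0), 2))  # 3.57 -- more Tensor Core work, bigger payoff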
In general, as Tensor Core operations come to represent a smaller fraction of the total work, it becomes more important to focus on optimizing the non-Tensor Core operations. It is possible to speed up these operations by hand, using custom CUDA implementations along with framework integration. Furthermore, frameworks are beginning to provide support for automatically speeding up non-Tensor Core ops with compiler tools. Examples include XLA for TensorFlow and the PyTorch JIT.

5. Multi-GPU Training

For multi-GPU training, the same loss scaling strategy applies. NCCL supports both half precision floats and normal floats; therefore, a developer can choose which precision they want to use to aggregate gradients. Batch size considerations depend on your training framework.

6. Prerequisites

To take advantage of mixed precision training, ensure you meet the minimum hardware and software requirements: a GPU with Tensor Cores and a supported CUDA, cuDNN, and framework stack. If using an NVIDIA optimized framework container pulled from the NGC container registry, you will still need to install an NVIDIA driver on your base operating system; however, CUDA and cuDNN will come included in the container. For more information, refer to the Frameworks Support Matrix.

7. Frameworks

Most major deep learning frameworks have begun to merge support for half precision training techniques that exploit Tensor Core calculations in Volta and Turing. Additional optimization pull requests are at various stages and listed in their respective sections. For NVCaffe, Caffe2, MXNet, Microsoft Cognitive Toolkit, PyTorch, TensorFlow and Theano, Tensor Core acceleration is automatically enabled if FP16 storage is enabled. While frameworks like Torch will tolerate the latest architecture, they currently do not exploit Tensor Core functionality.

7.1. PyTorch

PyTorch includes support for FP16 storage and Tensor Core math. To achieve optimum performance, you can train a model using Tensor Core math and mixed precision.

7.1.1. Automatic Mixed Precision Training In PyTorch

The automatic mixed precision feature is available starting inside the NVIDIA NGC PyTorch 19.03+ containers. To get started, we recommend using AMP (Automatic Mixed Precision), which enables mixed precision in only three lines of Python. AMP is available through NVIDIA's Apex repository of mixed precision and distributed training tools. The AMP API is documented in detail here.

7.1.2. Success Stories

The models where we have seen speedup using mixed precision are:

7.1.3. Tensor Core Optimized Model Scripts For PyTorch

The Tensor Core examples provided on GitHub focus on achieving the best performance and convergence from NVIDIA Volta Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision Tensor Cores on Volta, therefore you can get results much faster than training without Tensor Cores. Each model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time. The model scripts are available on GitHub as well as on NVIDIA GPU Cloud (NGC).

7.1.4. Manual Conversion To Mixed Precision In PyTorch

We recommend using AMP to implement mixed precision in your model. However, if you wish to implement mixed precision yourself, refer to our GTC talk on manual mixed precision (video, slides).
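For reference, the "three lines of Python" mentioned in 7.1.1 look roughly like the following (a sketch using the Apex AMP API; Apex must be installed separately, and the toy model, optimizer, and data are placeholders, not from the guide):

import torch
from apex import amp  # NVIDIA Apex, installed separately

model = torch.nn.Linear(16, 4).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Line 1: let AMP patch the model and optimizer (O1 inserts casts and enables dynamic loss scaling).
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(8, 16, device="cuda")
loss = model(x).pow(2).mean()

optimizer.zero_grad()
# Lines 2-3: scale the loss for the backward pass; AMP unscales the gradients before optimizer.step().
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()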
7.2. TensorFlow

TensorFlow supports FP16 storage and Tensor Core math. Models that contain convolutions or matrix multiplications using the tf.float16 data type will automatically take advantage of Tensor Core hardware whenever possible. In order to make use of Tensor Cores, FP32 models will need to be converted to use a mix of FP32 and FP16. This can be done either automatically using automatic mixed precision (AMP) or manually.

7.2.1. Automatic Mixed Precision Training In TensorFlow

For models already using a tf.train.Optimizer or tf.keras.optimizers.Optimizer for both compute_gradients() and apply_gradients() operations (for example, by calling optimizer.minimize() or model.fit()), automatic mixed precision can be enabled by wrapping the optimizer with tf.train.experimental.enable_mixed_precision_graph_rewrite().

Graph-based example:

opt = tf.train.AdamOptimizer()
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
train_op = opt.minimize(loss)

Keras-based example:

opt = tf.keras.optimizers.Adam()
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
model.compile(loss=loss, optimizer=opt)
model.fit(...)

You can also set the environment variable inside a TensorFlow Python script. Issue the following code at the beginning of the script:

import os
os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'

When enabled, automatic mixed precision will do two things: insert the appropriate float16 casts into the graph and enable automatic loss scaling. For more information on automatic mixed precision, refer to the NVIDIA TensorFlow User Guide.

7.2.2. Success Stories

The models where we have seen speedup using mixed precision are:

7.2.3. Tensor Core Optimized Model Scripts For TensorFlow

The Tensor Core examples provided on GitHub focus on achieving the best performance and convergence from NVIDIA Volta Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision Tensor Cores on Volta, therefore you can get results much faster than training without Tensor Cores. Each model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

7.2.4. Manual Conversion To Mixed Precision Training In TensorFlow

Procedure: keep the compute-heavy math in FP16, keep the variables in FP32 and cast them for use, compute the loss in FP32, and apply loss scaling manually.

dtype = tf.float16
data = tf.placeholder(dtype, shape=(nbatch, nin))
weights = tf.get_variable('weights', (nin, nout), dtype)
biases = tf.get_variable('biases', nout, dtype, initializer=tf.zeros_initializer())
logits = tf.matmul(data, weights) + biases

tf.cast(tf.get_variable(..., dtype=tf.float32), tf.float16)

tf.losses.softmax_cross_entropy(target, tf.cast(logits, tf.float32))

loss, params = ...
scale = 128
grads = [grad / scale for grad in tf.gradients(loss * scale, params)]
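Putting the fragments above together, a manual mixed precision layer might look roughly like this (a sketch assuming the TensorFlow 1.x graph API; nbatch, nin, nout, and the loss-scale value are illustrative placeholders):

import tensorflow as tf  # assumes TensorFlow 1.x

nbatch, nin, nout = 64, 1024, 1000
loss_scale = 128.0

data = tf.placeholder(tf.float16, shape=(nbatch, nin))
target = tf.placeholder(tf.float32, shape=(nbatch, nout))

# Keep the trainable variables in FP32 and cast them to FP16 for the math.
weights = tf.cast(tf.get_variable('weights', (nin, nout), tf.float32), tf.float16)
biases = tf.cast(tf.get_variable('biases', (nout,), tf.float32,
                                 initializer=tf.zeros_initializer()), tf.float16)

logits = tf.matmul(data, weights) + biases                                    # FP16 math, Tensor Core eligible
loss = tf.losses.softmax_cross_entropy(target, tf.cast(logits, tf.float32))  # compute the loss in FP32

params = tf.trainable_variables()
# Scale the loss, compute gradients, then unscale them before the update.
grads = [g / loss_scale for g in tf.gradients(loss * loss_scale, params)]
train_op = tf.train.MomentumOptimizer(0.01, 0.9).apply_gradients(list(zip(grads, params)))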
7.3. MXNet

MXNet includes support for FP16 storage and Tensor Core math. To achieve optimum performance, you need to train a model using Tensor Core math and FP16 mode on MXNet. The following procedure is typical for when you want to have your entire network in FP16. Alternatively, you can take the output from any layer and cast it to FP16; subsequent layers will be in FP16 and will use Tensor Core math if applicable.

7.3.1. Automatic Mixed Precision Training In MXNet

The automatic mixed precision feature is available starting inside the NVIDIA NGC MXNet 19.04+ containers. Training deep learning networks is a very computationally intensive task. Novel model architectures tend to have an increasing number of layers and parameters, which slows down training. Fortunately, new generations of training hardware as well as software optimizations make training these new models a feasible task. Most of the hardware and software training optimization opportunities involve exploiting lower precision, like FP16, in order to utilize the Tensor Cores available on new Volta and Turing GPUs. While training in FP16 showed great success in image classification tasks, other more complicated neural networks typically stayed in FP32 due to difficulties in applying the FP16 training guidelines that are needed to ensure proper model training. That is where AMP (Automatic Mixed Precision) comes into play: it automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit, while conservatively keeping operations that are unsafe to do in FP16 in full FP32 precision. The MXNet AMP tutorial, located in /opt/mxnet/nvidia-examples/AMP/AMP_tutorial.md inside this container, shows how to get started with mixed precision training using AMP for MXNet, using the SSD network from GluonCV as an example.

7.3.2. Tensor Core Optimized Model Scripts For MXNet

The Tensor Core examples provided on GitHub focus on achieving the best performance and convergence from NVIDIA Volta Tensor Cores by using the latest deep learning example networks and model scripts for training. Each example model trains with mixed precision Tensor Cores starting with the Volta architecture, therefore you can get results much faster than training without Tensor Cores. Each model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

7.3.3. Manual Conversion To Mixed Precision Training In MXNet

Procedure: prepare the dataset index, cast the network input to FP16, scale the loss, and rescale the gradients in the optimizer.

python tools/rec2idx.py <path to .rec file> <path to newly created .idx file>

mxnet.sym.Cast(data=input_data, dtype=numpy.float16)

mxnet.sym.SoftmaxOutput(other_args, grad_scale=128.0)

mxnet.optimizer.SGD(other_args, rescale_grad=1.0/128)

When training in FP16, it is best to use multi-precision optimizers that keep the weights in FP32 and perform the backward pass in FP16. For example, for SGD with momentum, you would issue the following:

mxnet.optimizer.SGD(other_args, momentum=0.9, multi_precision=True)

Alternatively, you can pass 'multi_precision': True to the optimizer_params option in the model.fit method.
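Assembled into one place, the fragments above might look roughly like the following symbol-API sketch (not from the guide; the layer size and the 128x gradient scale are illustrative, and the exact API may vary between MXNet versions):

import mxnet as mx
import numpy as np

data = mx.sym.Variable('data')
data16 = mx.sym.Cast(data=data, dtype=np.float16)                      # cast the input to FP16
fc = mx.sym.FullyConnected(data=data16, num_hidden=10, name='fc')      # FP16 math, Tensor Core eligible
out = mx.sym.SoftmaxOutput(data=fc, grad_scale=128.0, name='softmax')  # scale the gradients at the loss

# Multi-precision SGD keeps an FP32 copy of the weights and undoes the 128x scaling.
opt = mx.optimizer.SGD(learning_rate=0.01, momentum=0.9,
                       rescale_grad=1.0 / 128, multi_precision=True)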
7.4. Caffe2

Caffe2 includes support for FP16 storage and Tensor Core math. To achieve optimum performance, you can train a model using Tensor Core math and FP16 mode on Caffe2. When training a model on Caffe2 using Tensor Core math and FP16, the following actions need to take place:

7.4.1. Running FP16 Training On Caffe2

Procedure:

python caffe2/python/examples/resnet50_trainer.py --train_data <path> --test_data <path> --num-gpus <int> --batch-size <int> --dtype float16 --enable-tensor-core --cudnn_workspace_limit_mb 1024 --image_size 224

To see the full list of options, run:

caffe2/python/examples/resnet50_trainer.py --help

7.5. Microsoft Cognitive Toolkit

Microsoft Cognitive Toolkit includes support for FP16 storage and Tensor Core math. To achieve optimum performance, you need to train a model using Tensor Core math and FP16 mode on Microsoft Cognitive Toolkit.

7.5.1. Running FP16 Training On Microsoft Cognitive Toolkit

Tensor Core math is turned on by default in FP16. The following procedure is typical of Microsoft Cognitive Toolkit using FP16 in a multi-layer perceptron MNIST example.

import cntk as C
import numpy as np

input_dim = 784
num_output_classes = 10
num_hidden_layers = 1
hidden_layers_dim = 200

# Input variables denoting the features and label data
feature = C.input_variable(input_dim, np.float32)
label = C.input_variable(num_output_classes, np.float32)
feature16 = C.cast(feature, np.float16)
label16 = C.cast(label, np.float16)

with C.default_options(dtype=np.float16):
    # Instantiate the feedforward classification model
    scaled_input16 = C.element_times(C.constant(0.00390625, dtype=np.float16), feature16)
    z16 = C.layers.Sequential([
        C.layers.For(range(num_hidden_layers),
                     lambda i: C.layers.Dense(hidden_layers_dim, activation=C.relu)),
        C.layers.Dense(num_output_classes)])(scaled_input16)
    ce16 = C.cross_entropy_with_softmax(z16, label16)
    pe16 = C.classification_error(z16, label16)

z = C.cast(z16, np.float32)
ce = C.cast(ce16, np.float32)
pe = C.cast(pe16, np.float32)

# fake data with batch_size = 5
batch_size = 5
feature_data = np.random.randint(0, 256, (batch_size, 784)).astype(np.float32)
label_data = np.eye(num_output_classes)[np.random.randint(0, num_output_classes, batch_size)]
ce.eval({feature: feature_data, label: label_data})
7.5.2. Microsoft Cognitive Toolkit FP16 Example

For a more complete example of ResNet-50 with distributed training, refer to the TrainResNet_ImageNet_Distributed.py example.

7.6. NVCaffe

NVCaffe includes support for FP16 storage and Tensor Core math. To achieve optimum performance, you can train a model using Tensor Core math and FP16 mode on NVCaffe.

7.6.1. Running FP16 Training On NVCaffe

Procedure: edit the training prototxt:

caffe$ vim models/resnet50/train_val_fp16.prototxt

Change the batch_size: 32 setting to [64...128] * <number of GPUs installed>, and use FP16 for storage and math:

default_forward_type: FLOAT16
default_backward_type: FLOAT16
default_forward_math: FLOAT16
default_backward_math: FLOAT16

Adjust the solver by adding solver_data_type: FLOAT16 to the file models/resnet50/solver_fp16.prototxt. To enable adaptive gradient scaling, also set:

global_grad_scale_adaptive: true

Then launch training:

caffe$ ./models/resnet50/train_resnet50_fp16.sh

I0806 06:54:20.037241 276 parallel.cpp:79] Overall multi-GPU performance: 5268.04 img/sec

The performance number of 5268 img/sec was measured on an 8-GPU system. For a single-GPU system, you could expect around 750 img/sec of training throughput with NVCaffe. To plot the training results, run:

caffe$ python plot_top5.py -s models/resnet50/logs/resnet50_fp16.log

Your output should look similar to the following:

Figure 4. ResNet-50 FP16 training log

7.6.2. NVCaffe FP16 Example

For examples on optimization, refer to the models/resnet50/train_val_fp16.prototxt file.

8. Deploying DNNs

After you have trained a neural network, you can optimize and deploy the model for GPU inferencing with TensorRT™. For more information about optimizing and deploying using TensorRT, refer to the NVIDIA TensorRT documentation.

9. FAQs

9.1. General FAQs

Q: What additional resources are available for how to use mixed precision?
A: Here are some additional resources that can help with understanding mixed precision:

Q: What is Automatic Mixed Precision (AMP) and how can it help with training my model?
A: Automatic Mixed Precision (AMP) makes all the required adjustments to train models using mixed precision, providing two benefits over manual operations: The benefits of mixed precision training are: For more information, refer to Automatic Mixed Precision for Deep Learning.

Q: How does AMP automate mixed precision?
A: Using mixed precision training requires two steps: porting the model to use the FP16 data type where appropriate, and adding loss scaling to preserve small gradient values. AMP automates both these steps. In particular, in TF-AMP this is controlled by means of a single environment variable.

Q: How does dynamic scaling work?
A: Dynamic loss scaling basically attempts to ride the edge of the highest loss scale it can use without causing gradient overflow, to make full use of the FP16 dynamic range. It does so by beginning with a high loss scale value (say, 2^24), then in each iteration, checking the gradients for overflows (infs/NaNs).
If none of the gradients overflowed, the gradients are unscaled (in FP32) and optimizer.step() is applied as usual. If an overflow was detected, optimizer.step is patched to skip the actual weight update (so that the inf/NaN gradients do not pollute the weights) and the loss scale is reduced by some factor F (F=2 by default). This takes care of reducing the loss scale to a range where overflows are not produced. However, that is only half the story. What if, at some later point, training has stabilized and a higher loss scale is permissible? For example, later in training, gradient magnitudes tend to be smaller, and may require a higher loss scale to prevent underflow. Therefore, dynamic loss scaling also attempts to increase the loss scale by a factor of F every N iterations (N=2000 by default). If increasing the loss scale causes an overflow once more, the step is skipped and the loss scale is reduced back to the pre-increase value as usual. In this way, by reducing the loss scale whenever a gradient overflow is encountered and intermittently attempting to increase it, the goal of riding the edge of the highest loss scale that can be used without causing overflow is (roughly) accomplished.

Q: How do you increase the batch size when AMP is enabled? Do you just increase the batch size by 2?
A: It depends on how much memory you saved, which depends on the model. A quick way is to run watch -n 0.5 nvidia-smi from a separate terminal while you launch your run, to see how much device memory you're using. In general, using a larger batch per GPU tends to improve utilization, as long as you obey the guidelines to allow Tensor Core usage (refer to Issue #221 for more information).

Q: How is the AllowList/DenyList/InferList determined? What are the corresponding ops that are in each list?
A: We determine these based on our experience with numeric stability from our research. AllowList operations are operations that take advantage of our GPU Tensor Cores. DenyList operations are operations that may overflow the range of FP16 or require the higher precision of FP32. InferList operations are operations that are safely done in either FP32 or FP16. Typical ops included in each list are: To view/review, modify, and recompile to experiment, or to use environment variables in our container to modify the AllowList/DenyList, see:

Q: What are the minimum hardware and software requirements to use AMP?
A: In order to run AMP effectively, you need Tensor Cores in your GPU; for training, we recommend V100, and for inference, we recommend T4. You can access this hardware through cloud service providers (AWS, Azure or Google Cloud). When using a framework, TensorFlow 1.14 supports AMP natively, and support for AMP is also available in NVIDIA's containers 19.07+. In PyTorch 1.0, AMP is available through Apex.

Q: How do I enable AMP for my deep learning training?
A: Enabling AMP is framework dependent.

In TensorFlow:

tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

In PyTorch (via Apex):

model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()

In MXNet:

amp.init()
amp.init_trainer(trainer)
with amp.scale_loss(loss, trainer) as scaled_loss:
    autograd.backward(scaled_loss)

Q: What are the models that are suitable for AMP? And what kind of speed-up can I expect?
A: All models are suitable for AMP, although the speed-up may vary from model to model. The following table provides some examples of the speed-up for different models. [Table: per-model speed-ups, reporting accuracy metrics (Top 1%, mAP, BLEU, HR) and throughput in samples/s (for example, 257,687 smp/s vs 500,375 smp/s); values are measured with the model running on DGX-1V 8xGPU 16G, DGX-1V 8xGPU 32G, or DGX-2V 16xGPU 32G.] When enabling AMP, there are other aspects to consider, such as the reduction in memory and in bandwidth needed to train the mixed precision model.

Q: How much faster will my model run with AMP?
A: There are no precise rules for mixed precision speedups, but here are a few guidelines:

Q: How do I see reduced memory consumption?
A: In TensorFlow, set the allow_growth flag so it only allocates what it needs, and view the usage in nvidia-smi. For PyTorch, nvidia-smi can show memory utilization. The best way to test is to try a larger batch size that would have otherwise led to out-of-memory errors when AMP is not enabled.

Q: What if I have already implemented manual mixed precision? How will AMP further improve my model performance, and what benefits should I expect from AMP?
A: If the code is already written in such a way as to follow the NVIDIA Mixed Precision Training Guide, then AMP will leave things as they are.

Q: Why do I observe only a little speedup with AMP turned on?
A: First, you need to identify the bottleneck in your workflow: is it data I/O or compute bound? To find out what is limiting the performance of your workflow, use DLProf to profile it. If the slowest part of the workflow is on the GPU, check whether the layers of your model are actually making use of mixed precision. This can be done in a TensorBoard extension after profiling your network with DLProf, or manually by profiling with Nsight Systems or nvprof and looking for kernel names including the strings [i|s|h]884 or [i|s|h]1688 (for example, volta_h884gemm_… or turing_fp16_s1688cudnn_fp16_…). Some layers of a network are DenyListed, meaning that they cannot use mixed precision for accuracy reasons. The DenyList is framework dependent. Refer to the following resources for more information: Furthermore, Tensor Cores optimize GEMM (generalized (dense) matrix-matrix multiply) operations; there are restrictions on the dimensions of the matrices in order to effectively optimize such operations:

Q: Is accuracy worse when AMP is turned on?
A: AMP is designed to leave accuracy unchanged with respect to FP32 training, and in practice we have never observed noticeable degradation of accuracy when training with AMP.

Q: What if the model code crashes after I have enabled AMP?
A: First, make sure that your model doesn't crash without using AMP. Then, if you have experienced such issues after enabling AMP, file a bug.
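As a small aid for the profiling tip above, kernel names exported from nvprof or Nsight Systems can be filtered for the Tensor Core naming patterns (a sketch; the example kernel names below are made up for illustration):

import re

# Tensor Core kernels typically contain 884 (Volta) or 1688 (Turing) in their names.
TC_PATTERN = re.compile(r'[ish]884|[ish]1688')

kernels = [
    "volta_h884gemm_128x128_nn",            # hypothetical profiler output
    "turing_fp16_s1688cudnn_fp16_example",  # hypothetical profiler output
    "implicit_convolve_sgemm",              # not a Tensor Core kernel
]
for name in kernels:
    tag = "Tensor Core" if TC_PATTERN.search(name) else "non-TC"
    print(f"{tag:11s} {name}")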
Q: How do I know that AMP is working for me or that Tensor Cores are being enabled?
A: The log output indicates whether AMP is working, and it is framework specific. In TensorFlow, for instance, you will see log messages similar to the following: 'Converted 405/4897 nodes to float16 precision using 2 cast(s) to float16 (excluding Const and Variable casts)'.

9.2. TensorFlow FAQs

Q: Is Automatic Mixed Precision (AMP) dependent on a TensorFlow version or can any TensorFlow version enable AMP?
A: AMP is available in the NGC TensorFlow containers starting from 19.03 and was initially enabled using the TF_ENABLE_AUTO_MIXED_PRECISION=1 environment variable. It is now enabled by wrapping the optimizer object as follows:

opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

More information is available in the following webinar. Starting with TensorFlow 1.14, AMP will be available natively in the framework.

Q: What is the scheme for TensorFlow to decide which operations to cast to FP16 (which level of the graph or where to decide)? Does TensorFlow also keep a DenyList and an AllowList like PyTorch?
A: Our GTC Silicon Valley session S91029, Automated Mixed-Precision Tools for TensorFlow Training, discusses how this works. TensorFlow also uses the DenyList and AllowList concepts, but with some subtle differences because TensorFlow has the advantage of a static graph to analyze and convert.

Q: What is TF-AMP and what is its goal?
A: The top-level goal is that our customers who use TensorFlow to train on V100 have a great mixed precision training experience, utilizing all the acceleration the hardware can offer. That means accuracy that matches FP32 and real speedups without much manual effort. In practice, achieving that goal requires a few things to happen:

Q: How is TF-AMP implemented?
A: TF-AMP optimizes the model graph mainly by inserting cast operations and enabling automatic loss scaling. It is possible to separately enable the automatic insertion of cast operations and automatic loss scaling. For more details, refer to the NVIDIA TensorFlow User Guide. It must be emphasized that this is only one part of making mixed precision successful; the most important part is to ensure that these changes do not reduce accuracy.

Q: Is AMP dependent on a TensorFlow version or can any TensorFlow version enable AMP?
A: AMP is available in the NGC TensorFlow containers. Furthermore, AMP is available with the official distribution of TensorFlow starting with version 1.14. More information is available in the following webinar.

Q: How does AMP know which layers of the model to optimize?
A: AMP maintains lists of the layers that can be optimized; the TensorFlow list is located here. TensorFlow has the advantage of a static graph to analyze and convert with respect to other frameworks. Our GTC Silicon Valley session S91029, Automated Mixed-Precision Tools for TensorFlow Training, discusses how this works in more detail.

Q: How can I see what changes automatic mixed precision makes to my model?
A: Because automatic mixed precision operates at the level of TensorFlow graphs, it can be challenging to quickly grasp the changes it makes: often it will tweak thousands of TensorFlow operations, but those correspond to many fewer logical layers.
You can set the environment variable TF_CPP_VMODULE="auto_mixed_precision=2" to see a full log of the decisions automatic mixed precision makes (note that this may generate a lot of output).

Q: Why do I see only FP32 datatypes in my saved model GraphDef?
A: When you save a model graph or inspect the graph with Session.graph or Session.graph_def, TensorFlow returns the unoptimized version of the graph. TF-AMP works as an optimization pass over the original graph, so its changes are not included in the unoptimized graph. You can set the environment variable TF_AMP_LOG_PATH=some_directory, and TF-AMP will save pre- and post-optimization copies of each graph it processes to that directory. There will be many hard-to-distinguish graph files since TensorFlow processes initialization (for example) as a disjoint graph.

Q: Why do I see step=0 repeated multiple times when training with TF-AMP?
A: The automatic loss scaling algorithm that TF-AMP enables can choose to "skip" training iterations as it searches for the optimal loss scale. When it does so, it does not increment the global step count. Since most of the skips occur at the beginning of training (usually fewer than ten iterations), this behavior manifests as multiple iterations where the step counter stays at zero.

Q: How are user-defined custom TF operations handled?
A: By default, TF-AMP will leave alone any op types it doesn't know about, including custom operations. That means the types of the op's inputs and outputs are not changed, and TF-AMP will insert casts as necessary to interoperate with the rest of the (possibly-changed) graph. If you would like to make TF-AMP aware of a custom op type, there are three environment variables you can use (one for each of the AllowList, InferList, and DenyList). Each of these environment variables takes a comma-separated list of string op names. For example, you might set export TF_AMP_ALLOWLIST_ADD=MyOp1,MyOp2. The op name is the string name used in the call to REGISTER_OP, which corresponds to the name attribute on the operation's OpDef.

Q: Can I change the algorithmic behavior of automatic mixed precision?
A: The primary lever for controlling automatic mixed precision behavior is to manipulate which ops lie on each of the AllowList, InferList, and DenyList. You can add ops to each using the three environment variables above, and there is a corresponding variable TF_AUTO_MIXED_PRECISION_GRAPH_REWRITE_{ALLOWLIST,INFERLIST,DENYLIST}_REMOVE to take built-in ops off of each list.

Q: Why doesn't my model achieve full accuracy when I enable AMP?
A: The most likely explanation is that loss scaling is not being applied during gradient evaluation. This can happen if the optimizer is not wrapped by tf.train.experimental.enable_mixed_precision_graph_rewrite() or if gradients are computed directly using tf.gradients() rather than with Optimizer.minimize() or Optimizer.compute_gradients().

Q: Do we have examples or documentation showing how to use AMP with tf.gradients() along with static and/or dynamic loss scaling?
A: For static loss scaling, it's straightforward:

loss = some_loss()
loss *= loss_scale  # Scale by the loss scale
scaled_grads = tf.gradients(loss, ...)  # Compute gradients
# Now unscale, handling sparse grads
grads = []
for scaled_grad in scaled_grads:
    if scaled_grad is not None:
        if isinstance(scaled_grad, tf.IndexedSlices):
            grads.append(tf.IndexedSlices(scaled_grad.values * (1. / loss_scale),
                                          scaled_grad.indices,
                                          scaled_grad.dense_shape))
        else:
            grads.append(scaled_grad * (1. / loss_scale))
    else:
        grads.append(None)
# Now use `grads` as you would normally
9.3. PyTorch FAQs

Q: Is Automatic Mixed Precision (AMP) dependent on a PyTorch version or can any PyTorch version enable AMP?
A: AMP with CUDA and CPP extensions requires PyTorch 1.0 or later. The Python-only build might be able to work with PyTorch 0.4; however, 1.0+ is strongly recommended.

Q: How does dynamic scaling choose a good scaling factor?
A: Dynamic loss scaling basically attempts to ride the edge of the highest loss scale it can use without causing gradient overflow, to make full use of the FP16 dynamic range. It does so by beginning with a high loss scale value (say, 2^24), then in each iteration, checking the gradients for overflows (infs/NaNs). If none of the gradients overflowed, the gradients are unscaled (in FP32) and optimizer.step() is applied as usual. If an overflow was detected, optimizer.step is patched to skip the actual weight update (so that the inf/NaN gradients do not pollute the weights) and the loss scale is reduced by some factor F (F=2 by default). This takes care of reducing the loss scale to a range where overflows are not produced. However, that is only half the story. What if, at some later point, training has stabilized and a higher loss scale is permissible? For example, later in training, gradient magnitudes tend to be smaller, and may require a higher loss scale to prevent underflow. Therefore, dynamic loss scaling also attempts to increase the loss scale by a factor of F every N iterations (N=2000 by default). If increasing the loss scale causes an overflow once more, the step is skipped and the loss scale is reduced back to the pre-increase value as usual. In this way, by reducing the loss scale whenever a gradient overflow is encountered and intermittently attempting to increase it, the goal of riding the edge of the highest usable loss scale is (roughly) accomplished.

Q: How do you increase the batch size when AMP is enabled? Do you just increase the batch size by 8?
A: It depends on how much memory you saved, which depends on the model. A quick-and-dirty way is to run watch -n 0.5 nvidia-smi from a separate terminal while you launch your run, to see how much device memory you're using. In general, using a larger batch per GPU tends to improve utilization, as long as you obey the guidelines to allow Tensor Core usage (refer to Issue #221 for more information).

Q: Is AMP dependent on a PyTorch version or can any PyTorch version enable AMP?
A: AMP with CUDA and CPP extensions requires PyTorch 1.0 or later. The Python-only build might be able to work with PyTorch 0.4; however, 1.0+ is strongly recommended.

Q: How do I use O0, O1, O2, and O3? Which is recommended for AMP, and what are the differences?
A: In the future, AMP O1 functionality will be moved upstream.

Q: Can AMP save checkpoints of the model in FP32?
A: With O1, checkpoints of the model will be saved in FP32. With O2, checkpoints of the model will not be saved in FP32, and the optimizer primary weights must be saved separately. The best practice is to always use O1 when saving checkpoints.

9.4. MXNet FAQs

Q: Is Automatic Mixed Precision (AMP) dependent on an MXNet version or can any MXNet version enable AMP?
A: AMP is available in the NGC MXNet container starting from 19.04. Starting with MXNet 1.5, AMP will be available natively in the upstream framework. Notices Notice This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality. NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice. Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete. NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document. NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk. NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs. No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. 
Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA. Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

Google Android, Android TV, Google Play and the Google Play logo are trademarks of Google, Inc.

Trademarks

NVIDIA, the NVIDIA logo, CUDA, Merlin, RAPIDS, Triton Inference Server, Turing and Volta are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
Tips for Optimizing GPU Performance Using Tensor Cores

Originally published at: Tips for Optimizing GPU Performance Using Tensor Cores | NVIDIA Technical Blog

Our most popular question is "What can I do to get great GPU performance for deep learning?" We've recently published a detailed Deep Learning Performance Guide to help answer this question. The guide explains how GPUs process data and gives tips on how to design networks for better performance. We also take a close look at Tensor Core…

Thanks for the post! I have a question about enabling Tensor Cores. I wonder where we should set the value for "the batch size and number of inputs and outputs, for a fully-connected layer and channels in and out, for a convolutional layer"?

Glad you enjoyed the post! That depends on how you are running your network. In our APIs and most frameworks, you can specify these parameters when you define a layer and its inputs and outputs. Are you using cuBLAS or cuDNN, or a particular framework?

Hi Valerie, this blog said, "Earlier versions of cuDNN required the channel dimension of all tensors be a multiple of 8. That constraint no longer applies to packed NCHW data; cuDNN now automatically pads the tensors as needed." But this post says: "We recommend ensuring all such parameters are multiples of 8 when training with FP16 and multiples of 16 when training with INT8. These include batch size and number of inputs and outputs, for a fully-connected layer and channels in and out, for a convolutional layer." For a convolutional layer, is it necessary to ensure channel dimensions are multiples of 8?

This is a very good question! The blog you linked to is correct: with data in the NCHW layout, cuDNN performs automatic padding of channel in and out counts of convolutional layers, so in that case Tensor Cores will activate even when channels in and out are not set to multiples of 8. For brevity, this post focused on the strictest version of these rules: when using data in the NHWC layout, automatic padding won't occur. We talk about the difference between these formats in the Tensor Layouts section of the Deep Learning Performance Guide, if you'd like to read more. The Channels In and Out section of the guide also explains in more detail how this affects the rules for channel counts. (Channels In and Out describes special case kernels for layers with four input channels as well, which may be of interest!)

Thanks for the reply. I am using Caffe. I see that we can define a layer and its outputs; I guess the value of inputs in this case will be the outputs from the last layer. But I am not sure I can define the batch size.

Hi, I wonder if Tensor Cores have a wave quantization problem, since different GPUs have different numbers of Tensor Cores?

I'm a little rusty with Caffe, but if memory serves, the batch size is controlled by the shape of the tensor you use as input during training or inference, which is probably defined in the data layer. So the first dimension of your net.blobs, or other form of data tensor, would be the batch size for all layers.

Which framework works best for Tensor Cores?

The overall Tensor Core count of a GPU doesn't have a separate wave quantization effect. SM-based wave quantization, the sort that we talk about in this post, occurs because layer parameters can be set to any value. So we can choose an inefficient batch size such that the training work can't be divided evenly among SMs once split into tiles / thread blocks.
You don't need to worry about this issue at the Tensor Cores level because the tile sizes available are designed to allow efficient work by the set of Tensor Cores in an SM! That question is very complex! There isn't any single preferred framework. Our Training With Mixed Precision guide, and in particular this section explaining how to set up and optimize for Tensor Cores in various frameworks, might be a good place to start. I see. Thank you! Do you mean that the tie sizes of Tensor cores are more flexible than the tie size options in cuBLAS? My wording wasn't precise, sorry! By tile sizes, I mean those available in cuBLAS. This sort of tiling doesn't occur at the Tensor Cores level. To illustrate, consider our feed-forward layer example from the post again. With a batch size of 2048, the equivalent output matrix would have dimensions of 4096 x 2048. Assuming the 256 x 128 tile size is used, 16 x 16 = 256 total tiles are created. These tiles can't be split evenly between the 80 SMs on a Tesla V100 GPU, so this case suffers from wave quantization. With a tile size of 256 x 128, each SM handles the thread block for one tile at a time. The amount of work done by this thread block is controlled by the tile size, and we design the available tile sizes such that the corresponding thread blocks can be calculated jointly by the Tensor Cores on an SM with maximum efficiency. So you don't need to worry about wave quantization at this level. Put another way: a Tesla V100 GPU has 80 SMs, and each SM has 8 Tensor Cores, for a total of 640 Tensor Cores on the GPU. However, wave quantization depends directly on the number of SMs and the tile size; the number of Tensor Cores isn't itself relevant. (Your intuition that the number of Tensor Cores affects quantization is correct in that the number of Tensor Cores is 8 times the number of SMs on this GPU; the information is already being taken into account!) Hope this makes it clearer! I see it now. Thank you so much for the reply! This answer is great! Thanks for your reply! I have another question. In the cuDNN Developer Guide: "For algorithms other than *_ALGO_WINOGRAD_NONFUSED, when the following requirements are met, the cuDNN library will trigger the Tensor Core operations: The number of input and output feature maps is a multiple of 8." Question: For algorithms *_ALGO_WINOGRAD_NONFUSED, what are the requirements ? Because the TF_ENABLE_WINOGRAD_NONFUSED variable is enabled by default Related topics Powered by Discourse, best viewed with JavaScript enabled
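To make the wave-quantization arithmetic from the thread above concrete, here is a small host-side C++ sketch (not from the thread itself; the GEMM shape, tile size, and SM count simply mirror the V100 example discussed above) that counts thread-block tiles and waves:

```cpp
#include <cstdio>

// Number of thread-block tiles and waves for an M x N output matrix,
// given a tile size and SM count (one tile per SM at a time, as in the
// example above).
void wave_quantization(int M, int N, int tile_m, int tile_n, int num_sms) {
    int tiles_m = (M + tile_m - 1) / tile_m;   // ceil(M / tile_m)
    int tiles_n = (N + tile_n - 1) / tile_n;   // ceil(N / tile_n)
    int tiles   = tiles_m * tiles_n;
    int full_waves = tiles / num_sms;
    int tail       = tiles % num_sms;          // tiles left for the partial last wave
    printf("%d x %d output, %d x %d tiles: %d tiles -> %d full wave(s) + %d-tile tail on %d SMs\n",
           M, N, tile_m, tile_n, tiles, full_waves, tail, num_sms);
}

int main() {
    // The feed-forward example from the thread: 4096 x 2048 output,
    // 256 x 128 tiles, 80 SMs on a Tesla V100 -> 256 tiles, 3 full waves + 16-tile tail.
    wave_quantization(4096, 2048, 256, 128, 80);
    // A batch size whose tile count divides evenly removes the partial wave:
    wave_quantization(4096, 2560, 256, 128, 80);   // 320 tiles, exactly 4 waves
    return 0;
}
```

The partial last wave is the quantization effect: 16 tiles occupy only 16 of the 80 SMs while the rest idle.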
Received: 8 August 2019 | Accepted: 28 August 2019 | DOI: 10.1002/cpe.5547

SPECIAL ISSUE PAPER

Hierarchical Roofline analysis for GPUs: Accelerating performance optimization for the NERSC-9 Perlmutter system

Charlene Yang1, Thorsten Kurth1, Samuel Williams2

1 National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory, Berkeley, California
2 Computational Research Division (CRD), Lawrence Berkeley National Laboratory, Berkeley, California

Correspondence: Charlene Yang, National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory, Berkeley, CA 94720. Email: [email protected]

Funding information: Advanced Scientific Computing Research Program, Department of Energy, Office of Science, U.S., Grant/Award Number: DE-AC02-05CH11231; National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy, Grant/Award Number: DE-AC02-05CH11231

Summary: The Roofline performance model provides an intuitive and insightful approach to identifying performance bottlenecks and guiding performance optimization. In preparation for the next-generation supercomputer Perlmutter at NERSC, this paper presents a methodology to construct a hierarchical Roofline on NVIDIA GPUs and extends it to support reduced precision and Tensor Cores. The hierarchical Roofline incorporates L1, L2, device memory, and system memory bandwidths into one single figure, and it offers more profound insights into performance analysis than the traditional DRAM-only Roofline. We use our Roofline methodology to analyze three proxy applications: GPP from BerkeleyGW, HPGMG from AMReX, and conv2d from TensorFlow. In doing so, we demonstrate the ability of our methodology to readily understand various aspects of performance and performance bottlenecks on NVIDIA GPUs and motivate code optimizations.

KEYWORDS: code optimization, Cray, NVIDIA GPU, performance analysis, Roofline, tensor core

1 INTRODUCTION

NERSC's next supercomputer Perlmutter will be an NVIDIA GPU-accelerated Cray supercomputer with AMD EPYC host CPUs and an Ethernet-compatible Slingshot network. Although NERSC users are generally familiar with performance optimization on Intel and AMD CPUs, there are a number of new facets of performance optimization on GPUs, including thread predication, deep memory hierarchies, mixed precision computation, and Tensor Cores, that need to be better understood. Rather than forcing users to embrace a "trial-and-error" approach to performance optimization or dig through numerous profiler metrics, the Roofline performance model1 provides a visually intuitive way for users to identify performance bottlenecks and motivate code optimization strategies. Roofline is a throughput-oriented performance model centered around the interplay between computational capabilities (eg, peak GFLOP/s), memory bandwidth (eg, STREAM GB/s), and data locality (ie, reuse of data once it is loaded from memory). Data locality is commonly expressed as arithmetic intensity, which is the ratio of floating-point operations performed to data movement (FLOPs:Byte). Performance (GFLOP/s) is bound by

GFLOP/s ≤ min { Peak GFLOP/s, Peak GB/s × Arithmetic Intensity },  (1)

which produces the traditional Roofline formulation when plotted on a log-log plot.
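To make Equation (1) concrete, the short host-side helper below (a sketch, not part of the paper; the V100-like ceiling values are illustrative assumptions, not measured numbers) evaluates the Roofline bound for a few arithmetic intensities:

```cpp
#include <algorithm>
#include <cstdio>

// Roofline bound from Equation (1): attainable GFLOP/s is the minimum of the
// compute ceiling and bandwidth * arithmetic intensity.
double roofline_gflops(double peak_gflops, double peak_gbps, double arithmetic_intensity) {
    return std::min(peak_gflops, peak_gbps * arithmetic_intensity);
}

int main() {
    // Illustrative V100-class ceilings (assumptions for the example only):
    const double fp64_peak_gflops = 7000.0;   // ~7 TFLOP/s FP64 FMA
    const double hbm_bw_gbps      = 830.0;    // ~830 GB/s HBM

    // Sweep a few arithmetic intensities (FLOPs per byte moved from HBM).
    for (double ai : {0.1, 1.0, 8.43, 100.0}) {
        printf("AI = %6.2f FLOPs/Byte -> bound = %8.1f GFLOP/s\n",
               ai, roofline_gflops(fp64_peak_gflops, hbm_bw_gbps, ai));
    }
    return 0;
}
```

Below the machine balance (peak GFLOP/s divided by peak GB/s) the bandwidth term wins and the kernel is memory bound; above it, the compute ceiling applies.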
Previously, the Roofline model was expanded to support the full memory hierarchy2,3 by adding additional bandwidth "ceilings." Similarly, additional ceilings beneath the Roofline can be added to represent performance bottlenecks arising from lack of vectorization or the failure to exploit fused multiply-add (FMA) instructions.

Orthogonal to the Roofline description of hardware is characterizing applications in terms of Roofline-related coordinates, ie, Performance (GFLOP/s) and Arithmetic Intensity (FLOPs/Byte). One can employ a variety of methods to calculate these terms, ranging from hand counting FLOPs and estimating bytes, to performance counters,4,5 to software simulators2 that trade performance for accuracy.

Over the last decade, Roofline analysis has proven to be a great success, especially with the hierarchical Roofline on Intel CPUs2, and it was a benefit to understanding performance on NERSC's previous KNL-based Cori supercomputer.6,7 However, Roofline has yet to be fully developed on NVIDIA GPUs. This paper builds upon the previous work on CPU architectures as well as the HBM-only Roofline methodology we developed for NVIDIA GPUs8 and expands the model into a hierarchical Roofline methodology that also captures the performance effects associated with reduced precision and Tensor Cores on NVIDIA's latest V100 GPUs. Our expanded methodology includes the following:

• Empirical measurement of peak performance (GFLOP/s) and bandwidth (GB/s);
• Accurate measurement of the total number of FLOPs in the code;
• Accurate measurement of data movement in the code, throughout the memory/cache hierarchy, ie, Bytes_L1, Bytes_L2, Bytes_HBM, Bytes_SystemMemory;
• Calculation of arithmetic intensities on various memory/cache levels, ie, FLOPs:Byte_L1, FLOPs:Byte_L2, FLOPs:Byte_HBM, FLOPs:Byte_SystemMemory;
• Quantifying the performance implications of FMA, FPADD, and FPMUL in the instruction mix;
• Quantifying the performance implications of reduced precision (FP16 and FP32) and Tensor Cores;
• Plotting application performance against architecture peaks.

In this paper, we provide a detailed description of our Roofline methodology for NVIDIA GPUs. We then apply this methodology to three proxy applications: GPP from the Material Science code BerkeleyGW,9 HPGMG from the Adaptive Mesh Refinement framework AMReX,10 and conv2d from TensorFlow.11 For each of these applications, we include multiple variants of the same code in order to highlight the ability of our methodology to capture the different nuances of performance analysis on NVIDIA GPUs. Throughout this process, we provide a detailed analysis of the information our Roofline methodology extracts. Finally, we conclude the paper with some high-level insights and observations and espouse several directions for future work.

2 ROOFLINE METHODOLOGY ON NVIDIA GPUS

In order to affect Roofline analysis of GPU-accelerated applications, one must perform three steps. First, one must characterize the underlying GPU's computational capabilities in terms of this Roofline model.
In effect, this is measuring peak performance (GFLOP/s) and bandwidth (GB/s) as a function of data precision, operation, and memory/cache level. Second, one must characterize the execution of an application and extract the relevant Roofline-related parameters, including data movement at each level of the memory hierarchy, floating-point operations performed (by precision), and kernel run times. Finally, one must synthesize these two data sets together and plot them in a single figure.

2.1 Architectural characterization

The Empirical Roofline Toolkit (ERT)12 was developed to characterize multicore, manycore, and GPU-accelerated systems. It was written in MPI+OpenMP+CUDA in order to replicate the most common programming environments on DOE (Department of Energy) supercomputers. ERT defines a kernel of varying L1 arithmetic intensity on a parameterized vector. By sweeping a range of arithmetic intensities and vector sizes, it can extract the peak performance of a target platform as well as the bandwidth at each level of the memory hierarchy. In this paper, we used the MPI+CUDA implementation of ERT to characterize a single Volta V100 GPU. Unfortunately, ERT, as written, consistently fails to identify the L1 cache on NVIDIA GPUs. To that end, throughout this paper, we use a theoretical L1 bandwidth coupled with empirical (ERT) bandwidths for L2 and HBM for the Roofline ceilings.

ERT, as written, is solely a double-precision benchmark. As such, in this paper, we use a simple linear extrapolation for single- and half-precision performance. Moreover, whereas ERT's kernels are optimized for fused multiply-add (FMA) instruction-set architectures, we estimate the penalty of not exploiting FMA by defining a "no FMA" ceiling that is half the FMA performance. NVIDIA GPUs implement 16-bit (FP16) Tensor Core matrix-matrix multiplications. Throughout this paper, we use the theoretical peak Tensor Core performance. This may seem optimistic, but we will show it does not skew our analysis.

Ultimately, we collect 10 performance numbers for our target GPU: L1, L2, and HBM bandwidth, FP16/FP32/FP64 FMA and FP16/FP32/FP64 "no FMA" performance, and Tensor Core peak performance. For brevity, Figure 1 plots measured ERT and theoretical performance for FP64 FMA, no-FMA, L2, and HBM on an NVIDIA Volta V100 GPU. Clearly, theoretical performance generally overestimates attainable performance by about 10%.

[FIGURE 1: NVIDIA V100 hierarchical Roofline ceilings. Observe V100 advertised performance is very close to empirical.]

2.2 Application characterization

In this paper, we leverage the proof-of-concept methodology developed by Yang et al8 and extend it to support both hierarchical (L1, L2, HBM, System Memory) Roofline analysis as well as FP32 and FP16 precision (including Tensor Core). To that end, we use nvprof to collect a set of performance metrics for each kernel in an application. We then synthesize those metrics together in order to plot each kernel on a Roofline using its Arithmetic Intensity (x) and GFLOP/s (y) coordinates. In order to calculate a kernel's arithmetic intensity (AI) and GFLOP/s performance, we must collect three raw quantities, ie, kernel run time, FLOPs executed (for FP64, FP32, and FP16), and bytes read and written by each level of the memory hierarchy (L1, L2, HBM, and System Memory).
AI_<precision>,<level> = (nvprof FLOPs_<precision>) / (nvprof Bytes_<level>),  (2)

FLOP/s_<precision> = (nvprof FLOPs_<precision>) / (nvprof Run Time),  (3)

where <level> can be L1, L2, HBM (Device Memory), or System Memory, and <precision> can be FP64, FP32, FP16, or Tensor Core.

Kernel run time: To collect application run time, we use the following commands to obtain either the timing of a particular invocation of a kernel or the average timing of a kernel over multiple invocations:

nvprof --print-gpu-trace ./application
nvprof --print-gpu-summary ./application

Kernel FLOPs: nvprof provides a rich set of metrics to measure the total number of FLOPs executed in a kernel. These metrics only account for nonpredicated threads, so operations that are masked out are not included. For complex operations such as divides, logarithms, and exponentials, each operation is implemented with multiple instructions and hence is counted as multiple FLOPs. To collect the FLOP counts, we use

nvprof --kernels <kernel_name> --metrics <metric_name> ./application

where <metric_name> can be flop_count_dp, flop_count_sp, or flop_count_hp for FP64, FP32, and FP16, respectively. The aforementioned floating-point metrics can account for the majority of FLOPs in a large range of applications. However, FLOPs executed inside the NVIDIA V100 Tensor Cores are not captured by these counters. The Tensor Cores are designed to accelerate matrix-FMA operations, that is, operations of the form D = A·B + C, where A, B are real-valued 4×4 matrices in half precision (FP16) and C, D are real-valued 4×4 matrices in 16-bit (FP16) or 32-bit (FP32) precision. In the latter case, the accumulation in the Tensor Core operation is performed using FP32 arithmetic. As of early 2019, nvprof does not offer an accurate flop_count_ metric for Tensor Cores like it does for the normal SM cores, but rather a "utilization" metric, tensor_precision_fu_utilization. This metric spans an integer range from 0 (not used) to 10 (fully utilized). In order to estimate the Tensor Core FLOP count, we assume that a utilization value of 10 corresponds to 125 TFLOP/s and then multiply this number with the run time of the kernel to estimate the total number of FLOPs. It is expected that NVIDIA's next-generation Nsight profiling tool will have enhanced capabilities to measure Tensor Core FLOPs, and we will investigate that in our future work.

Bytes: The data moved between each two levels in the memory/cache hierarchy must be collected in order to construct the hierarchical Roofline. We use nvprof to collect the total number of read and write transactions and multiply the total by the size of each transaction in bytes,

Bytes = (read transactions + write transactions) × transaction size.  (4)

TABLE 1: nvprof metrics for measuring data traffic in the memory/cache hierarchy (transaction size is 32B at every level)
• L1 cache: gld_transactions, gst_transactions, atomic_transactions, local_load_transactions, local_store_transactions, shared_load_transactions, shared_store_transactions
• L2 cache: l2_read_transactions, l2_write_transactions
• HBM memory: dram_read_transactions, dram_write_transactions
• PCIe/NVLINK: system_read_transactions, system_write_transactions

The invocation of nvprof on the command line is the same as when collecting FLOPs, but the metrics are more complicated (see Table 1)*. Note that in this paper, all applications fit in the GPU's HBM memory. As such, system transactions are virtually zero as there is no data movement over PCIe/NVLINK.
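As a quick illustration of Equations (2)-(4) (a sketch, not part of the paper; the counter values below are made-up numbers standing in for nvprof output for one kernel), the helper computes per-level bytes, arithmetic intensity, and achieved GFLOP/s:

```cpp
#include <cstdio>

// Equation (4): bytes moved at one memory/cache level, assuming 32B
// transactions as in Table 1.
double level_bytes(double read_transactions, double write_transactions,
                   double transaction_size = 32.0) {
    return (read_transactions + write_transactions) * transaction_size;
}

int main() {
    // Made-up counter values standing in for nvprof output.
    double flops       = 2.5e11;   // eg, flop_count_dp
    double runtime_s   = 0.040;    // kernel run time in seconds
    double dram_reads  = 1.2e9;    // dram_read_transactions
    double dram_writes = 0.4e9;    // dram_write_transactions

    double bytes_hbm = level_bytes(dram_reads, dram_writes);

    // Equations (2) and (3): arithmetic intensity and achieved GFLOP/s.
    double ai_hbm = flops / bytes_hbm;
    double gflops = flops / runtime_s / 1e9;

    printf("HBM AI   = %.2f FLOPs/Byte\n", ai_hbm);
    printf("Achieved = %.1f GFLOP/s\n", gflops);
    return 0;
}
```

The same computation is repeated with the L1, L2, and system-memory transaction metrics to obtain the other coordinates of the hierarchical Roofline.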
Thus, they will not be presented in Section 4. 2.3 Roofline visualization With both the empirical ceilings collected via ERT and the application kernel AI (2) and GFLOP/s (3) coordinates determined through the collection of nvprof metrics, we plot the resultant Roofline model using Python Matplotlib13. Some of our Matplotlib scripts are available on GitHub,14 and users are free to tweak them based on their specific needs. The most basic example plot_roofline.py takes an input file that specifies the memory ceilings, compute ceilings, AI's for each kernel, and GFLOP/s performance for each kernel. 3 EXPERIMENTAL SETUP In this section, we describe our test machine, software configuration, and the applications we use to evaluate our hierarchical GPU Roofline methodology. 3.1 Hardware and software configuration Results presented in this paper were all obtained on the Cori supercomputer at the National Energy Research Scientific Center (NERSC), Lawrence Berkeley National Laboratory (LBNL). To prepare for the arrival of the next-generation supercomputer Perlmutter, NERSC has installed a GPU-accelerated partition on Cori, comprised of nodes with Intel Skylake CPUs and NVIDIA V100 GPUs (4:1 GPU:CPU ratio). This partition will enable the NESAP (NERSC Exascale Science Applications Program) teams in their prototyping, debugging, porting, and development activities in preparation for migrating to Perlmutter. CUDA 10 is installed on Cori GPU partition and nvprof, is frequently used in this paper, and is part of the CUDA Toolkit. Moreover, ERT is from the BitBucket repository12 with example configuration scripts deployed for the Cori GPU partition. For the conv2d benchmark from TensorFlow, we used TensorFlow v1.12.0 linked against CUDA 9.0 and cuDNN 7.3.1. 3.2 Benchmarks In this paper, we evaluate our Roofline methodology for NVIDIA GPUs using three benchmarks: GPP from BerkeleyGW, HPGMG from AMRex, and conv2d from TensorFlow. These benchmarks were selected because they exhibit a range of computational characteristics, including a range of memory access patterns, data types, data locality, and thread divergence properties. GPP: The General Plasmon Pole (GPP) kernel9 is a proxy application based on the BerkeleyGW Material Science code.15 It calculates electron self-energy using the common General Plasmon Pole approximation.16 GPP is written in C++ and accelerated with CUDA. The computation in the kernel is tensor-contraction–like, wherein a few precalculated complex double-precision arrays are multiplied and summed over one dimension and collapsed into a small matrix. The problem we chose in this paper is comprised of 512 electrons and 32 768 plane wave basis elements and is a medium-sized problem for real-world materials science. It requires around 1.5GB of memory and fits well into the HBM memory on a V100 GPU. The pseudocode of the GPP kernel can be described as *Add surface and texture related metrics if surface and texture memory are in use, for example, in Graphics applications. 4 of 12 YANG ET AL. YANG ET AL. 5 of 12 Not only does GPP offer abundant parallelism that can be mapped to threads, warps, and thread blocks, but it also is heavily parameterized in order to capture the full spectrum of realistic problem configurations. One of these parameters, nw, enables arbitrary increases in arithmetic intensity by increasing reuse of the arrays accessed within the iw loop. Similarly, we can modify the ig loop to affect strided memory accesses. 
Ultimately, not only does this single well-understood kernel act as a stand-in for a range of potential application kernels, but it also enables us to test the limits of our Roofline methodology for NVIDIA GPUs. HPGMG GSRB smoother: HPGMG10,17,18 is a geometric multigrid benchmark designed to proxy the multigrid solves found in block structured AMR (Adaptive Mesh Refinement) applications that use the AMReX framework.19 HPGMG solves the fourth-order variable-coefficient Laplacian on a unit-cube with Dirichlet boundary conditions using a multigrid F-cycle. As such, it is only moderately compute–intensive (FP64 FLOPs:Byte>1), but it is highly demanding of SIMT and cache locality. HPGMG was originally implemented in C with MPI and OpenMP parallelization and has shown scalability to 8.5 million cores. It was subsequently extended to support GPUs on the finer (larger) mesh levels.10 The Gauss-Seidel Red-Black (GSRB) smoother generally dominates HPGMG's run time. However, the performance of the smoother varies substantially among the various multigrid levels (mesh sizes). GSRB smoothers perform two stencil kernel invocations per smooth (red and black). Cells are marked as either red or black in a 3D checkerboard pattern. Cells matching the sweep color are updated, while the others are simply copied to the result array. Thus, for a 1283 box, one must perform 1283∕2 =1 M stencils per kernel invocation. In order to balance the quality of a compiler against hardware's ability to efficiently execute strided memory access patterns or predication, HPGMG includes three different implementations of its GSRB smoother (GSRB_FP, GSRB_BRANCH, and GSRB_STRIDE2). All three implemen- tations perform the same computation and touch the same data over the course of a threadblock's execution. As such, they represent an ideal testbed for using Roofline and performance tools to understand the subtle interactions between compiler and hardware. The GSRB_FP imple- mentation realizes red-black updates via multiplication by a precomputed auxiliary array of 1.0's and 0.0's. Such an implementation is trivially vectorized but requires twice the computation and an additional load from cache. The GSRB_BRANCH version uses a branch to perform stencils only on cells whose color matches the sweep while copying data for the others. Such an implementation can be realized through either predication (masking) or loop fissioning into two stride-2 updates. Regardless, the number of floating-point operations is minimized, but execution on SIMD or SIMT hardware may preclude this. Finally, in the GSRB_STRIDE2 implementation, each CUDA thread is responsible for two adjacent cells within a plane. The thread updates the cell whose color matches the sweep color and copies the other. This implementation minimizes computation, avoids predication of computation, but requires the compiler/hardware to execute efficient stride-2 L1 cache accesses (HBM access will always be coalesced into unit-stride transactions). For this paper, we run HPGMG with eight 1283 boxes (./hpgmg-fv 7 8). This results in levels 5 to 8 running on GPU and levels 1 to 4 on CPU. As this paper is focused on Roofline on GPUs, we only examine levels 5 to 8. TensorFlow conv2d: TensorFlow11,20 is a deep learning framework that allows users to express complicated neural network graphs in a reasonable amount of lines of code. Besides flexibility, it also provides performance portability across a variety of computing architectures by using optimized libraries such as cuDNN. 
2D convolution layers are the most compute-intensive kernels in most modern deep neural networks, so in this paper, we chose to analyze tf.nn.conv2d21 for our Roofline validation. In a forward pass, tf.nn.conv2d performs 2D convolution of an input tensor and a convolution kernel and produces another tensor as the output. Assume an input tensor A of shape N × H × W × C, where N is the number of samples in the batch, H and W are the height and width, and C is the number of channels. With a convolution kernel K of shape KH × KW × C × C′, the resultant tensor B is of shape N × H′ × W′ × C′, where H′ = H − KH + 1, W′ = W − KW + 1, and the individual elements are given by

B_{n,h,w,c} = Σ_{m=0}^{C−1} Σ_{kh=0}^{KH−1} Σ_{kw=0}^{KW−1} A_{n, h+kh, w+kw, m} · K_{kh, kw, m, c}.  (5)

In the more computationally expensive backward pass, the derivative of Equation (5) is computed with respect to K. The TensorFlow routine further allows for generalizations such as nonunit stride or dilation. Values at the image boundary are treated separately, eg, by replication or zero-padding.22

In this study, we will focus on the performance characteristics of a typical convolution kernel found in one of the most commonly used networks for image analysis, ResNet50.23 We analyze this kernel for forward and backward passes as well as the effects of parameters such as batch, input, kernel, and stride sizes on their performance. Most TensorFlow routines support specifying different precisions via a dtype argument. For deep learning applications, the most relevant are FP16 and FP32, and we will focus on these two precisions in this paper. The convolution operations such as those in Equation (5) are essentially matrix-matrix multiplications, and on NVIDIA's Volta GPUs, TensorFlow can leverage specialized instructions such as HMMAs on the Tensor Cores for more performance. Due to this GEMM-like operation, the conv2d kernel will have a much higher arithmetic intensity than our previous two examples, GPP and HPGMG.

The pseudocode of the conv2d kernel is as follows. The main measurement part is toward the end of this code block (the last four lines). The pyc.driver commands are from PyCUDA,24 and they allow for precise instrumentation on this region. To facilitate this, nvprof will also need to be launched with --profile-from-start off to disable profiling from the start of the application.

Measuring the performance of a TensorFlow kernel is not straightforward. First, every kernel can translate into a series of subkernel calls. These subkernels include the kernel that does the essential computation but also kernels that perform housekeeping work before and after the main kernel, such as data layout transformation, data type transformation, and index calculation. Second, the autotuning mechanism in TensorFlow and the cuDNN library adds to the complexity of the measurement of a kernel's performance. TensorFlow selects the best performing sequence of subkernels based on input parameters and some heuristic data, and the cuDNN library, frequently called by TensorFlow, also performs certain algorithm selection. In this paper, we include the housekeeping subkernels in our performance measurement of conv2d as they are necessary to the core computation subkernel, but we exclude the autotuning subkernels because we want to focus on the best implementation of the conv2d kernel for given parameters.
To achieve this, we split the loop in tf.Session into two parts: a warm-up part with trip count n_warm = 5 and a measurement part with trip count n_iter = 20. Depending on the value of pass, exec_op will either compute the convolution (forward pass) or the convolution along with its derivatives (backward pass). The extra calibrate option allows for exclusion of subkernels that are associated with random input tensor generation, and we would like to exclude that from our measurement as well.

In Section 4, we use the sum of the following three measurements as the FLOP count of conv2d when calculating Arithmetic Intensity and GFLOP/s performance: flop_count_sp, flop_count_hp, and FLOPs derived from tensor_precision_fu_utilization (see Section 2.2). Here, we include the flop_count_sp metric (the metric for FP32 operations) even in the FP16 kernels because even if the input and output tensors are set to be in FP16 precision, TensorFlow could still decide to deploy subkernels that are in FP32 precision based on its autotuning mechanism. An example of this is the backward pass in Figure 10. However, all the other kernels in Figures 8, 9, and 10 execute as expected, ie, FP16 kernels are run on the Tensor Cores, and FP32 kernels are run on the common cores. In this paper, if not otherwise stated, we use input tensor size 112 × 112 × 64 and kernel size 3 × 3 × 64 × C′, where C′ is the number of output filters as defined in Equation (5), C′ = 64, and the stride size is 2.

4 RESULTS

In this section, we discuss our observations and insights from applying our Roofline methodology for NVIDIA GPUs to our three benchmarks.

4.1 GPP performance analysis

Figure 2 presents a hierarchical Roofline model for GPP running on a V100 GPU as a function of the parameter nw. Recall that nw increases arithmetic intensity at all levels of the memory hierarchy as it creates a tighter inner loop that reuses data loaded from the cache. We observe several effects.

[FIGURE 2: GPP hierarchical Roofline on the NVIDIA V100 GPU as a function of nw (stride = 1).]
[FIGURE 3: GPP Roofline on the NVIDIA V100 GPU as a function of the FMA instruction mix. Note that 60% of GPP's floating-point instructions are FMA.]

First, both L1 and HBM arithmetic intensity (the x coordinate of each dot) increase linearly with nw. Interestingly, L2 intensity shows a much more complex pattern. Linear increases in arithmetic intensity for kernels that are linearly increasing the number of FLOPs (the numerator of arithmetic intensity) imply roughly constant data movement (the denominator of arithmetic intensity). As such, we can infer very good locality in the register file (constant L1 intensity) as well as good locality in the L2 (constant HBM intensity). However, the only slight improvement in L2 intensity implies substantial increases in L2 data movement or substantial losses in L1 locality.

Second, HBM intensity is consistently much larger than L1 intensity, implying that there is substantially higher locality in the caches than in the register file. Moreover, L2 intensity is initially (nw = 1) very close to HBM intensity, but it approaches L1 intensity as nw approaches 6. This is indicative of a transition from a regime where there is high L1 locality and virtually no L2 locality (nw = 1) to a regime where there is virtually no L1 locality but high L2 locality (nw = 6).

Third, at low nw, performance is clearly bound by HBM (the green curve tracks the HBM ceiling). However, as nw increases, performance quickly saturates.
Our hierarchical Roofline analysis demonstrates GPP is clearly not bound by either L1 or L2 bandwidth (the red and blue curves are far from their respective ceilings). This indicates that other effects have manifested that limit performance. Unlike linear algebra routines (eg, matrix multiplications), GPP includes a mix of floating-point adds, multiplies, and fused multiply-adds (FMA). As 100% of the dynamic instructions are not FMA, peak performance will never be attainable. In fact, we can create an effective ceiling by using nvprof to collect the number of FMA and non-FMA floating-point instructions. Using Equation (6), it is observed that at nw = 6, only 60% of the floating-point instructions GPP executes are FMA. Moreover, Equation (7) bounds GPP performance at 80% of the V100's (double-precision) FMA peak performance. However, Figure 3 shows that the observed performance from GPP is roughly 66% of the full FMA peak. This clearly indicates that other aspects of GPU execution are ultimately limiting performance:

α = FMA FP64 instr. / (FMA FP64 instr. + non-FMA FP64 instr.) = 60%,  (6)
β = (α × 2 + (1 − α)) / 2 = 80%.  (7)

As mentioned, GPP is highly parameterizable. To that end, Figure 4 shows a Roofline model for the strided implementation of GPP. Here, threads within a warp access every nth element (threads stride by 32·n words instead of the nominal Stride-32). Unlike our previous work,8 which focused solely on HBM Rooflines for GPUs, the GPP hierarchical Roofline shows that the L1 and L2 cache behave quite differently from HBM. Whereas HBM intensity decreases linearly with increasing stride up to Stride-4 (4 double-complex words = 64 Bytes), L1 and L2 intensity stop decreasing beyond Stride-2 (32B). One might conclude the cache line size in the L2 (or at least its behavior) is larger than the L1 line size or the L1 transaction size.

[FIGURE 4: GPP hierarchical Roofline on the NVIDIA V100 GPU as a function of stride, with nw = 6.]
[FIGURE 5: HPGMG GSRB hierarchical Roofline on the NVIDIA V100 GPU for the GSRB_FP implementation.]

4.2 HPGMG performance analysis

We evaluate HPGMG's three implementations of its GSRB smoother using the hierarchical Roofline model. As all variants are memory-intensive (arithmetic intensity is always less than machine balance), unlike GPP, the FMA fraction of the instruction mix will play no role in our analysis. Moreover, in lieu of an algorithmic parameter to affect changes in arithmetic intensity, in all of our analysis we examine performance on each level of the HPGMG multigrid hierarchy that runs on the GPU. Naively, one might think the same stencil should have the same intensity. However, the deep ghost zone can reduce arithmetic intensity for the smaller boxes (lower levels), while cache effects can manifest and increase intensity for the larger boxes (upper levels).

Figure 5 presents the hierarchical Roofline for the GSRB_FP variant of HPGMG's smoother as a function of level (level 5 with eight 16³ boxes to level 8 with eight 128³ boxes). Roofline clearly provides several immediate observations. First, performance is highly correlated with HBM bandwidth (the green line tracks the HBM ceiling). Second, HBM intensity increases with level. This should come as no surprise as intensity should scale as O(dim³/(dim + 4)³), as there is a ghost zone (two elements deep) on the high and low faces of each box. This substantially reduces intensity for small boxes (dim = 16). Third, L1 intensity is roughly constant with box size.
Once again, this should come as no surprise as, internally, the CUDA implementation uses a fixed thread block dimension to tile each box. The constant thread block dimension exerts a constant pressure on the L1 cache. Fourth, there is substantial reuse in the L1 cache (L1(blue) and L2(red) intensities are widely separated), while there is virtually no reuse in the L2 cache (L2(red) and HBM(blue) intensities are very close). This implies that, virtually, all reuse in HPGMG is captured by the L1 cache or in register reuse and there is very little interthread block bandwidth filtering (something expected for tiled stencil computations). Finally, the very astute will notice that the empirical HBM arithmetic intensity is twice the theoretical HPGMG intensity. This is an artifact of the GSRB_FP implementation redundantly performing the stencil on every point and quashing the results by multiplying by the array of 1's and 0's—something nvprof dutifully observes. Figure 6 presents the hierarchical Roofline for the GSRB_BRANCH implementation. Recall that this implementation differs from the GSRB_FP implementation in that it uses optimized modulo-2 arithmetic and a branch to avoid redundant computation and an extra (L1) load. As such, it performs half as many (nonpredicated) floating-point operations and thus has half the performance and half the arithmetic intensity on each level of multigrid and each level of the memory hierarchy as the GSRB_FP variant shown in Figure 5. All analysis and insights derived from GSRB_FP apply to GSRB_BRANCH. On paper, HPGMG's GSRB_STRIDE2 implementation seems like the ideal implementation. It performs no redundant work, all computation remains converged/nonpredicated, and the stride-2 memory access pattern presented to the L1 should be filtered into a unit-stride pattern presented to the L2/HBM. As such, it can be quite puzzling as to why the GSRB_STRIDE2 implementation underperforms the GSRB_BRANCH and GSRB_FP variants. Our nvprof-based hierarchical Roofline model helps elucidate the causes. 8 of 12 YANG ET AL. YANG ET AL. 9 of 12 FIGURE 6 HPGMG GSRB hierarchical Roofline on the NVIDIA V100 GPU for the GSRB_BRANCH implementation FIGURE 7 HPGMG GSRB hierarchical Roofline on the NVIDIA V100 GPU for the GSRB_STRIDE2 implementation. Observe the unexpected loss in L2 and HBM arithmetic intensity for the larger levels Figure 7 shows the hierarchical Roofline for the GSRB_STRIDE2 variant. It should be immediately obvious that it looks quite different from the GSRB_BRANCH version shown in Figure 6 with performance on the largest boxes (those that dominate HPGMG's solve time) substantially lower. First, the trend in L1 intensity (x-coordinate) is very similar to GSRB_BRANCH. This indicates that as expected, each thread block accesses memory in a similar manner to the GSRB_BRANCH variant. However, when looking at L2 and HBM intensity, we observe very different behaviors. As one proceeds from level 5 (163) to level 6 (323), one observes increases in performance (ultimately bound by HBM bandwidth) but only slight increases in arithmetic intensity. Conversely, from level 6 to level 7 (643) and level 8 (1283), we see substantial reductions in L2 and HBM intensity. The former implies that unlike the GSRB_BRANCH variant, the GSRB_STRIDE2 variant is failing to capture locality in the L1 cache and flooding L2 with additional data movement. 
The fact that HBM intensity is correlated with L2 intensity implies that the L2 is also failing to capture any locality and transactions received by the L2 are passing through and becoming increased HBM data movement. Increasing data movement when HBM-bound results in performance sliding down along the HBM Roofline. 4.3 TensorFlow conv2d performance analysis In this section, we investigate the effects of different input parameters on the arithmetic intensity and performance of the TensorFlow conv2d kernel. More precisely, we start with the baseline parameters described in Section 3.2 and then vary one parameter at a time. The parameters we examine are batch size, number of output filters, and the kernel (or filter) size. The two data types, FP32 and FP16, are the precisions of the input and output data but not necessarily those of the FLOPs or bytes measured, ie, TensorFlow may decide to execute in FP32 on FP16 inputs. Batch size: Figure 8 depicts the impact of the batch size on performance (the batch size is the parameter N in Equation (5)). For FP32 kernels, ie, when both input and output tensors are in FP32 type, the arithmetic intensity and GFLOP/s performance of conv2d do not change much simply because the underlying algorithm is the same. There is an exception though, with the backward pass where TensorFlow has decided to call a different wgrad subkernel from cuDNN for batch size 64, which has raised the performance a little bit compared with the other two batch sizes. The FP16 kernels should follow the same trend, ie, same intensity and performance for all batch sizes. However, in the raw data, we observe a significant increase in the data movement for larger batch sizes, and because we include housekeeping subkernels such as padding and shuffling in our measurement, the performance of this fairly small kernel is severely affected by these essentially bandwidth-bound subkernels. In the next experiment (number of output filters), where kernels are larger, this effect may be better amortized. However, for TensorFlow applications in general, this could be one of the reasons why the peak performance of 125 TFLOP/s is very hard to reach. The cuDNN library is called very frequently in TensorFlow, and it is observed in our raw data that cuDNN utilizes the shared memory on Volta a lot. On the hierarchical Roofline charts, this is presented by the gaps between the L1 symbols and their respective L2 symbols, ie, these conv2d kernels have good cache locality in level-one cache (including shared memory). YANG ET AL. 9 of 12 10 of 12 YANG ET AL. FIGURE 8 Effects of batch size on performance for forward (left panel) and backward pass (right panel) on second convolution layer from the ResNet5023 network. Open symbols represent kernels with FP32 input and output, and filled symbols represent FP16 Number of output filters: Figure 9 shows the hierarchical Roofline for conv2d when the number of output filters increases, ie, C ′ in Equation (5). The batch size for this set of results is fixed at 16. In both FP16 and FP32 cases and both forward and backward passes, the arithmetic intensity and the GFLOP/s performance increase with the number of filters because the kernel becomes more compute intensive, ie, more computation is done for the same amount of data movement. At the highest, the FP16 kernel in the backward pass is reaching 80% of the peak (at about 100 TFLOP/s), with 512 filters. In the meantime, the FP32 kernel gets even closer to the FMA(FP32) peak, at about 13 TFLOP/s. 
All of this shows great promise of achieving Volta's compute capability with certain input parameters. Kernel size: Figure 10 shows the hierarchical Roofline of conv2d as a function of the kernel size, ie, KH × KW, from 3 × 3, to 7 × 7, to 9 × 9. The batch size here is fixed at 16, and the number of output filters is 64. The stride size is 2. FIGURE 9 Effects of the number of output filters on performance for forward (left panel) and backward pass (right panel) on second convolution layer from the ResNet5023 network. Open symbols represent kernels with FP32 input and output, and filled symbols represent FP16 FIGURE 10 Effects of kernel size on performance for forward (left panel) and backward pass (right panel) on second convolution layer from the ResNet5023 network. Open symbols represent kernels with FP32 input and output, and filled symbols represent FP16 10 of 12 YANG ET AL. YANG ET AL. 11 of 12 The increase in kernel size should have the same effect as the number of filters, ie, increased computation for the same amount of data movement, hence increased arithmetic intensity and performance. However, there are two exceptions. One is the sudden drop in both arithmetic intensity and performance at kernel size 9 × 9 in the forward pass for the FP32 input and output. In this case, the TensorFlow framework decides to run a different set of subkernels instead of the wgrad subkernels for the 3 × 3 and 7 × 7 kernel sizes; it calls FFT subkernels underneath. This could be due to mistakes in the autotuning decision-making process, where 9 × 9 seems large enough but it is still not at the level where FFT could be run optimally. This phenomena is not observed in the backward pass case, which suggests there is certain sensitivity in TensorFlow's autotuning mechanism to the heuristic data possibly from the warm-up stage. The other exception is with the FP16 kernel in the backward pass at kernel size 9 × 9. Even though the input and output tensors are specified to be FP16, the kernel is run in FP32 precision, ie, not on Tensor Cores. The input data is first converted to FP32, then the FP32 subkernels are executed, and finally, the output is converted back to FP16. These unnecessary conversions lead to the overall kernel performance being even worse than the natural FP32 kernel (with FP32 input and output, open triangles in Figure 10, right panel). It also suggests that the robustness of the autotuning mechanism in TensorFlow could be improved. 5 SUMMARY, CONCLUSIONS, AND FUTURE WORK In this paper, we extend the nvprof-based HBM Roofline methodology we developed for NVIDIA GPUs8 to capture the full NVIDIA GPU memory hierarchy, the effects of FPADD/FPMUL in the instruction mix, the effects of reduced precision FP16 and FP32 Rooflines, and the benefits of using FP16 Tensor Cores (HMMA instructions). To demonstrate the value of this hierarchical GPU Roofline methodology, we used it to analyze three benchmarks: the moderately compute–intensive GPP Material Science proxy application, the cache-intensive HPGMG AMR-multigrid proxy application, and the reduced precision and a Tensor Core-accelerated 2D convolution kernel from TensorFlow. We observe that the hierarchical Roofline can capture insights into compute, cache, or memory performance bottlenecks as well as properties of locality within each level of the cache hierarchy with performance being highly correlated with Roofline in the memory-intensive GPP and HPGMG benchmarks. 
However, there were several cases in both HPGMG and TensorFlow where empirical performance and arithmetic intensity diverged from Roofline or theoretical expectations. Similarly, the ultimate performance of GPP for high nw was only roughly correlated with the ‘‘partial’’ FMA performance ceiling derived from the FMA fraction of the instruction mix. We found that although Roofline provides observations of performance metrics (eg, decreased arithmetic intensity), it does not inform users as to exactly what went wrong in their application's execution or the code changes required to fix it. Nevertheless, it does provide some key first steps in potentially identifying areas of interest and may motivate further experiments. In the future, we will extend our Roofline methodology and usage along three axes, more ceilings, ie, more instruction-based Rooflines. As for the first axis, we see several potential extensions to Roofline. In lieu of simply scaling performance or relying on marketing numbers, we will extend ERT to support reduced precision (FP16 and FP32) and Tensor Cores (HMMA instructions). Moreover, whereas the FMA mix in the instruction set was insufficient in determining the performance asymptote in GPP, we will extend our nvprof-based Roofline methodology to incorporate occupancy in order to determine if we are expressing sufficiently thread-level parallelism to hide the GPU's high latencies. Moreover, echoing our efforts to understand the impact of FPADD and FPMUL on FP64 performance, we will develop the requisite methodology to capture and visualize the performance bottlenecks arising from mixed precision or Tensor Core–accelerated applications. Although the traditional operation-oriented (GFLOP/s) Roofline model can readily assess performance and memory bottlenecks, it is poorly suited for assessing either an application's exploitation of complex instruction set computing (CISC) or SIMD instruction set architectures (ISAs) or the degree to which an application utilizes a processor's functional units, something that can manifest in mixed precision or partially vectorized code. To that end, we will create an alternate Roofline methodology focused on floating-point instructions per cycle (IPCFP) and floating-point instructions per byte. Recasting Roofline as such does not express performance (as it is agnostic of SIMD and FMA) but allows users to understand when functional units can be fully utilized, executing a mix of reduced precision, Tensor Core, SIMD, or scalar instructions. Finally, we will continue to apply our GPU Roofline methodology to more applications from a wider set of domains as well as extending our methodology to other accelerated architectures. ACKNOWLEDGMENTS This material is based on work supported by the Advanced Scientific Computing Research Program in the U.S. Department of Energy, Office of Science, under award number DE-AC02-05CH11231. This research used resources of the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-05CH11231. ORCID Charlene Yang https://orcid.org/0000-0002-0581-5845 Thorsten Kurth https://orcid.org/0000-0003-0832-6198 YANG ET AL. 11 of 12 12 of 12 YANG ET AL. REFERENCES 1. Williams S, Waterman A, Patterson D. Roofline: an insightful visual performance model for multicore architectures. Commun ACM. 2009;52(4). 2. Koskela T, Matveev Z, Yang C, et al. 
A novel multi-level integrated Roofline model approach for performance characterization. Paper presented at: International Conference on High Performance Computing; 2018; Frankfurt, Germany.
3. Williams S. Auto-Tuning Performance on Multicore Computers [PhD dissertation]. Berkeley, CA: University of California, Berkeley; 2008.
4. NERSC LIKWID Documentation. https://www.nersc.gov/users/software/performance-and-debugging-tools-likwid/
5. NERSC SDE Documentation. https://www.nersc.gov/users/application-performance/measuring-arithmetic-intensity/
6. Barnes T, Cook B, Deslippe J, et al. Evaluating and optimizing the NERSC workload on Knights Landing. Paper presented at: 7th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS); 2016; Salt Lake City, UT.
7. Doerfler D, Deslippe J, Williams S, et al. Applying the Roofline performance model to the Intel Xeon Phi Knights Landing processor. Paper presented at: International Conference on High Performance Computing; 2016; Frankfurt, Germany.
8. Yang C, Gayatri R, Kurth T, et al. An empirical Roofline methodology for quantitatively assessing performance portability. Paper presented at: 2018 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC; 2018; Dallas, TX.
9. General Plasmon Pole (GPP) Kernel. https://github.com/cyanguwa/nersc-roofline
10. HPGMG CUDA Code. https://bitbucket.org/nsakharnykh/hpgmg-cuda
11. TensorFlow. https://tensorflow.org
12. Empirical Roofline Toolkit (ERT). https://bitbucket.org/berkeleylab/cs-roofline-toolkit
13. Python Matplotlib. https://matplotlib.org
14. Example Scripts for Plotting Roofline. https://github.com/cyanguwa/nersc-roofline
15. BerkeleyGW. https://berkeleygw.org
16. Soininen J, Rehr J, Shirley EL. Electron self-energy calculation using a general multi-pole approximation. J Phys Condens Matter. 2003;15(17).
17. HPGMG Website. https://hpgmg.org/
18. HPGMG-FV Documentation. http://crd.lbl.gov/departments/computer-science/PAR/research/hpgmg
19. AMReX Documentation. https://amrex-codes.github.io/amrex/
20. Abadi M, Agarwal A, Barham P, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. http://download.tensorflow.org/paper/whitepaper2015.pdf
21. tf.nn.conv2d Kernel. https://www.tensorflow.org/api_docs/python/tf/nn/conv2d
22. Ben-Nun T, Hoefler T. Demystifying parallel and distributed deep learning: an in-depth concurrency analysis. ACM Comput Surv. 2018;52(4).
23. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Paper presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015; Las Vegas, NV.
24. PyCUDA Website. https://mathema.tician.de/software/pycuda

How to cite this article: Yang C, Kurth T, Williams S. Hierarchical Roofline analysis for GPUs: Accelerating performance optimization for the NERSC-9 Perlmutter system. Concurrency Computat Pract Exper. 2020;32:e5547. https://doi.org/10.1002/cpe.5547
ORNL, August 2019 VOLTA TENSOR CORE TRAINING 2 AGENDA V100 Architecture & Tensor Cores Anatomy of a GEMM Programming Approaches Libraries cublas Iterative Refinement Frameworks WMMA & MMA.sync CUTLASS NVIDIA Tools Case Studies Asgard + HPL-AI PICTC DL Framework Non-Traditional Uses 3 VOLTA ARCHITECTURE AND TENSOR CORES 4 TESLA V100 The Fastest and Most Productive GPU for Deep Learning and HPC Volta Architecture Most Productive GPU Tensor Core 120 Programmable TFLOPS Deep Learning Improved SIMT Model New Algorithms Volta MPS Inference Utilization Improved NVLink & HBM2 Efficient Bandwidth 5 21B transistors 815 mm2 80 SM 5120 CUDA Cores 640 Tensor Cores 16 GB HBM2 900 GB/s HBM2 300 GB/s NVLink TESLA V100 *full GV100 chip contains 84 SMs 6 VOLTA GV100 SM GV100 FP32 units 64 FP64 units 32 INT32 units 64 Tensor Cores 8 Register File 256 KB Unified L1/Shared memory 128 KB Active Threads 2048 7 VOLTA TENSOR CORE 8 TENSOR CORE Mixed Precision Matrix Math 4x4 matrices D = AB + C D = FP16 or FP32 FP16 FP16 FP16 or FP32 A0,0 A0,1 A0,2 A0,3 A1,0 A1,1 A1,2 A1,3 A2,0 A2,1 A2,2 A2,3 A3,0 A3,1 A3,2 A3,3 B0,0 B0,1 B0,2 B0,3 B1,0 B1,1 B1,2 B1,3 B2,0 B2,1 B2,2 B2,3 B3,0 B3,1 B3,2 B3,3 C0,0 C0,1 C0,2 C0,3 C1,0 C1,1 C1,2 C1,3 C2,0 C2,1 C2,2 C2,3 C3,0 C3,1 C3,2 C3,3 9 VOLTA TENSOR OPERATION FP16 storage/input Full precision product Sum with FP32 accumulator Convert to FP32 result × + Also supports FP16 accumulator mode for inferencing more products F16 F32 F32 F16 10 TENSOR SYNCHRONIZATION Warp-synchronizing operation Full Warp 16x16 Matrix Math Composed Matrix Multiply and Accumulate for 16x16 matrices Result distributed across warp warp 11 FP32 AND FP16 REPRESENTATION 8-bit exponent 23-bit mantissa 10-bit mantissa 5-bit exponent 1.4 x 10-45 < x < 3.4 x 1038 5.96 x 10-8 < x < 65504 Dynamic Range 12 EFFICIENT LINEAR ALGEBRA COMPUTATIONS ON GPUS 13 GENERAL MATRIX PRODUCT Basic definition General matrix product C = α op(A) * op(B) + β C C is M-by-N, op(A) is M-by-K, op(B) is K-by-N Compute independent dot products Inefficient due to large working sets to hold parts of A and B // Independent dot products for (int i = 0; i < M; ++i) for (int j = 0; j < N; ++j) for (int k = 0; k < K; ++k) C[i][j] += A[i][k] * B[k][j]; 14 GENERAL MATRIX PRODUCT Accumulated outer products General matrix product C = α op(A) * op(B) + β C C is M-by-N, op(A) is M-by-K, op(B) is K-by-N Compute independent dot products Permute loop nests Load elements of A and B exactly once // Independent dot products for (int i = 0; i < M; ++i) for (int j = 0; j < N; ++j) for (int k = 0; k < K; ++k) C[i][j] += A[i][k] * B[k][j]; // Accumulated outer products for (int k = 0; k < K; ++k) for (int i = 0; i < M; ++i) for (int j = 0; j < N; ++j) C[i][j] += A[i][k] * B[k][j]; 15 GENERAL MATRIX PRODUCT Computing matrix product one block at a time Partition the loop nest into blocks along each dimension • Partition into Mtile-by-Ntile independent matrix products • Compute each product by accumulating Mtile-by-Ntile-by-Ktile matrix products for (int mb = 0; mb < M; mb += Mtile) for (int nb = 0; nb < N; nb += Ntile) for (int kb = 0; kb < K; kb += Ktile) { // compute Mtile-by-Ntile-by-Ktile matrix product for (int k = 0; k < Ktile; ++k) for (int i = 0; i < Mtile; ++i) for (int j = 0; j < Ntile; ++j) { int row = mb + i; int col = nb + j; C[row][col] += A[row][kb + k] * B[kb + k][col]; } } 16 BLOCKED GEMM IN CUDA Parallelism Among CUDA Thread Blocks Launch a CUDA kernel grid • Assign CUDA thread blocks to each partition of the output matrix CUDA thread blocks compute 
Mtile-by-Ntile-by-K matrix product in parallel • Iterate over K dimension in steps, performing an accumulated matrix product for (int mb = 0; mb < M; mb += Mtile) for (int nb = 0; nb < N; nb += Ntile) for (int kb = 0; kb < K; kb += Ktile) { .. compute Mtile by Ntile by Ktile GEMM } by each CUDA thread block 17 THREAD BLOCK TILE STRUCTURE Parallelism Within a CUDA Thread Block Decompose thread block into warp-level tiles • Load A and B operands into Shared Memory (reuse) • C matrix distributed among warps Each warp computes an independent matrix product for (int kb = 0; kb < K; kb += Ktile) { .. load A and B tiles to shared memory for (int m = 0; m < Mtile; m += warp_m) for (int n = 0; n < Ntile; n += warp_n) for (int k = 0; k < Ktile; k += warp_k) .. compute warp_m by warp_n by warp_k GEMM } by each CUDA warp 18 WARP-LEVEL TILE STRUCTURE Warp-level matrix product Warps perform an accumulated matrix product • Load A and B operands from SMEM into registers • C matrix held in registers of participating threads Shared Memory layout is K-strided for efficient loads for (int k = 0; k < Ktile; k += warp_k) { .. load A tile from SMEM into registers .. load B tile from SMEM into registers for (int tm = 0; tm < warp_m; tm += thread_m) for (int tn = 0; tn < warp_n; tn += thread_n) for (int tk = 0; tk < warp_k; tk += thread_k) .. compute thread_m by thread_n by thread_k GEMM } by each CUDA thread 19 THREAD-LEVEL TILE STRUCTURE Parallelism within a thread Threads compute accumulated matrix product • A, B, and C held in registers Opportunity for data reuse: • O(M*N) computations on O(M+N) elements for (int m = 0; m < thread_m; ++m) for (int n = 0; n < thread_n; ++n) for (int k = 0; k < thread_k; ++k) C[m][n] += A[m][k] * B[n][k]; Fused multiply-accumulate instructions 20 COMPLETE GEMM HIERARCHY Data reuse at each level of the memory hierarchy 21 TENSOR CORE PROGRAMMING MODELS 22 USING TENSOR CORES Volta Optimized Frameworks and Libraries __device__ void tensor_op_16_16_16( float *d, half *a, half *b, float *c) { wmma::fragment<matrix_a, …> Amat; wmma::fragment<matrix_b, …> Bmat; wmma::fragment<matrix_c, …> Cmat; wmma::load_matrix_sync(Amat, a, 16); wmma::load_matrix_sync(Bmat, b, 16); wmma::fill_fragment(Cmat, 0.0f); wmma::mma_sync(Cmat, Amat, Bmat, Cmat); wmma::store_matrix_sync(d, Cmat, 16, wmma::row_major); } CUDA C++ Warp-Level Matrix Operations NVIDIA cuDNN, cuBLAS, TensorRT 23 CUBLAS TENSOR CORE HOW-TO Math Mode set with cublasSetMathMode function. Volta and Turing family Tensor Core can be used with in mixed precision (FP16 inputs, FP32 accumulation, FP16 or FP32 output) routines. Pure single precision routines use tensor core (when allowed) by down-converting inputs to half (FP16) precision on the fly. mathMode = CUBLAS_DEFAULT_MATH mathMode = CUBLAS_TENSOR_OP_MATH cublasHgemm, cublasSgemm, cublasGemmEx(algo=DEFAULT) Disallowed Allowed cublasGemmEx(algo=*_TENSOR_OP Allowed Allowed Constraint: M,N,K,LDA,LDB,LDC and A,B,C pointers must ALL be aligned to 8 because of high memory bandwidth needed to efficiently use Tensor Cores. 24 CUBLAS FUTURE IMPROVEMENTS ● Loosening constraints on Tensor Core usage: 1. CUDA 10.1 Update 2 will lift some restrictions so that only requirements remaining are: m%4 == 0 k%8 == 0 lda, ldb, ldc, A, B, C are aligned to 16 bytes, 2. Plan to lift the restriction completely by adding new kernels to work on mis-aligned memory in a future release. ● Plans to make Tensor Core “opt-out” instead of “opt-in” for all directly applicable data type combinations. 
● Plans to add NVTX based feedback to add information on tensor-core usage for detailed profiling. Plans are subject to change 25 CUBLASLT: NEW MATRIX MULTIPLICATION LIBRARY • Has its own header file, binary and lightweight context • Intended for power users of GEMMs that need advanced features and optimizations for their workflows • cuBLASLt is not a replacement for cuBLAS • Adds flexibility in: • new matrix data layouts: IMMA, and planar complex (Tensor Ops) • algorithmic implementation choices and heuristics • Workspace support enables new optimizations – e.g. split-k • Non-traditional memory ordering enables hardware optimizations such as INT8 IMMA on Turing GPUs #include <cublasLt.h> cublasLtCreate() cublasLtMatmul() cublasLtMatmulAlgoGetHeuristic() cublasLtMatmulAlgoConfigSetAttribute() solving linear system Ax = b LU factorization TENSOR CORE ACCELERATED IRS SOLVING LINEAR SYSTEM AX = B • LU factorization is used to solve a linear system Ax=b A x = b LUx = b A x b U L x b L y b U x y Ly = b then Ux = y panel update step 1 step 2 step 3 step 4 nb For s = 0, nb, .. N 1. panel factorize 2. update trailing matrix GEMM TRSM Panel L U LU factorization requires O(n3) most of the operations are spent in GEMM TENSOR CORE ACCELERATED IRS SOLVING LINEAR SYSTEM AX = B 29 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 FP64 FP32 FP16 (TC) V100 TFLOPS TENSOR CORE ACCELERATED LIBRARIES Multi-precision numerical methods LU factorization used to solve Ax=b is dominated by GEMMs Can it be accelerated using Tensor Cores and still get fp64 accuracy? 30 DIFFERENT LEVELS OF PRECISIONS USED DURING FACTORIZATION WITH TENSOR CORES FP32 FP32 FP16/FP32 FP32 FP32 FP16/FP32 … m=n 2k 4k 6k 8k 10k 12k 14k 16k 18k 20k 22k 24k 26k 28k 30k Tflop/s 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 FP16 TC square FP16 TC k=256 FP16 square FP16 k=256 FP32 square FP32 k=256 FP64 square FP64 k=256 STUDY OF THE MATRIX MATRIX MULTIPLICATION KERNEL ON NVIDIA V100 • dgemm achieve about 6.4 Tflop/s • sgemm achieve about 14 Tflop/s • hgemm achieve about 27 Tflop/s • Tensor cores gemm reach about 85 Tflop/s • Rank-k GEMM needed by LU does not perform as well as square but still OK Haidar et al., SC’18 proceedings Results obtained using MAGMA 2.5.0 and GV100 GPU matrix size 2k 4k 6k 8k 10k 14k 18k 22k 26k 30k 34k Tflop/s 0 2 4 6 8 10 12 14 16 18 20 22 24 26 FP16-TC (Tensor Cores) hgetrf LU FP16 hgetrf LU FP32 sgetrf LU FP64 dgetrf LU • LU factorization is used to solve a linear system Ax=b A x = b LUx = b Ly = b then Ux = y Study of the LU factorization algorithm on Nvidia V100 LEVERAGING HALF PRECISION IN HPC ON V100 MOTIVATION 3~4X A x b U L x b L y b U x y Main idea is to use lower precision to compute the expensive flops (LU O(n3)) and then iteratively refine the solution in order to achieve the FP64 arithmetic • Wilkinson, Moler, Stewart, & Higham provide error bound for SP fl pt results when using DP fl pt. • E. Carson and N. J. Higham. Accelerating the solution of linear systems by iterative refinement in three precisions. • It can be shown that using this approach we can compute the solution with residual similar to the 64-bit floating point precision. Leveraging Tensor Cores Iterative Refinement Solver Flops = 2n3/(3 time) meaning twice higher is twice faster Problem generated with an arithmetic distribution of the singular values and positive eigenvalues. 
σ_i = 1 − ((i − 1)/(n − 1)) · (1 − 1/cond)

Tensor Core Accelerated IRS solving linear system Ax = b: Performance Behavior

The performance plots compare four variants:
• solving Ax = b using FP64 LU
• solving Ax = b using FP32 LU and iterative refinement to achieve FP64 accuracy
• solving Ax = b using FP16 LU and iterative refinement to achieve FP64 accuracy
• solving Ax = b using FP16 Tensor Cores LU and iterative refinement to achieve FP64 accuracy

Flops = 2n³ / (3 · time), meaning twice higher is twice faster. On the problem generated with an arithmetic distribution of the singular values and positive eigenvalues (σ_i as above), the FP16 Tensor Core variant reaches roughly 4X the FP64 solver. A problem generated with a clustered distribution of the singular values also shows about 4X, while an arithmetic distribution of the singular values without the positive-eigenvalue assumption shows about 3X.

Convergence Checks: Is the solution really the same as the fp64 solver?
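To make the refinement loop itself concrete, here is a minimal CPU-only sketch of the idea the slides describe. It is not the MAGMA or Tensor Core implementation: single precision stands in for the FP16/Tensor-Core factorization, the test matrix and tolerance are made up for illustration, and the low-precision factorization is redone on every call rather than reused.

#include <cstdio>
#include <cmath>
#include <vector>

constexpr int N = 4;

// Unpivoted Gaussian elimination in float; returns x with A*x ~= b.
// (The hypothetical test matrix is diagonally dominant, so no pivoting.)
// A real solver would factor once and reuse the L and U factors.
void solve_low_precision(const std::vector<float>& A_in, std::vector<float> b,
                         std::vector<float>& x) {
  std::vector<float> A = A_in;
  for (int k = 0; k < N; ++k) {
    for (int i = k + 1; i < N; ++i) {
      float m = A[i * N + k] / A[k * N + k];
      for (int j = k; j < N; ++j) A[i * N + j] -= m * A[k * N + j];
      b[i] -= m * b[k];
    }
  }
  x.assign(N, 0.0f);
  for (int i = N - 1; i >= 0; --i) {
    float s = b[i];
    for (int j = i + 1; j < N; ++j) s -= A[i * N + j] * x[j];
    x[i] = s / A[i * N + i];
  }
}

int main() {
  // Hypothetical small, diagonally dominant system held in FP64.
  std::vector<double> A = {10, 1, 2, 0,   1, 12, 0, 3,
                            2, 0, 11, 1,  0, 3, 1, 9};
  std::vector<double> b = {1, 2, 3, 4};
  std::vector<float>  Af(A.begin(), A.end());

  // Initial solve entirely in low precision.
  std::vector<float> xf;
  solve_low_precision(Af, std::vector<float>(b.begin(), b.end()), xf);
  std::vector<double> x(xf.begin(), xf.end());

  // Refinement loop: residual and solution update accumulate in FP64,
  // but each correction is obtained from the low-precision solver.
  for (int iter = 0; iter < 10; ++iter) {
    std::vector<double> r(N);
    double rnorm = 0.0;
    for (int i = 0; i < N; ++i) {
      r[i] = b[i];
      for (int j = 0; j < N; ++j) r[i] -= A[i * N + j] * x[j];
      rnorm = std::fmax(rnorm, std::fabs(r[i]));
    }
    printf("iter %d, residual %.3e\n", iter, rnorm);
    if (rnorm < 1e-14) break;

    std::vector<float> cf;
    solve_low_precision(Af, std::vector<float>(r.begin(), r.end()), cf);
    for (int i = 0; i < N; ++i) x[i] += cf[i];
  }
  return 0;
}

Each pass shrinks the residual by roughly the working precision of the inner solver, which is why a handful of cheap low-precision corrections is enough to recover an FP64-quality answer for well-conditioned systems.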
Leveraging Tensor Cores Iterative Refinement Solver Haidar et al., SC’18 proceedings Results obtained using MAGMA 2.5.0 and GV100 GPU 43 Mixed-precision iterative refinement solver GV100 vs TU102 Results obtained using MAGMA 2.5.0 0 2 4 6 8 10 12 14 16 18 20 22 0k 4k 8k 12k 16k 20k 24k 28k 32k 36k 40k TFLOPS Matrix Size Performance of solving Ax= b to fp64 accuracy (GV100 & TU102) TU102 FP16-TC -> FP64 GV100 FP16-TC -> FP64 TU102 FP16 -> FP64 GV100 FP16 -> FP64 TU102 FP32 -> FP64 GV100 FP32 -> FP64 TU102 FP64 GV100 FP64 Mixed-precision iterative refinement solver Energy Efficiency Time (sec) 0 1 2 3 4 5 6 7 Average power CPU+GPU (Watts) 0 20 40 60 80 100 120 140 160 180 200 220 240 260 280 300 320 340 360 380 400 420 440 460 5.5 14 2021 Performance in Tflop/s Gflops/Watts Joules 10.7 27 1041 16.8 48 609 24.0 74 470 Solving Ax=b on Nvidia V100 FP64 solver dgesv FP32 --> 64 solver dsgesv FP16 --> 64 solver dhgesv FP16 --> 64 solver dhgesv (TC) CPU: 10 cores E5-2650 v3 GPU: Nvidia V100 Haidar et al., SC’18 proceedings Results obtained using MAGMA 2.5.0 and GV100 GPU TENSOR CORE ACCELERATED ITERATIVE REFINEMENT SOLVERS • Real & Planar Complex • FP32 & FP64 support cuSOLVER productization plans ~September 2019 LU Solver ~November 2019 Cholesky Solver ~November 2019 QR Solver *Plans subject to change 47 AUTOMATIC MIXED PRECISION Insert ~ two lines of code to introduce Automatic Mixed-Precision and get upto 3X speedup AMP uses a graph optimization technique to determine FP16 and FP32 operations Support for TensorFlow, PyTorch and MXNet Easy to Use, Greater Performance and Boost in Productivity Unleash the next generation AI performance and get faster to the market! 48 ENABLING AUTOMATIC MIXED PRECISION Add Just A Few Lines of Code, Get Upto 3X Speedup More details: https://developer.nvidia.com/automatic-mixed-precision TensorFlo w os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1' OR export TF_ENABLE_AUTO_MIXED_PRECISION=1 Explicit optimizer wrapper available in NVIDIA Container 19.07+, TF 1.14+, TF 2.0: opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt) GA PyTorch model, optimizer = amp.initialize(model, optimizer, opt_level="O1") with amp.scale_loss(loss, optimizer) as scaled_loss: scaled_loss.backward() GA MXNet amp.init() amp.init_trainer(trainer) with amp.scale_loss(loss, trainer) as scaled_loss: autograd.backward(scaled_loss) GA Coming Soon 49 ENABLING AUTOMATIC MIXED PRECISION Add Just A Few Lines of Code • TensorFlow • NVIDIA container 19.03+: • export TF_ENABLE_AUTO_MIXED_PRECISION=1 [automatic casting and automatic loss scaling] • Available in NVIDIA container 19.07+, TF 1.14+, TF 2.0: • We provide an explicit optimizer wrapper to perform loss scaling – which can also enable auto-casting for you: import tensorflow as tf opt = tf.train.GradientDescentOptimizer(0.5) opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt) 50 ENABLING AUTOMATIC MIXED PRECISION Add Just A Few Lines of Code • PyTorch • Two steps: initialization and wrapping backpropagation from apex import amp model = … optimizer = SomeOptimizer(model.parameters(), …) # … model, optimizer = amp.initialize(model, optimizer, opt_level=“O1”) # … for train_loop(): loss = loss_fn(model(x), y) with amp.scale_loss(loss, optimizer) as scaled_loss: scaled_loss.backward() # Can manipulate the .grads if you’d like optimizer.step() 51 ENABLING AUTOMATIC MIXED PRECISION Add Just A Few Lines of Code • MXNET • NVIDIA container 19.03+ and MXNET 1.5: from mxnet.contrib import amp amp.init() net = get_network() 
trainer = mx.gluon.Trainer(...) amp.init_trainer(trainer) for data in dataloader: with autograd.record(True): out = net(data) l = loss(out) with amp.scale_loss(l, trainer) as scaled_loss: autograd.backward(scaled_loss) trainer.step() 52 AUTOMATIC MIXED PRECISION IN TENSORFLOW Upto 3X Speedup All models can be found at: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow, except for ssd-rn50-fpn-640, which is here: https://github.com/tensorflow/models/tree/master/research/object_detection All performance collected on 1xV100-16GB, except bert-squadqa on 1xV100-32GB. Speedup is the ratio of time to train for a fixed number of epochs in single-precision and Automatic Mixed Precision. Number of epochs for each model was matching the literature or common practice (it was also confirmed that both training sessions achieved the same model accuracy). Batch sizes:. rn50 (v1.5): 128 for FP32, 256 for AMP+XLA; ssd-rn50-fpn-640: 8 for FP32, 16 for AMP+XLA; NCF: 1M for FP32 and AMP+XLA; bert-squadqa: 4 for FP32, 10 for AMP+XLA; GNMT: 128 for FP32, 192 for AMP. TensorFlow Medium Post: Automatic Mixed Precision in TensorFlow for Faster AI Training on NVIDIA GPUs 53 AUTOMATIC MIXED PRECISION IN PYTORCH ● Plot shows ResNet-50 result with/without automatic mixed precision(AMP) ● More AMP enabled model scripts coming soon: Mask-R CNN, GNMT, NCF, etc. https://developer.nvidia.com/automatic-mixed-precision FP32 AMP Enabled Mixed Precision Source: https://github.com/NVIDIA/apex/tree/master/examples/imagenet 2X 54 AUTOMATIC MIXED PRECISION IN MXNET https://github.com/apache/incubator-mxnet/pull/14173 AMP speedup ~1.5X to 2X in comparison with FP32 (*) based on ResNet50 v1.5 55 CUDA TENSOR CORE PROGRAMMING WMMA datatypes wmma::fragment<matrix_a, …> Amat; Per-Thread fragments to hold components of matrices for use with Tensor Cores 56 CUDA TENSOR CORE PROGRAMMING WMMA load and store operations wmma::load_matrix_sync(Amat, a, stride); Warp-level operation to fetch components of matrices into fragments warp 57 CUDA TENSOR CORE PROGRAMMING WMMA Matrix Multiply and Accumulate Operation wmma::mma_sync(Dmat, Amat, Bmat, Cmat); Warp-level operation to perform matrix multiply and accumulate D = 58 CUDA TENSOR CORE PROGRAMMING WMMA load and store operations wmma::store_matrix_sync(d, Dmat, stride); Warp-level operation to fetch components of matrices into fragments warp Result 59 TENSOR CORE EXAMPLE __device__ void tensor_op_16_16_16( float *d, half *a, half *b, float *c) { wmma::fragment<matrix_a, …> Amat; wmma::fragment<matrix_b, …> Bmat; wmma::fragment<matrix_c, …> Cmat; wmma::load_matrix_sync(Amat, a, 16); wmma::load_matrix_sync(Bmat, b, 16); wmma::fill_fragment(Cmat, 0.0f); wmma::mma_sync(Cmat, Amat, Bmat, Cmat); wmma::store_matrix_sync(d, Cmat, 16, wmma::row_major); } CUDA C++ Warp-Level Matrix Operations Create Fragments Initialize Fragments Perform MatMul Store Results 60 TENSOR CORES IN CUDA FORTRAN Similar to CUDA C WMMA API, with some name changes real(2) support for half-precision data available (on both host and device) in PGI 19.7 compilers Requires wmma Fortran module and macros in cuf_macros.CUF file 61 CUDA FORTRAN TENSOR CORE EXAMPLE #include "cuf_macros.CUF" module m contains attributes(global) subroutine wmma_16x16(a, b, c) use wmma real(2), intent(in) :: a(16,*), b(16,*) real(4) :: c(16,*) WMMASubMatrix(WMMAMatrixA, 16, 16, 16, Real, WMMAColMajor) :: sa WMMASubMatrix(WMMAMatrixB, 16, 16, 16, Real, WMMAColMajor) :: sb WMMASubMatrix(WMMAMatrixC, 16, 16, 16, Real, WMMAKind4) :: sc sc = 
0.0_4 call wmmaLoadMatrix(sa, a(1,1), 16) call wmmaLoadMatrix(sb, b(1,1), 16) call wmmaMatMul(sc, sa, sb, sc) call wmmaStoreMatrix(c(1,1), sc, 16) end subroutine wmma_16x16 end module m Device Code WMMA Definitions WMMA “fragments” Assignment overloaded to call fill_fragment() 62 CUDA FORTRAN TENSOR CORE EXAMPLE program main use m use cudafor integer, parameter :: m = 16, n=m, k=m real(4) :: a(m,k), b(k,n), c(m,n), cref(m,n) real(4), device :: c_d(m,n) real(2), device :: ah_d(m,k), bh_d(k,n) call random_number(a); a = int(4.*a); ah_d = a call random_number(b); b = int(4.*b); bh_d = b cref = matmul(a, b) c = 0.0 call wmma_16x16<<<1,32>>>(ah_d, bh_d, c_d) c = c_d if (sum(abs(c-cref)) == 0.0) write(*,*) ‘Test passed’ end program main Host Code Launch with a single warp of threads Host-device transfer and 4- to 2-byte conversion 63 MMA.SYNC 64 mma.sync: new instruction in CUDA 10.1 • Directly targets Volta Tensor Cores Matrix multiply-accumulate D = A * B + C • A, B: half • C, D: float or half Warp-synchronous: • Four independent 8-by-8-by-4 matrix multiply-accumulate operations VOLTA MMA.SYNC Warp-scoped matrix multiply instruction 65 VOLTA MMA.SYNC Warp is partitioned into Quad Pairs • QP0: T0..T3 T16..T19 • QP1: T4..T7 T20..T23 • QP2: T8..T11 T24..T27 • QP3: T12..T15 T28..T31 (eight threads each) Each Quad Pair performs one 8-by-8-by-4 matrix multiply Warp-scoped matrix multiply instruction 66 COMPOSING MATRIX MULTIPLIES Replicate data to compute warp-wide 16-by-16-by-4 matrix product • A0..7: QP0,QP2 A8..15: QP1, QP3 • B0..7: QP0,QP1 B8..15: QP2, QP3 1 x mma.sync: 16-by-16-by-4 67 VOLTA MMA.SYNC D = A * B + C PTX Syntax mma.sync.aligned.m8n8k4.alayout.blayout.dtype.f16.f16.ctype d, a, b, c; .alayout = {.row, .col}; .blayout = {.row, .col}; .ctype = {.f16, .f32}; .dtype = {.f16, .f32}; d: 8 x .dtype a: 4 x .f16 b: 4 x .f16 c: 8 x .ctype Note: .f16 elements must be packed into .f16x2 https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma 68 THREAD-DATA MAPPING - F16 MULTIPLICANDS Distributed among threads in quad pair (QP0 shown) ROW-COL (“TN”) COL-ROW (“NT”) mma.sync.aligned.m8n8k4.alayout.blayout.dtype.f16.f16.ctype d, a, b, c; .alayout = {.row, .col}; .blayout = {.row, .col}; a: 2 x .f16x2 b: 2 x .f16x2 69 CUTLASS CUDA C++ Template Library for Matrix Algebra CUTLASS template library for GEMM computations • Blocked structure to maximize data reuse • Software pipelined to hide latency • Conflict-free Shared Memory access to maximize data throughput See CUTLASS GTC 2018 talk. 
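The WMMA example on the slides abbreviates the fragment template parameters as "…" and omits the host side. Below is a hedged, self-contained version with the parameters written out. The layout choices (row-major A, column-major B, leading dimension 16) and the dropped unused C pointer are assumptions of this sketch, and the accumulator store uses the mem_row_major layout tag that the CUDA WMMA API expects. Build with something like nvcc -arch=sm_70.

// Self-contained 16x16x16 WMMA example in the spirit of the slides' kernel.
// Assumptions: row-major A, column-major B, ld = 16 for all tiles.
#include <mma.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>
using namespace nvcuda;

__global__ void tensor_op_16_16_16(float *d, const half *a, const half *b) {
  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> Amat;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> Bmat;
  wmma::fragment<wmma::accumulator, 16, 16, 16, float> Cmat;

  wmma::fill_fragment(Cmat, 0.0f);           // C = 0
  wmma::load_matrix_sync(Amat, a, 16);       // load A tile (ld = 16)
  wmma::load_matrix_sync(Bmat, b, 16);       // load B tile (ld = 16)
  wmma::mma_sync(Cmat, Amat, Bmat, Cmat);    // C += A * B on Tensor Cores
  wmma::store_matrix_sync(d, Cmat, 16, wmma::mem_row_major);  // write result
}

int main() {
  half *a, *b;
  float *d;
  cudaMalloc(&a, 16 * 16 * sizeof(half));
  cudaMalloc(&b, 16 * 16 * sizeof(half));
  cudaMalloc(&d, 16 * 16 * sizeof(float));
  // Inputs are left uninitialized in this sketch; a real test would fill a and b.
  tensor_op_16_16_16<<<1, 32>>>(d, a, b);    // one full warp per WMMA operation
  cudaDeviceSynchronize();
  cudaFree(a); cudaFree(b); cudaFree(d);
  return 0;
}

Note the launch configuration: WMMA operations are warp-wide, so the kernel must be launched with (a multiple of) 32 threads, matching the single warp of threads used in the CUDA Fortran host example above.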
70 CUTLASS DESIGN PATTERNS Templates: generic programming and compile-time optimizations Traits: describes properties, types, and functors used to specialize CUTLASS concepts Params: structure containing parameters and precomputed values; passed to kernel as POD Vectorized Memory Accesses: load and store as 32b, 64b, or 128b vectors Shape<>: describes size of a 4D vector quantity TileTraits<>: describes a 4D block of elements in memory Fragment<>: partitioning of a tile across a collection of threads TileIterator<>: loads a tile by a collection of threads; result is held in Fragment Design patterns and template concepts in CUTLASS 71 GEMM TEMPLATE KERNEL // // CUTLASS GEMM kernel // template <typename Gemm> __global__ void gemm_kernel(typename Gemm::Params params) { // Declare shared memory __shared__ typename Gemm::SharedStorage shared_storage; // Construct the GEMM object with cleared accumulators Gemm gemm(params); // Compute the matrix multiply-accumulate gemm.multiply_add(shared_storage.mainloop); // Update output memory efficiently gemm.update(shared_storage.epilogue); } // // Specialization for single-precision // typedef cutlass::gemm::SgemmTraits< cutlass::MatrixLayout::kColumnMajor, cutlass::MatrixLayout::kRowMajor, cutlass::Shape<8, 128, 128> > SgemmTraits; // Simplified kernel launch Gemm<SgemmTraits>::launch(params); CUTLASS provides building blocks for efficient device-side code • Helpers simplify common cases 72 EXAMPLE: VOLTA TENSOR CORES WMMA: Warp-synchronous Matrix Multiply-Accumulate • API for issuing operations to Volta Tensor Cores Targeting the CUDA WMMA API /// Perform warp-level multiply-accumulate using WMMA API template < /// Data type of accumulator typename ScalarC, /// Shape of warp-level accumulator tile typename WarpTile, /// Shape of one WMMA operation – e.g. 16x16x16 typename WmmaTile > struct WmmaMultiplyAdd { /// Compute number of WMMA operations typedef typename ShapeDiv<WarpTile, WmmaTile>::Shape Shape; /// Multiply: D = A*B + C inline __device__ void multiply_add( FragmentA const & A, FragmentB const & B, FragmentC const & C, FragmentD & D) { // Perform M-by-N-by-K matrix product using WMMA for (int n = 0; n < Shape::kH; ++n) { for (int m = 0; m < Shape::kW; ++m) { // WMMA API to invoke Tensor Cores nvcuda::wmma::mma_sync( D.elements[n][m], A.elements[k][m], B.elements[k][n], C.elements[n][m] ); } } } }; 73 CUTLASS 1.3 Reusable components targeting Volta Tensor Cores GlobalLoadIterator Transformer SharedStoreIterator SharedTileLoadIterator MatrixMultiply mma.sync Transformer SharedStoreIterator SharedLoaditerator GlobalLoadIterator GlobalStoreIterator Functor GlobalLoadStream Epilogue Warp Matrix Multiply 74 STORING TO SHARED MEMORY CUTLASS Tile Iterators to transform: • Global Memory: Canonical matrix layout Shared Memory: permuted shared memory layout cutlass/gemm/volta884_multiplicand.h // Defines iterators for loading and storing multiplicands template < /// Identifies multiplicand of GEMM (A or B) GemmOperand::Kind Operand, /// Specifies layout of data in source memory MatrixLayout::Kind Layout, /// Specifies threadblock tile shape typename Tile, /// Specifies warp tile shape typename WarpTile, /// Specifies the number of participating warps int WarpCount, /// Specifies the delta between warp tiles typename WarpDelta > struct Volta884Multiplicand { // // Thread-block load iterator (canonical matrix layout) // typedef ... LoadIterator; // // Thread-block store iterator (permuted SMEM layout) // typedef ... 
StoreIterator; // // Warp-level load iterator // typedef ... WarpLoadIterator; }; 75 LOADING FROM SHARED MEMORY CUTLASS Tile Iterators to transform: • Shared Memory: permuted shared memory layout Register File: mma.sync thread-data mapping cutlass/gemm/volta884_multiplicand.h // Defines iterators for loading and storing multiplicands template < /// Identifies multiplicand of GEMM (A or B) GemmOperand::Kind Operand, /// Specifies layout of data in source memory MatrixLayout::Kind Layout, /// Specifies threadblock tile shape typename Tile, /// Specifies warp tile shape typename WarpTile, /// Specifies the number of participating warps int WarpCount, /// Specifies the delta between warp tiles typename WarpDelta > struct Volta884Multiplicand { // // Thread-block load iterator (canonical matrix layout) // typedef ... LoadIterator; // // Thread-block store iterator (permuted SMEM layout) // typedef ... StoreIterator; // // Warp-level load iterator // typedef ... WarpLoadIterator; }; 76 EXECUTING MMA.SYNC CUTLASS Warp-scoped matrix multiply • Register File: mma.sync thread-data mapping Tensor Cores: mma.sync cutlass/gemm/volta884_multiply_add.h template < /// Shape of a warp-level GEMM (K-by-N-by-M) typename WarpGemmShape_, /// Layout of A multiplicand MatrixLayout::Kind LayoutA, /// Data type of A multiplicand typename ScalarA, /// Layout of B multiplicand MatrixLayout::Kind LayoutB, /// Data type of A multiplicand typename ScalarB, /// Data type of accumulators typename ScalarC, /// Whether infinite results are saturated to +-MAX_FLOAT bool SatFinite = false > struct Volta884MultiplyAdd { // // Multiply : d = (-)a*b + c. // CUTLASS_DEVICE void multiply_add( FragmentA const& A, FragmentB const& B, Accumulators const& C, Accumulators& D, bool negate = false) { ... } }; 77 SPEEDUP RELATIVE TO WMMA 1.06 1.10 1.10 1.25 1.37 1.41 1.42 1.43 1.43 1.44 1.44 1.45 1.45 1.45 1.46 1.46 1.46 1.46 1.47 1.47 1.50 1.61 1.66 1.67 1.71 1.71 1.73 1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8transformer_1024_1280_4096_column_columntransformer_1024_1280_1024_column_columntransformer_1024_1280_1024_row columntransformer_1024_1280_33712_column_columntransformer_33712_5120_1024_row_columntransformer_1024_5120_33712_column_columntransformer_1024_1024_1280_column_rowtransformer_1024_1024_2560_column_rowtransformer_33712_1280_1024_row_columntransformer_33712_2560_1024_row_columntransformer_4096_5120_1024_row columntransformer_1024_5120_1024_column_columntransformer_4096_2560_1024_row columntransformer_1024_2560_1024_column_columntransformer_1024_1024_5120_column_rowtransformer_1024_5120_1024_row columntransformer_4096_1280_1024_row columntransformer_1024_2560_1024_row columntransformer_1024_2560_4096_column_columntransformer_1024_5120_4096_column_columntransformer_1024_2560_33712_column_columntransformer_1024_33712_5120_column_rowtransformer_1024_33712_2560_column_rowtransformer_1024_33712_1280_column_rowtransformer_1024_4096_5120_column_rowtransformer_1024_4096_2560_column_rowtransformer_1024_4096_1280_column_row Speedup Transformer - CUTLASS 1.3 - mma.sync speedup vs WMMA V100 - CUDA 10.1 78 TENSOR CORES WITH VISUAL PROFILER • Visual Profiler allows gathering of Tensor Core Utilization after gathering a timeline. 
• Use the menu option “Run->Collect Metrics and Events” to select the “Tensor-Precision Function Unit Utilization” metric under “Metrics- >Multiprocessor” 79 TENSOR CORES WITH VISUAL PROFILER After clicking on the kernel of interest, select “GPU Details” 80 TENSOR CORES WITH NVPROF • Nvprof supports the tensor_precision_fu_utilization metric which reveals the utilization level of Tensor Cores in each kernel of your model. (Since CUDA9) • The utilization level of the multiprocessor function units that execute tensor core instructions on a scale of 0 to 10 Invocations Metric Name Metric Description Min Max Avg Device "Quadro GV100 (0)" Kernel: compute_gemm(__half const *, __half const *, float const *, float*, float, float) 1 tensor_precision_fu_utilization Tensor-Precision Function Unit Utilization Mid (5) Mid (5) Mid (5) nvprof -m tensor_precision_fu_utilization ./cudaTensorCoreGemm 81 TENSOR CORES WITH NSIGHT COMPUTE • The Nsight Compute CLI allows collecting several metrics related to tensor core usage • This data can be view from the CLI or via the Nsight Compute GUI nv-nsight-cu-cli --metrics sm__pipe_tensor_cycles_active.avg.pct_of_peak_sustained_active ./cudaTensorCoreGemm [ compute_gemm, 2019-Aug-08 12:48:39, Context 1, Stream 7 Section: Command line profiler metrics ---------------------------------------------------------------------- --------------- ------------------------------ sm__pipe_tensor_cycles_active.avg.pct_of_peak_sustained_active % 43.44 ---------------------------------------------------------------------- --------------- ------------------------------ 82 CASE STUDIES 83 ITERATIVE REFINEMENT (ASGARD & HPL) With NVIDIA Tensor Cores the simulations run 3.5X faster than previous methods so the team can simulate significantly longer physical times and help advance our understanding of how to sustain the plasma and generate energy ADVANCING FUSION DISCOVERIES Joint work with ORNL & UTK David Green, Ed Azevedo, Wael Elwasif, Graham Lopez, Tyler McDaniel, Lin Mu, Stan Tomov, Jack Dongarra ASGarD: Adaptive Sparse Grid Discretization Two stream instability study Scientists believe fusion is the future of energy but maintaining plasma reactions is challenging and disruptions can result in damage to the tokamak. Researchers at ORNL are simulating instabilities in the plasma to provide physicists a better understanding of what happens inside the reactor. 
ADVANCING FUSION DISCOVERIES Joint work with ORNL & UTK David Green, Ed Azevedo, Wael Elwasif, Graham Lopez, Tyler McDaniel, Lin Mu, Stan Tomov, Jack Dongarra ASGarD: Adaptive Sparse Grid Discretization Two stream instability study magma_dgesv_gpu( N, nrhs, d_A, ldda, ipiv, d_B, lddb, &info ); is replaced by magma_dhgesv_iteref_gpu( N, nrhs, d_A, ldda, h_ipiv, d_ipiv, d_B, lddb, d_X, lddx, d_workspace, &gesv_iter, &info); Mixed-precision iterative refinement solver Performance on a wider range of problems PERFORMANCE FOR REAL-LIFE MATRICES FROM THE SUITESPARSE COLLECTION AND FROM DENSE MATRIX ARISING FROM RADAR DESIGN name Description size k• (A) dgesv dsgesv dhgesv dhgesv- TC time(s) # iter time (s) speedup # iter time (s) speedup # iter time (s) speedup em192 radar design 26896 106 5.70 3 3.11 1.8328 40 5.21 1.0940 10 2.05 2.7805 appu NASA app benchmark 14000 104 0.43 2 0.27 1.5926 7 0.24 1.7917 4 0.19 2.2632 ns3Da 3D Navier Stokes 20414 7.6 103 1.12 2 0.69 1.6232 6 0.54 2.0741 4 0.43 2.6047 nd6k ND problem set 18000 3.5 102 0.81 2 0.45 1.8000 5 0.36 2.2500 3 0.30 2.7000 nd12k ND problem set 36000 4.3 102 5.36 2 2.75 1.9491 5 1.86 2.8817 3 1.31 4.0916 Poisson 2D Poisson problem 32000 2.1 106 3.81 2 2.15 1.7721 59 2.04 1.8676 10 1.13 3.3717 Vlasov 2D Vlasov problem 22000 8.3 103 1.65 2 0.95 1.7368 4 0.67 2.4627 3 0.48 3.4375 appu 1,853,104 nnz nd12k 14,220,946 nnz nd12k 1,679,599 nnz WORLD’S FASTEST SUPERCOMPUTER TRIPLES ITS PERFORMANCE RECORD… WORLD’S FASTEST SUPERCOMPUTER TRIPLES ITS PERFORMANCE RECORD… Using mixed precision iterative refinement approach we solved a matrix of order 10,091,520 on the DOE‘s Summit system. Composed of nodes made up of 2 IBM Power-9 processors, 22 cores each, and 6 Nvidia V100 GPUs The run used 4500 nodes Used a random matrix with large diagonal elements to insure convergence of the method. Mixed precision HPL achieved 445 PFLOPS or 2.95X over DP precision HPL result on the Top500. (148 PFLOPS) 89 TENSOR CORE FOR PARTICLE PUSH IN MAGNETIC FIELD 90 PARTICLE PUSH • The governing equation for particle velocity in magnetic field is given by: • ⅆ𝑣 ⅆ𝑡= 𝑞 𝑚v x B , 𝑣= 𝑣𝑒𝑙𝑜𝑐𝑖𝑡𝑦, 𝑞= 𝑐ℎ𝑎𝑟𝑔𝑒, 𝑚= 𝑚𝑎𝑠𝑠, 𝐵= 𝑚𝑎𝑔𝑛𝑒𝑡𝑖𝑐𝑓𝑖𝑒𝑙𝑑 • *Discretizing the above equation in 2 dimension can lead to : /*grab magnetic field at current position*/ B=EvalB(x); /*get new velocity at n+1*/ v2[0] = v[0] + q/m*B*v[1]*dt; v2[1] = v[1] - q/m*B*v[0]*dt; /*update position*/ x2[0] = x[0] + v2[0]*dt; x2[1] = x[1] + v2[1]*dt; /*push down*/ v[0]=v2[0]; v[1]=v2[1]; (x1,y1) (x2,y2) (x3,y3) (x4,y5) Gather by magnetic forces from the cell vertices. *Ref: https://www.particleincell.com/2011/vxb-rotation/ 91 BORIS METHOD *Ref: https://www.particleincell.com/2011/vxb-rotation/ *Boris method is the de facto standard for particle pushing in plasma simulation codes It is an explicit technique The following equations summarize Boris method 𝑣+ −𝑣− Δ𝑡 = 𝑞 2𝑚𝑣+ + 𝑣−× 𝐵 𝑣′ = 𝑣−+ 𝑣−× 𝑡 𝑣+ = 𝑣−+ 𝑣′ × s 𝑡= 𝑞 Τ 𝐵𝑚Δ Τ 𝑡2 𝑠= 2𝑡 1 + 𝑡2 In the absence of Electric Field. V+ acts as velocity update. Electric field can be easily added. 92 SCATTER PARTICLE INSTEAD OF GATHER (x1,y1) (x2,y2) (x3,y3) (x4,y5) Gather by interpolation forces from the cell vertices. (x1,y1) (x2,y2) (x3,y3) (x4,y5) Scatter particle properties to nodes and add compute at nodes. To use Tensor core, scatter properties of the particles and use WMMA to compute and assemble • We separate velocity direction and magnitude. Magnitude in FP32 while directions in FP16 • We pack velocity, t and s vectors into Tensor Core format. 
This is basically the scatter operation. • The GEMM updates velocities and add them back to particle final velocity at a given time step in FP32 93 IMPLEMENTING WITH WMMA // Half-precision, no tensor core int id = (int)(threadIdx.x % 32); for(int k = 0;k<8;k++) { float t =0; for(int i =0;i<16;i++) { t += __half2float( __hmul(X[i + k*16] , Y[id + i * 32]) ); } accmat[id + k*32] = t; } return; // Half-precision, with tensor core // Declare the fragments nvcuda::wmma::fragment<nvcuda::wmma::matrix_a, 8, 32, 16, half, nvcuda::wmma::col_major> a_frag; nvcuda::wmma::fragment<nvcuda::wmma::matrix_b, 8, 32, 16, half, nvcuda::wmma::row_major> b_frag; nvcuda::wmma::fragment<nvcuda::wmma::accumulat or, 8, 32, 16, float> acc_frag; nvcuda::wmma::fill_fragment(acc_frag, 0.0f); nvcuda::wmma::load_matrix_sync(a_frag, a, 8); nvcuda::wmma::load_matrix_sync(b_frag, b, 32); nvcuda::wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag); nvcuda::wmma::store_matrix_sync(c , acc_frag, 32, nvcuda::wmma::mem_row_major); return; 94 PICTC PERFORMANCE COMPARISON 0 10 20 Reference Tensor Cores Reference Tensor Cores Time (ms) – Smaller is BEtter Source: CUDA 10.1, Summit 2.1X 95 MACHINE LEARNING 96 DGX Mixed-Precision Led MLPerf Time to Accuracy on Single Node World’s Fastest Industry-Wide AI Benchmark Achieved on NVIDIA GPUs Image Classification RN50 v.1.5 MXNet Object Detection Mask R-CNN PyTorch Object Detection SSD PyTorch Translation (recurrent) GNMT PyTorch Translation (non- recurrent) Transformer PyTorch Recommendation NCF PyTorch 70 minutes 167 minutes 14 minutes 10 minutes 19 minutes 0.4 minutes Test Platform: DGX-2H - Dual-Socket Xeon Platinum 8174, 1.5TB system RAM, 16 x 32 GB Tesla V100 SXM-3 GPUs connected via NVSwitch 97 97 NVIDIA NGC MODEL SCRIPTS Tensor Core Optimized Deep Learning Examples 14 Available today! ● Tensor Core optimized for greater performance ● Test drive automatic mixed precision ● Actively updated by NVIDIA ● State-of-the-art accuracy using Tensor Cores ● Serves as a reference implementation ● Exposes hyperparameters and source code for further adjustment Accessible via: ● NVIDIA NGC https://ngc.nvidia.com/catalog/model-scripts ● GitHub https://www.github.com/NVIDIA/deeplearningexamples ● NVIDIA NGC Framework containers https://ngc.nvidia.com/catalog/containers 98 98 NVIDIA NGC MODEL SCRIPTS Tensor Core Examples Built for Multiple Use Cases and Frameworks A dedicated hub to download Tensor Core Optimized Deep Learning Examples on NGC https://ngc.nvidia.com/catalog/model-scripts?quickFilter=deep-learning 99 99 MODEL SCRIPTS FOR VARIOUS APPLICATIONS https://developer.nvidia.com/deep-learning-examples Computer Vision Speech & NLP Recommender Systems ● SSD PyTorch ● SSD TensorFlow ● UNET-Industrial TensorFlow ● UNET-Medical TensorFlow ● ResNet-50 v1.5 MXNet ● ResNet-50 PyTorch ● ResNet-50 TensorFlow ● Mask R-CNN PyTorch ● GNMT v2 TensorFlow ● GNMT v2 PyTorch ● Transformer PyTorch ● BERT (Pre-training and Q&A) TensorFlow ● NCF PyTorch ● NCF TensorFlow Text to Speech ● Tacotron2 and WaveGlow PyTorch 100 IMAGE CLASSIFICATION: MXNet ResNet-50 v1.5 https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_mxnet NGC 18.12+ MXNet container Source: https://github.com/NVIDIA/DeepLearningExamples/tree/master/MxNet/Classification/RN50v1.5 GPU:1xV100-16GB | DGX-1V | Batch Size: 208 (FP16), 96 (FP16) DGX-1V 8GPU 16G MXNet ResNet FP32 MXNet ResNet Mixed Precision Time to Train [Hours] 11.1 3.3 Train AccuracyTop 1% 76.67% 76.49% Perf. 
2,957 Img/sec 10,263 Img/sec Data set ImageNet 101 SPEECH SYNTHESIS: Tacotron 2 And WaveGlow v1.0 https://ngc.nvidia.com/catalog/model-scripts/nvidia:tacotron_2_and_waveglow_for_pytorch Source: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2 GPU:1xV100-16GB | DGX-1V | Batch Size: 208 (FP16), 96 (FP16) DGX-1V 16G Tacotron 2 FP32 Tacotron 2 Mixed Precision WaveGlow FP32 WaveGlow Mixed Precision Time to Train [Hours] 44 @ 1500 epochs 33.14 @ 1500 epochs 109.96 @ 1000 epochs 54.83 @ 1000 epochs Train Accuracy Loss (@1000 Epochs) 0.3629 0.3645 -6.1087 -6.0258 Perf. 10,843 tokens/sec 12,742 tokens/sec 257,687(*) samples/sec 500,375(*) samples/sec Data set LJ Speech Dataset (*) With sampling rate equal to 22050, one second of audio is generated from 22050 samples 102 LANGUAGE MODELING: BERT for TensorFlow https://ngc.nvidia.com/catalog/model-scripts/nvidia:bert_for_tensorflow NGC 19.03 TensorFlow container Source: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT GPU:8xV100-32GB | DGX-1 | Batch size per GPU: 4 DGX-1V 8GPU 32G TF BERT FP32 TF BERT Mixed Precision Time to Train [Hours] 0.77 (BSxGPU = 4) 0.51 (BSxGPU = 4) Train F1 (mean) 90.83 90.99 Perf. (BSxGPU = 4) 66.65 sentences/sec 129.16 sentences/sec Data set SQuaD (fine-tuning) 103 OBJECT DETECTION: TensorFlow SSD NGC 19.03 TensorFlow container https://ngc.nvidia.com/catalog/model-scripts/nvidia:ssd_for_tensorflow Source: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Detection/SSD GPU:8xV100-16GB | DGX-1V | Batch Size: 32 (FP32, Mixed) DGX-1V 8GPU 16G TF SSD FP32 TF SSD Mixed Precision Time to Train 1h 37min 1h 19min Accuracy (map) 0.268 0.269 Perf. (BSxGPU = 32) 569 Img/sec 752 Img/sec Data set COCO 2017 104 TRANSLATION: PyTorch GNMT https://ngc.nvidia.com/catalog/model-scripts/nvidia:gnmt_v2_for_pytorch NGC 19.01 PyTorch container Source: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Translation/GNMT GPU:16xV100-32GB | DGX-2 | Batch size: 128 (FP32, Mixed) DGX-2V 16GPU 32G PyTorch GNMT FP32 PyTorch GNMT Mixed Precision Time to Train [min] 58.6 26.3 Train Accuracy BLEU score 24.16 24.22 Perf. 314.831 tokens/sec 738,521 tokens/sec Data set WMT16 English to German 105 RECOMMENDER: PyTorch Neural Collaborative Filter https://ngc.nvidia.com/catalog/model-scripts/nvidia:ncf_for_pytorch NGC 18.12 PyTorch container Source: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Recommendation/NCF GPU:8xV100-16GB | DGX-1 | Batch size: 1,048,576 DGX-1V 8GPU 16G PyTorch NCF FP32 PyTorch NCF Mixed Precision Time to Accuracy [seconds] 32.68 20.42 Accuracy Hit Rate @10 0.96 0.96 Perf. 55,004,590 smp/sec 99,332,230 smp/sec Data set MovieLens 20M 106 INDUSTRIAL DEFECT DETECTION: TensorFlow U-Net https://ngc.nvidia.com/catalog/model-scripts/nvidia:unet_industrial_for_tensorflow NGC 19.03 TensorFlow container Source: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Segmentation/UNet_Industrial GPU:8xV100-16GB | DGX-1 | Batch size: 16 DAGM 2007 has 10 classes (for the competition). Each class has an independent IOU. DGX-1V 8GPU 16G TF U-Net FP32 TF U-Net Mixed Precision Time to Train 1 min 44 sec 1 min 36 sec IOU (Th=0.75 Class #4) 0.965 0.960 IOU (Th=0.75 Class #9) 0.988 0.988 Perf. 
445 Img/sec 491 Img/sec Data set DAGM 2007 107 Matching Accuracy for FP32 and Mixed Precision Values are measured with model running on (1) DGX-1V 8GPU 16G, (2) DGX-1V 8GPU 32G or (3) DGX-2V 16GPU 32G Model Script Framework Data Set Automatic or Manual Mixed- Precision FP32 Accuracy Mixed- Precision Accuracy FP32 Throughput Mixed-Precision Throughput Speedup BERT Q&A (2) TensorFlo w SQuaD AMP 90.83 Top 1 90.99 Top 1 66.65 sentences/sec 129.16 sentences/sec 1.94 SSD w/RN50 (1) TensorFlo w COCO 2017 AMP 0.268 mAP 0.269 mAP 569 images/sec 752 images/sec 1.32 GNMT (3) PyTorch WMT16 English to German Manual 24.16 BLEU 24.22 BLEU 314,831 tokens/sec 738,521 tokens/sec 2.35 Neural Collaborative Filter (1) PyTorch MovieLens 20M Manual 0.959 HR 0.960 HR 55,004,590 samples/sec 99,332,230 items/sec 1.81 U-Net Industrial (1) TensorFlo w DAGM 2007 AMP 0.965- 0.988 0.960-0.988 445 images/sec 491 images/sec 1.10 ResNet-50 v1.5 (1) MXNet ImageNet Manual 76.67 Top 1% 76.49 Top 1% 2,957 images/sec 10,263 images/sec 3.47 Tacotron 2 / WaveGlow 1.0 (1) PyTorch LJ Speech Dataset AMP 0.3629/ -6.1087 0.3645/ -6.0258 10,843 tok/s 257,687 smp/s 12,742 tok/s 500,375 smp/s 1.18/ 1.94 108 NON-TRADITIONAL USES 109 NON-TRADITIONAL USE OF TENSOR CORES Many problems can be reformulated in terms of dense matrix multiplication May not be most algorithmithically efficient, but tensor core performance can make up for large constant factor differences Example: computing correlations between sets of binary vectors e.g. for clustering points in 0,1 𝑁[Joubert et al, 2018] 𝐶𝑖, 𝑗= ෍ 𝑘 𝑁 𝐴𝑖𝑘∧𝐵𝑗𝑘 = ෍ 𝑘 𝑁 𝐴𝑖𝑘 ∗[𝐵𝑗𝑘] ⇒ 𝐶𝑖𝑗= ෍ 𝑘 𝐴𝑖𝑘𝐵𝑗𝑘 ⇒ 𝐶= 𝐴𝐵𝑇 Perfect use case for reduced precision: inputs are all in [0,1], outputs in [0,N] (or better) See https://www.olcf.ornl.gov/wp- content/uploads/2018/10/joubert_2019OLCFUserMeeting.pdf When you have a GEMM-shaped hammer... 110 HGEMM VS GEMMEX cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH); const __half *A = ...; const __half *B = ...; __half *C = ...; cublasHgemm(handle, transa, transb, m, n, k, alpha, A, lda, B, ldb, beta, C, ldc); ... float *C = ...; cublasGemmEx(handle, transa, transb, m, n, k, alpha, A, CUDA_R_16F, lda, B, CUDA_R_16F, ldb, beta, C, CUDA_R_32F, ldc, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP); vs. accumulates in FP16 exact results only up to N < 2048 accumulates in FP32 nearly as fast as cublasHgemm (same datapath, just a bit more I/O) exact results up to N < 224 make sure to ask for tensor cores! 111 PITFALL: LARGE VALUES FOR LD{A,B,C} When N gets large, A and B matrices can get very long and skinny Prefer the memory layout that keeps lda and ldb small Change the transa/transb parameters on cublas*Gemm* to match Your caches and TLBs will thank you! = * lda = 1000 ldb = 8000000 = * lda = 1000 ldb = 1000 ( )T   vs. 112 CONCLUSIONS 113 CONCLUSIONS When used appropriately Tensor Cores can achieve as much as an 8X performance increase. Real-world applications have seen > 2X performance improvement. 
High-throughput Matrix Multiplication requires careful data considerations A variety of High and Low-level approaches are available for programming Tensor Cores Tensor Cores show promise beyond Machine Learning applications 114 ADDITIONAL RESOURCES CUTLASS Basics - http://on-demand.gputechconf.com/gtc/2018/presentation/s8854-cutlass- software-primitives-for-dense-linear-algebra-at-all-levels-and-scales-within-cuda.pdf cuTensor & CUTLASS - https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/S9593/ cuBLAS - https://docs.nvidia.com/cuda/cublas/index.html Mixed Precision Training Guide - https://docs.nvidia.com/deeplearning/sdk/mixed-precision- training/index.html CUDA C Programming Guide (WMMA) - https://docs.nvidia.com/cuda/cuda-c-programming- guide/index.html#wmma PTX ISA - https://docs.nvidia.com/cuda/parallel-thread-execution/index.html CUDA Tensor Core Sample - https://docs.nvidia.com/cuda/cuda-samples/index.html#cuda- tensor-core-gemm
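Before moving on, here is a hedged host-side sketch that ties together the cuBLAS guidance from the math-mode and HGEMM-vs-GemmEx slides above: FP16 inputs, FP32 accumulation, and Tensor Cores explicitly requested. It follows the CUDA 10.x-era API shown in the deck; error checking and data initialization are omitted, and m, n, k are chosen as multiples of 8 to satisfy the alignment constraints mentioned earlier.

// FP16 inputs with FP32 accumulation via cublasGemmEx, Tensor Cores requested.
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

int main() {
  const int m = 1024, n = 1024, k = 1024;
  __half *A, *B;
  float *C;
  cudaMalloc(&A, sizeof(__half) * m * k);
  cudaMalloc(&B, sizeof(__half) * k * n);
  cudaMalloc(&C, sizeof(float) * m * n);

  cublasHandle_t handle;
  cublasCreate(&handle);
  cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);  // make sure to ask for tensor cores

  const float alpha = 1.0f, beta = 0.0f;
  // Column-major GEMM: C (m x n) = alpha * A (m x k) * B (k x n) + beta * C
  cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
               m, n, k,
               &alpha,
               A, CUDA_R_16F, m,     // FP16 input, lda = m
               B, CUDA_R_16F, k,     // FP16 input, ldb = k
               &beta,
               C, CUDA_R_32F, m,     // FP32 output, ldc = m
               CUDA_R_32F,           // accumulate in FP32
               CUBLAS_GEMM_DEFAULT_TENSOR_OP);

  cudaDeviceSynchronize();
  cublasDestroy(handle);
  cudaFree(A); cudaFree(B); cudaFree(C);
  return 0;
}

As the slides note, this costs only a little extra I/O relative to cublasHgemm while keeping exact integer-valued results up to much larger K, because the accumulation happens in FP32.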
Andrew Kerr, May 21, 2020 DEVELOPING CUDA KERNELS TO PUSH TENSOR CORES TO THE ABSOLUTE LIMIT ON NVIDIA A100 2 CUTLASS Team Andrew Kerr, Haicheng Wu, Manish Gupta, Duane Merrill, Pradeep Ramani Contributors Mostafa Hagog, Timothy Costa, Alan Kaatz, John Tran, Stephen Jones, Kyrylo Perelygin, Luke Durant, Piotr Majcher, Paul Springer, Markus Hohnerbach Acknowledgements Joel McCormack, Julien Demouth, Olivier Giroux, Bryce Lelbach, Cris Cecka ACKNOWLEDGEMENTS 3 Overview NVIDIA Ampere Architecture and CUTLASS 2.2 Tensor Cores on NVIDIA Ampere Architecture Accelerated matrix operations Efficient data movements for Tensor Cores Strategies for maximizing performance CUTLASS on NVIDIA A100 Optimal CUDA C++ templates for Tensor Cores AGENDA 4 OVERVIEW 5 NVIDIA AMPERE ARCHITECTURE New and Faster Tensor Core Operations  Floating-point Tensor Core operations 8x and 16x faster than F32 CUDA Cores  Integer Tensor Core operations 32x and 64x faster than F32 CUDA Cores  New IEEE double-precision Tensor Cores 2x faster than F64 CUDA Cores Additional Data Types and Mode  Bfloat16, double, Tensor Float 32 Asynchronous copy  Copy directly into shared memory – deep software pipelines Many additional new features – see “Inside NVIDIA Ampere Architecture” NVIDIA A100 6 PROGRAMMING NVIDIA AMPERE ARCHITECTURE Deep Learning and Math Libraries using Tensor Cores (with CUDA kernels under the hood) • cuDNN, cuBLAS, cuTENSOR, cuSOLVER, cuFFT, cuSPARSE • “CUDNN V8: New Advances in Deep Learning Acceleration” (GTC 2020 - S21685) • “How CUDA Math Libraries Can Help you Unleash the Power of the New NVIDIA A100 GPU” (GTC 2020 – S21681) • “Inside the Compilers, Libraries and Tools for Accelerated Computing” (GTC 2020 – S21766) CUDA C++ Device Code • CUTLASS, CUDA Math API, CUB, Thrust, libcu++ CUDA device code CUDA-accelerated math libraries with host-side API GPU GPU-accelerated application 7 PROGRAMMING NVIDIA AMPERE ARCHITECTURE This is a talk for CUDA programmers with CUDA C++ CUDA device code CUDA-accelerated math libraries with host-side API GPU GPU-accelerated application 8 CUTLASS https://github.com/NVIDIA/cutlass CUDA C++ Templates for Deep Learning and Linear Algebra CUDA 9.1 CUDA 10.1 CUDA 11 CUDA 9.2 CUDA 10.2 CUTLASS Preview Release CUTLASS 1.3 – native NVIDIA V100 Tensor Cores CUTLASS 2.2 – NVIDIA A100 CUTLASS 1.0 CUTLASS 2.0 – native NVIDIA Turing Tensor Cores 9 CUTLASS CUTLASS 2.2: optimal performance on NVIDIA Ampere Architecture • Higher throughput Tensor Cores: more than 2x speedup for all data types • New floating-point types: bfloat16, Tensor Float 32, double • Deep software pipelines with cp.async: efficient and latency tolerant CUTLASS 2.1 • Planar complex: complex-valued GEMMs with batching options targeting Volta and Turing Tensor Cores • BLAS-style host side API CUTLASS 2.0: significant refactoring using modern C++11 programming • Efficient: particularly for Turing Tensor Cores • Tensor Core programming model: reusable components for linear algebra kernels in CUDA • Documentation, profiling tools, reference implementations, SDK examples, more.. What’s new? 
https://github.com/NVIDIA/cutlass

10 CUTLASS PERFORMANCE ON NVIDIA AMPERE ARCHITECTURE
CUTLASS 2.2 - CUDA 11 Toolkit – NVIDIA A100, m=3456, n=4096
[Charts: GFLOP/s vs. GEMM K for mixed precision floating point (Tensor Core BF16/F16 and TF32 vs. CUDA Core F32), double precision floating point (Tensor Core F64 vs. CUDA Core F64), and mixed precision integer (Tensor Core INT8 and INT4 vs. CUDA Core INT8); reported speedups include 13x, 7.7x, 5.7x, 2x, and 13.8x.]

11 TENSOR CORES ON NVIDIA AMPERE ARCHITECTURE

12 WHAT ARE TENSOR CORES?
Matrix operations: D = op(A, B) + C
• Matrix multiply-add
• XOR-POPC
Input Data types: A, B
• half, bfloat16, Tensor Float 32, double, int8, int4, bin1
Accumulation Data Types: C, D
• half, float, int32_t, double

13 WHAT ARE TENSOR CORES?
Matrix operations: D = op(A, B) + C
• Matrix multiply-add
• XOR-POPC
M-by-N-by-K matrix operation
• Warp-synchronous, collective operation
• 32 threads within warp collectively hold A, B, C, and D operands

14 NVIDIA AMPERE ARCHITECTURE - TENSOR CORE OPERATIONS
https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma-and-friends

PTX | Data Types (A * B + C) | Shape | Speedup on NVIDIA A100 (vs F32 CUDA cores) | Speedup on Turing* (vs F32 Cores) | Speedup on Volta* (vs F32 Cores)
mma.sync.m16n8k16, mma.sync.m16n8k8 | F16 * F16 + F16, F16 * F16 + F32, BF16 * BF16 + F32 | 16-by-8-by-16, 16-by-8-by-8 | 16x | 8x | 8x
mma.sync.m16n8k8 | TF32 * TF32 + F32 | 16-by-8-by-8 | 8x | N/A | N/A
mma.sync.m8n8k4 | F64 * F64 + F64 | 8-by-8-by-4 | 2x | N/A | N/A
mma.sync.m16n8k32, mma.sync.m8n8k16 | S8 * S8 + S32 | 16-by-8-by-32, 8-by-8-by-16 | 32x | 16x | N/A
mma.sync.m16n8k64 | S4 * S4 + S32 | 16-by-8-by-64 | 64x | 32x | N/A
mma.sync.m16n8k256 | B1 ^ B1 + S32 | 16-by-8-by-256 | 256x | 128x | N/A

* Instructions with equivalent functionality for Turing and Volta differ in shape from the NVIDIA Ampere Architecture in several cases.
15 Warp-wide Tensor Core operation: 8-by-8-by-128b TENSOR CORE OPERATION: FUNDAMENTAL SHAPE 16 mma.sync.aligned (via inline PTX) int32_t D[2]; uint32_t const A; uint32_t const B; int32_t const C[2]; // Example targets 8-by-8-by-16 Tensor Core operation asm( "mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 " " { %0, %1 }, " " %2, " " %3, " " { %4, %5 }; " : "=r"(D[0]), "=r"(D[1]) : "r"(A), "r"(B), "r"(C[0]), "r"(C[1]) ); 8-by-8-by-16 S8 * S8 + S32 https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma-and-friends 17 Warp-wide Tensor Core operation: 16-by-8-by-128b EXPANDING THE M DIMENSION 18 mma.sync.aligned (via inline PTX) float D[4]; uint32_t const A[2]; uint32_t const B; float const C[4]; // Example targets 16-by-8-by-8 Tensor Core operation asm( "mma.sync.aligned.m16n8k8.row.col.f32.f16.f16.f32 " " { %0, %1, %2, %3 }, " " { %4, %5}, " " %6, " " { %7, %8, %9, %10 };" : "=f"(D[0]), "=f"(D[1]), "=f"(D[2]), "=f"(D[3]) : "r"(A[0]), "r"(A[1]), "r"(B), "f"(C[0]), "f"(C[1]) ); 16-by-8-by-8 F16 * F16 + F32 https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma-and-friends 19 Warp-wide Tensor Core operation: 16-by-8-by-256b EXPANDING THE K DIMENSION 20 mma.sync.aligned (via inline PTX) float D[4]; uint32_t const A[4]; uint32_t const B[2]; float const C[4]; // Example targets 16-by-8-by-32 Tensor Core operation asm( "mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32 " " { %0, %1, %2, %3 }, " " { %4, %5, %6, %7 }, " " { %8, %9 }, " " { %10, %11, %12, %13 };" : "=f"(D[0]), "=f"(D[1]), "=f"(D[2]), "=f"(D[3]) : "r"(A[0]), "r"(A[1]), "r"(A[2]), "r"(A[3]), "r"(B[0]), "r"(B[1]), "f"(C[0]), "f"(C[1]), "f"(C[2]), "f"(C[3]) ); 16-by-8-by-16 F16 * F16 + F32 https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma-and-friends 21 mma.sync.aligned (via inline PTX) int32_t D[4]; uint32_t const A[4]; uint32_t const B[2]; int32_t const C[4]; // Example targets 16-by-8-by-32 Tensor Core operation asm( "mma.sync.aligned.m16n8k32.row.col.s32.s8.s8.s32 " " { %0, %1, %2, %3 }, " " { %4, %5, %6, %7 }, " " { %8, %9 }, " " { %10, %11, %12, %13 };" : "=r"(D[0]), "=r"(D[1]), "=r"(D[2]), "=r"(D[3]) : "r"(A[0]), "r"(A[1]), "r"(A[2]), "r"(A[3]), "r"(B[0]), "r"(B[1]), "r"(C[0]), "r"(C[1]), "r"(C[2]), "r"(C[3]) ); 16-by-8-by-32 S8 * S8 + S32 https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma-and-friends 22 mma.sync.aligned (via inline PTX) uint32_t D[2]; // two registers needed (vs. four) uint32_t const A[4]; uint32_t const B[2]; uint32_t const C[2]; // two registers needed (vs. 
four) // Example targets 16-by-8-by-16 Tensor Core operation asm( "mma.sync.aligned.m16n8k16.row.col.f16.f16.f16.f16 " " { %0, %1}, " " { %2, %3, %4, %5 }, " " { %6, %7 }, " " { %8, %9 }; " : "=r"(D[0]), "=r"(D[1]) : "r"(A[0]), "r"(A[1]), "r"(A[2]), "r"(A[3]), "r"(B[0]), "r"(B[1]), "r"(C[0]), "r"(C[1]) ); 16-by-8-by-16 HALF-PRECISION : F16 * F16 + F16 https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma-and-friends C[0] C[1] 23 mma.sync.aligned (via inline PTX) uint64_t D[2]; // two 64-bit accumulators uint64_t const A; // one 64-bit element for A operand uint64_t const B; // one 64-bit element for B operand uint64_t const C[2]; // two 64-bit accumulators // Example targets 8-by-8-by-4 Tensor Core operation asm( "mma.sync.aligned.m8n8k4.row.col.f64.f64.f64.f64 " " { %0, %1}, " “ %2, " " %3, " " { %4, %5 }; " : "=l"(D[0]), "=l"(D[1]) : “l"(A), “l"(B), “l"(C[0]), “l"(C[1]) ); 8-by-8-by-4 DOUBLE-PRECISION: F64 * F64 + F64 https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma-and-friends 24 cutlass::arch::Mma /// Matrix multiply-add operation template < /// Size of the matrix product (concept: GemmShape) typename Shape, /// Number of threads participating int kThreads, /// Data type of A elements typename ElementA, /// Layout of A matrix (concept: MatrixLayout) typename LayoutA, /// Data type of B elements typename ElementB, /// Layout of B matrix (concept: MatrixLayout) typename LayoutB, /// Element type of C matrix typename ElementC, /// Layout of C matrix (concept: MatrixLayout) typename LayoutC, /// Inner product operator typename Operator > struct Mma; m-by-n-by-k CUTLASS: wraps PTX in template https://github.com/NVIDIA/cutlass/blob/master/include/cutlass/arch/mma_sm80.h 25 cutlass::arch::Mma __global__ void kernel() { // arrays containing logical elements Array<half_t, 8> A; Array<half_t, 4> B; Array< float, 4> C; // define the appropriate matrix operation arch::Mma< GemmShape<16, 8, 16>, 32, ... > mma; // in-place matrix multiply-accumulate mma(C, A, B, C); ... 
} 16-by-8-by-16 CUTLASS: wraps PTX in template https://github.com/NVIDIA/cutlass/blob/master/include/cutlass/arch/mma_sm80.h 26 EFFICIENT DATA MOVEMENT FOR TENSOR CORES 27 CUDA example __global__ void tensor_core_example_8x8x16( int32_t *D, uint32_t const *A, uint32_t const *B, int32_t const *C) { // Compute the coordinates of accesses to A and B matrices int outer = threadIdx.x / 4; // m or n dimension int inner = threadIdx.x % 4; // k dimension // Compute the coordinates for the accumulator matrices int c_row = threadIdx.x / 4; int c_col = 2 * (threadIdx.x % 4); // Compute linear offsets into each matrix int ab_idx = outer * 4 + inner; int cd_idx = c_row * 8 + c_col; // Issue Tensor Core operation asm( "mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 " " { %0, %1 }, " " %2, " " %3, " " { %4, %5 }; " : "=r"(D[cd_idx]), "=r"(D[cd_idx + 1]) : "r"(A[ab_idx]), "r"(B[ab_idx]), "r"(C[cd_idx]), "r"(C[cd_idx + 1]) ); } HELLO WORLD: TENSOR CORES Map each thread to coordinates of the matrix operation Load inputs from memory Perform the matrix operation Store the result to memory 28 CUDA example __global__ void tensor_core_example_8x8x16( int32_t *D, uint32_t const *A, uint32_t const *B, int32_t const *C) { // Compute the coordinates of accesses to A and B matrices int outer = threadIdx.x / 4; // m or n dimension int inner = threadIdx.x % 4; // k dimension // Compute the coordinates for the accumulator matrices int c_row = threadIdx.x / 4; int c_col = 2 * (threadIdx.x % 4); // Compute linear offsets into each matrix int ab_idx = outer * 4 + inner; int cd_idx = c_row * 8 + c_col; // Issue Tensor Core operation asm( "mma.sync.aligned.m8n8k16.row.col.s32.s8.s8.s32 " " { %0, %1 }, " " %2, " " %3, " " { %4, %5 }; " : "=r"(D[cd_idx]), "=r"(D[cd_idx + 1]) : "r"(A[ab_idx]), "r"(B[ab_idx]), "r"(C[cd_idx]), "r"(C[cd_idx + 1]) ); } PERFORMANCE IMPLICATIONS Load A and B inputs from memory: 2 x 4B per thread Perform one Tensor Core operation: 2048 flops per warp 2048 flops require 256 B of loaded data 8 flops/byte NVIDIA A100 Specifications: • 624 TFLOP/s (INT8) • 1.6 TB/s (HBM2) 400 flops/byte 8 flops/byte * 1.6 TB/s 12 TFLOP/s This kernel is global memory bandwidth limited. 29 FEEDING THE DATA PATH Efficient storing and loading through Shared Memory Tiled, hierarchical model: reuse data in Shared Memory and in Registers See CUTLASS GTC 2018 talk for more details about this model. 
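To actually run the "hello world" kernel from the slide above, something like the following hypothetical host harness can be used. It assumes tensor_core_example_8x8x16 is defined in the same file exactly as shown; A is 8x16 int8 and B is 16x8 int8, each packed four bytes per uint32_t (32 words per operand), while C and D are 8x8 int32. The m8n8k16 integer MMA requires a Turing-or-newer GPU (e.g., compile with -arch=sm_75).

#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// Kernel from the slide, assumed to be defined elsewhere in this file.
__global__ void tensor_core_example_8x8x16(int32_t *D, uint32_t const *A,
                                           uint32_t const *B, int32_t const *C);

int main() {
  const int ab_words = 8 * 16 / 4;   // 32 packed uint32_t words per operand
  const int cd_elems = 8 * 8;        // 64 accumulator elements

  uint32_t h_A[ab_words], h_B[ab_words];
  int32_t  h_C[cd_elems] = {0}, h_D[cd_elems];
  for (int i = 0; i < ab_words; ++i) {
    h_A[i] = 0x01010101u;            // every int8 element of A = 1
    h_B[i] = 0x02020202u;            // every int8 element of B = 2
  }

  uint32_t *d_A, *d_B; int32_t *d_C, *d_D;
  cudaMalloc(&d_A, sizeof(h_A));  cudaMalloc(&d_B, sizeof(h_B));
  cudaMalloc(&d_C, sizeof(h_C));  cudaMalloc(&d_D, sizeof(h_D));
  cudaMemcpy(d_A, h_A, sizeof(h_A), cudaMemcpyHostToDevice);
  cudaMemcpy(d_B, h_B, sizeof(h_B), cudaMemcpyHostToDevice);
  cudaMemcpy(d_C, h_C, sizeof(h_C), cudaMemcpyHostToDevice);

  tensor_core_example_8x8x16<<<1, 32>>>(d_D, d_A, d_B, d_C);  // one warp
  cudaMemcpy(h_D, d_D, sizeof(h_D), cudaMemcpyDeviceToHost);

  // With all-ones A, all-twos B, and C = 0, every element of D should be
  // 16 * 1 * 2 = 32.
  printf("D[0] = %d\n", h_D[0]);

  cudaFree(d_A); cudaFree(d_B); cudaFree(d_C); cudaFree(d_D);
  return 0;
}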
30 FEEDING THE DATA PATH Move data from Global Memory to Tensor Cores as efficiently as possible • Latency-tolerant pipeline from Global Memory • Conflict-free Shared Memory stores • Conflict-free Shared Memory loads 31 ASYNCHRONOUS COPY: EFFICIENT PIPELINES New NVIDIA Ampere Architecture feature: cp.async • Asynchronous copy directly from Global to Shared Memory • See “Inside the NVIDIA Ampere Architecture” for more details (GTC 2020 – S21730) Enables efficient software pipelines • Minimizes data movement: L2 L1 RF SMEM becomes L2 SMEM • Saves registers: RF no longer needed to hold the results of long-latency load instructions • Indirection: fetch several stages in advance for greater latency tolerance Committed Stage SMEM write pointer Copies in flight Circular buffer in Shared Memory cp.async cp.async cp.async ld.shared 32 FEEDING THE DATA PATH Move data from Global Memory to Tensor Cores as efficiently as possible • Latency-tolerant pipeline from Global Memory • Conflict-free Shared Memory stores • Conflict-free Shared Memory loads 33 GLOBAL MEMORY TO TENSOR CORES Global Memory Shared Memory Tensor Cores cp.async M dimension K dimension 34 LDMATRIX: FETCH TENSOR CORE OPERANDS PTX instruction to load a matrix from Shared Memory Shared Memory Tensor Cores Each thread supplies a pointer to 128b row of data in Shared Memory Each 128b row is broadcast to groups of four threads (potentially different threads than the one supplying the pointer) Data matches arrangement of inputs to Tensor Core operations 35 LDMATRIX: PTX INSTRUCTION Each thread supplies a pointer to 128b row of data in Shared Memory Each 128b row is broadcast to groups of four threads (potentially different threads than the one supplying the pointer) Data matches arrangement of inputs to Tensor Core operations PTX instruction to load a matrix from SMEM // Inline PTX assembly for ldmatrix uint32_t R[4]; uint32_t smem_ptr; asm volatile ( "ldmatrix.sync.aligned.x4.m8n8.shared.b16 " "{%0, %1, %2, %3}, [%4]; " : "=r"(R[0]), "=r"(R[1]), "=r"(R[2]), "=r"(R[3]) : "r"(smem_ptr) ); 36 GLOBAL MEMORY TO TENSOR CORES Tensor Cores ldmatrix cp.async Global Memory Shared Memory 37 NVIDIA AMPERE ARCHITECTURE – SHARED MEMORY BANK TIMING Bank conflicts between threads in the same phase 4B words are accessed in 1 phase 8B words are accessed in 2 phases: • Process addresses of the first 16 threads in a warp • Process addresses of the second 16 threads in a warp 16B words are accessed in 4 phases: • Each phase processes 8 consecutive threads of a warp Slide borrowed from: Guillaume Thomas-Collignon and Paulius Micikevicius. "Volta Architecture and performance optimization.” GTC 2018. http://on-demand.gputechconf.com/gtc/2018/presentation/s81006-volta-architecture-and-performance-optimization.pdf 128 bit access size Phase 0: T0 .. T7 Phase 1: T8 .. T15 Phase 2: T16 .. T23 Phase 3: T24 .. T31 38 GLOBAL MEMORY TO TENSOR CORES Global Memory Shared Memory Registers Bank conflict on either store or load from Shared Memory ldmatrix cp.async 39 GLOBAL TO SHARED MEMORY Load (128 bits per thread) Store (128 bits per thread) Permuted Shared Memory layout XOR function maps thread index to Shared Memory location 40 GLOBAL TO SHARED MEMORY Phase 0: T0 .. T7 Phase 1: T8 .. T15 Phase 2: T16 .. T23 Phase 3: T24 .. T31 Load (128 bits per thread) Store (128 bits per thread) 41 GLOBAL TO SHARED MEMORY Phase 0: T0 .. T7 Phase 1: T8 .. T15 Phase 2: T16 .. T23 Phase 3: T24 .. 
T31 Load (128 bits per thread) Store (128 bits per thread) 42 GLOBAL TO SHARED MEMORY Phase 0: T0 .. T7 Phase 1: T8 .. T15 Phase 2: T16 .. T23 Phase 3: T24 .. T31 Load (128 bits per thread) Store (128 bits per thread) 43 GLOBAL TO SHARED MEMORY Phase 0: T0 .. T7 Phase 1: T8 .. T15 Phase 2: T16 .. T23 Phase 3: T24 .. T31 Load (128 bits per thread) Store (128 bits per thread) 44 FEEDING THE DATA PATH Move data from Global Memory to Tensor Cores as efficiently as possible • Latency-tolerant pipeline from Global Memory • Conflict-free Shared Memory stores • Conflict-free Shared Memory loads 45 LOADING FROM SHARED MEMORY TO REGISTERS 46 LOADING FROM SHARED MEMORY TO REGISTERS 47 LOADING FROM SHARED MEMORY TO REGISTERS 48 LOADING FROM SHARED MEMORY TO REGISTERS 49 ADVANCING TO NEXT K GROUP K=16 .. 31 K=0 ..15 50 ADVANCING TO NEXT K GROUP smem_ptr = row_idx * 8 + column_idx; smem_ptr = smem_ptr ^ 2; K=0..15 K=16..31 51 LOADING FROM SHARED MEMORY TO REGISTERS Phase 0 K=16..31 52 LOADING FROM SHARED MEMORY TO REGISTERS Phase 1 K=16..31 53 LOADING FROM SHARED MEMORY TO REGISTERS Phase 2 K=16..31 54 LOADING FROM SHARED MEMORY TO REGISTERS Phase 3 K=16..31 55 CUTLASS CUDA C++ Templates as an Optimal Abstraction Layer for Tensor Cores • Latency-tolerant pipeline from Global Memory • Conflict-free Shared Memory stores • Conflict-free Shared Memory loads 56 CUTLASS: OPTIMAL ABSTRACTION FOR TENSOR CORES using Mma = cutlass::gemm::warp::DefaultMmaTensorOp< GemmShape<64, 64, 16>, half_t, LayoutA, // GEMM A operand half_t, LayoutB, // GEMM B operand float, RowMajor // GEMM C operand >; __shared__ ElementA smem_buffer_A[Mma::Shape::kM * GemmK]; __shared__ ElementB smem_buffer_B[Mma::Shape::kN * GemmK]; // Construct iterators into SMEM tiles Mma::IteratorA iter_A({smem_buffer_A, lda}, thread_id); Mma::IteratorB iter_B({smem_buffer_B, ldb}, thread_id); Mma::FragmentA frag_A; Mma::FragmentB frag_B; Mma::FragmentC accum; Mma mma; accum.clear(); #pragma unroll 1 for (int k = 0; k < GemmK; k += Mma::Shape::kK) { iter_A.load(frag_A); // Load fragments from A and B matrices iter_B.load(frag_B); ++iter_A; ++iter_B; // Advance along GEMM K to next tile in A // and B matrices // Compute matrix product mma(accum, frag_A, frag_B, accum); } Shared Memory Tensor Cores Warp-level matrix multiply 57 CUTLASS: OPTIMAL ABSTRACTION FOR TENSOR CORES using Mma = cutlass::gemm::warp::DefaultMmaTensorOp< GemmShape<64, 64, 16>, half_t, LayoutA, // GEMM A operand half_t, LayoutB, // GEMM B operand float, RowMajor // GEMM C operand >; __shared__ ElementA smem_buffer_A[Mma::Shape::kM * GemmK]; __shared__ ElementB smem_buffer_B[Mma::Shape::kN * GemmK]; // Construct iterators into SMEM tiles Mma::IteratorA iter_A({smem_buffer_A, lda}, thread_id); Mma::IteratorB iter_B({smem_buffer_B, ldb}, thread_id); Mma::FragmentA frag_A; Mma::FragmentB frag_B; Mma::FragmentC accum; Mma mma; accum.clear(); #pragma unroll 1 for (int k = 0; k < GemmK; k += Mma::Shape::kK) { iter_A.load(frag_A); // Load fragments from A and B matrices iter_B.load(frag_B); ++iter_A; ++iter_B; // Advance along GEMM K to next tile in A // and B matrices // Compute matrix product mma(accum, frag_A, frag_B, accum); } Tile Iterator Constructors: Initialize pointers into permuted Shared Memory buffers Fragments: Register-backed arrays holding each thread’s data Warp-level matrix multiply: Decomposes a large matrix multiply into Tensor Core operations Tile Iterator: load() - Fetches data from permuted Shared Memory buffers operator++() - advances to the next logical matrix in SMEM 
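The permuted Shared Memory layout above is described only pictorially. As a rough illustration of the underlying idea, here is a hedged sketch of one common XOR-swizzle formulation; it is not necessarily the exact permutation CUTLASS implements, and the tile geometry (rows of eight 128-bit vectors) is an assumption of this sketch. The slide's smem_ptr ^ 2 trick for advancing to the next K group is another instance of the same XOR idea.

#include <cstdio>

// Map a logical (row, col) 128-bit-vector coordinate to a physical offset.
// XORing the column with the low three bits of the row is a bijection within
// each row, so a row's eight vectors still occupy eight distinct slots, while
// vectors that share a logical column land in different physical columns
// (hence different groups of banks) across rows.
constexpr int swizzled_offset(int row, int col) {
  return row * 8 + (col ^ (row & 7));
}

int main() {
  // Print the physical column chosen for each (row, col) of an 8x8 tile to
  // show that every row and every logical column is a permutation of 0..7.
  for (int row = 0; row < 8; ++row) {
    for (int col = 0; col < 8; ++col)
      printf("%d ", swizzled_offset(row, col) % 8);
    printf("\n");
  }
  return 0;
}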
CUTLASS ON NVIDIA A100

CUTLASS relative performance to cuBLAS (CUTLASS 2.2, CUDA 11 Toolkit, NVIDIA A100): across DGEMM, IGEMM, SGEMM and the TensorOp (f16, f32, TF32) kernels, and across all four transpose layouts (NN, NT, TN, TT), CUTLASS reaches roughly 80% to 99% of cuBLAS throughput, with most configurations above 90%.

CUTLASS relative performance to cuBLAS across three generations of GPU architectures (TitanV, 2080Ti, A100), CUTLASS 2.2, CUDA 11 Toolkit: the same pattern holds on each architecture for every data type and layout.

Arbitrary problem size: CUTLASS templates cover the design space. GFLOP/s as a function of GEMM K (64 up to 4096) for F16 * F16 + F32 Tensor Core GEMMs on an A100 (CUTLASS 2.2), with separate curves for 128b, 64b, 32b and 16b alignment and for CUDA 10.2 and before; the vertical axis spans 0 to 250,000 GFLOP/s.

CONCLUSION: NVIDIA A100 IS FAST AND PROGRAMMABLE

Tensor Cores on NVIDIA A100 in CUDA:
• Order-of-magnitude speedup for matrix computations
• Programmable in CUDA via mma.sync with zero overhead
• Kernel design can avoid memory bottlenecks
• The CUDA 11 Toolkit is capable of near-peak performance

CUTLASS 2.2 (May 2020):
• Open-source CUDA C++ template library for CUDA development
• Reusable building blocks for utilizing Tensor Cores on NVIDIA GPUs
• Near-optimal performance on the NVIDIA Ampere architecture

Try it out! https://github.com/NVIDIA/cutlass
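Beyond CUTLASS, the warp matrix (WMMA) intrinsics in mma.h are the most compact CUDA C++ entry point to the Tensor Cores; the slides use the lower-level mma.sync path that CUTLASS wraps. A minimal hedged sketch, in which a single warp computes one 16x16x16 tile (wmma_16x16x16 is an illustrative name, not from the slides):

#include <mma.h>
using namespace nvcuda;

__global__ void wmma_16x16x16(const half *A, const half *B, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);              // like accum.clear() in the CUTLASS snippet
    wmma::load_matrix_sync(a_frag, A, 16);          // leading dimension of A is 16
    wmma::load_matrix_sync(b_frag, B, 16);          // leading dimension of B is 16
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // D = A * B + C on the Tensor Cores
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}

Launched as wmma_16x16x16<<<1, 32>>>(dA, dB, dC), every thread of the single warp cooperates in the load, multiply and store; real kernels tile many such fragments per warp.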
CUDA Thread Execution Model

Grid of Thread Blocks

GPU Architecture

To understand the thread execution model for modern GPUs, we must first make an analysis of the GPU compute architecture. In this article I will focus on the Fermi compute architecture found in modern GPUs such as the GTX 580.

Overview of the Fermi Architecture

A Fermi GPU consists of 512 CUDA cores. These 512 CUDA cores are split across 16 Streaming Multiprocessors (SMs), each SM consisting of 32 CUDA cores. The GPU has six 64-bit memory partitions supporting up to 6 GB of GDDR5 DRAM memory.

Fermi Architecture

Each streaming multiprocessor (SM) has 32 CUDA cores. Each CUDA core consists of an integer arithmetic logic unit (ALU) and a floating point unit (FPU).

Fermi Streaming Multiprocessor (SM)

The SM has 16 load/store units, allowing source and destination addresses to be calculated for sixteen threads per clock. Each SM also has four Special Function Units (SFUs) that execute transcendental instructions such as sine, cosine, reciprocal, and square root.

CUDA Threads

Now that we've seen the specific architecture of a Fermi GPU, let's analyze the more general CUDA thread execution model. Each kernel function is executed in a grid of threads. This grid is divided into blocks, also known as thread blocks, and each block is further divided into threads.

CUDA Execution Model

In the image above we see that this example grid is divided into nine thread blocks (3×3), and each thread block consists of 9 threads (3×3), for a total of 81 threads for the kernel grid. The image only shows a 2-dimensional grid, but if the graphics device supports compute capability 2.0, then the grid of thread blocks can actually be partitioned into 1, 2 or 3 dimensions; if the device only supports compute capability 1.x, then the grid can be partitioned into 1 or 2 dimensions (in that case, the 3rd dimension should always be set to 1). The thread block is partitioned into individual threads, and for all compute capabilities, the threads of a block can be arranged in 1, 2, or 3 dimensions. The maximum number of threads that can be assigned to a thread block is 512 for devices with compute capability 1.x and 1024 for devices that support compute capability 2.0.

The number of blocks within a grid can be determined within a kernel by using the built-in variable gridDim, and the number of threads within a block can be determined by using the built-in variable blockDim. A thread block is uniquely identified in a kernel function by using the built-in variable blockIdx, and a thread within a block is uniquely identified by using the built-in variable threadIdx. The built-in variables gridDim, blockDim, blockIdx, and threadIdx are each 3-component structs with members x, y, z.

With a 1-D kernel, the unique thread ID within a block is simply the x component of the threadIdx variable, and the unique block ID within a grid is the x component of the blockIdx variable. To determine the unique thread ID in a 2-D block, you would use the formula threadId = threadIdx.y * blockDim.x + threadIdx.x, and to determine the unique block ID within a 2-D grid, you would use the formula blockId = blockIdx.y * gridDim.x + blockIdx.x. I'll leave it as an exercise for the reader to determine the formulas for the unique thread ID and block ID in a 3-D grid.

Matrix Addition Example

Let's take a look at an example kernel that one might execute.
Let's assume we want to implement a kernel function that adds two matrices and stores the result in a third. The general formula for matrix addition is C(i,j) = A(i,j) + B(i,j); that is, the sum of matrix A and matrix B is the component-wise sum of A and B.

Let's first write the host version of this method that we would execute on the CPU (a sketch appears at the end of this section). It is a pretty standard method that loops through the rows and columns of a matrix, adds the components, and stores the results in a third matrix.

Now let's see how we might execute this kernel on the GPU using CUDA. First, we need to think of the problem domain. In this case, the domain is trivial: it is the components of a matrix. Since we are operating on 2-D arrays, it seems reasonable to split our domain into two dimensions; one for the rows, and another for the columns of the matrices. We will assume that we are working on square matrices. This simplifies the problem, but mathematically matrix addition only requires that the two matrices have the same number of rows and columns; it does not require that the matrices be square.

Since a kernel is limited to 512 threads/block with compute capability 1.x and 1024 threads/block with compute capability 2.0, we can split our job into square thread blocks, each consisting of 16×16 threads (256 threads per block) with compute capability 1.x or 32×32 threads (1024 threads per block) with compute capability 2.0.

If we limit the size of our matrix to no larger than 16×16, then we only need a single block to compute the matrix sum. In this simple case, the kernel grid consists of only a single block with matrixRank × matrixRank threads.

However, if we want to sum matrices larger than 512 components, then we must split our problem domain into smaller groups that can be processed in multiple blocks. Let's assume that we want to limit our blocks to 16×16 (256) threads. We can determine the number of blocks required to operate on the entire array by dividing the size of the matrix dimension by the block dimension and rounding up to the nearest whole number:

blocks = ceil(matrixRank / 16)

And we can determine the number of threads per block by dividing the size of the matrix dimension by the number of blocks and rounding up to the nearest whole number:

threads = ceil(matrixRank / blocks)

So for example, for a 4×4 matrix we would get blocks = ceil(4/16) = 1, and the number of threads is computed as threads = ceil(4/1) = 4, resulting in a 1×1 grid of 4×4 thread blocks for a total of 16 threads. For another example, a 512×512 matrix, we would get blocks = ceil(512/16) = 32, and the number of threads is computed as threads = ceil(512/32) = 16, resulting in a 32×32 grid of 16×16 thread blocks for a total of 262,144 threads. Host code that sets up the kernel granularity along these lines is included in the sketch at the end of this section.

The Matrix Addition Kernel Function

On the device, the kernel function is executed by one thread for every element in the problem domain (the matrix elements). We can use the built-in variables gridDim, blockDim, blockIdx, and threadIdx to identify the matrix element that the current thread is operating on. If we assume we have a 9×9 matrix and we split the problem domain into 3×3 blocks, each consisting of 3×3 threads as shown in the CUDA Grid below, then we can compute the ith column and the jth row of the matrix with the following formulas:

i = blockIdx.x * blockDim.x + threadIdx.x
j = blockIdx.y * blockDim.y + threadIdx.y

So for thread (0,0) of block (1,1) of our 9×9 matrix, we would get i = 1 * 3 + 0 = 3 for the column and j = 1 * 3 + 0 = 3 for the row. The index into the 1-D buffer that stores the matrix is then computed as index = j * 9 + i, and substituting gives index = 3 * 9 + 3 = 30, which is the correct element in the matrix.
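A hedged reconstruction of the pieces described above: the CPU reference implementation, the device kernel, and the host-side launch configuration. The names (matrixRank, matrixAddHost, matrixAddKernel, d_A, d_B, d_C) are illustrative rather than taken from the original listings.

// CPU reference: loop over rows and columns and add component-wise.
void matrixAddHost(const float *A, const float *B, float *C, int matrixRank) {
    for (int row = 0; row < matrixRank; ++row)
        for (int col = 0; col < matrixRank; ++col) {
            int i = row * matrixRank + col;
            C[i] = A[i] + B[i];
        }
}

// Device kernel: one thread per matrix element, row-major 1-D indexing, bounds check.
__global__ void matrixAddKernel(const float *A, const float *B, float *C, int matrixRank) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col < matrixRank && row < matrixRank) {
        int i = row * matrixRank + col;
        C[i] = A[i] + B[i];
    }
}

// Host-side launch configuration: round the grid size up so the whole matrix is covered.
int threads = 16;
int blocks  = (matrixRank + threads - 1) / threads;
dim3 blockDim(threads, threads, 1);
dim3 gridDim(blocks, blocks, 1);
matrixAddKernel<<<gridDim, blockDim>>>(d_A, d_B, d_C, matrixRank);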
This solution assumes we are accessing the matrix in row-major order.

CUDA Grid Example

Let's see how this works in the kernel (a sketch of the full example appears above). In the kernel we first compute the column and row of the matrix element we are operating on, using the formulas shown earlier. Next, the 1-D index into the matrix array is computed based on the size of a single dimension of the square matrix. We must be careful that we don't try to read or write out of the bounds of the matrix. This might happen if the size of the matrix does not fit nicely into the size of the CUDA grid (in the case of matrices whose size is not evenly divisible by 16). To protect the read and write operations, we must also check that the computed indices do not exceed the bounds of our array.

Thread Synchronization

CUDA provides a synchronization barrier for all threads in a block through the __syncthreads() method. A practical example of thread synchronization will be shown in a later article about optimizing a CUDA kernel, but for now it's only important that you know this functionality exists. Thread synchronization is only possible across the threads in a block, not across all threads running in the grid. By not allowing threads in different blocks to be synchronized, CUDA enables multiple blocks to be executed on other streaming multiprocessors (SMs) in any order. The queue of blocks can be distributed to any SM without having to wait for blocks on another SM to complete. This allows CUDA-enabled applications to scale across platforms that have more SMs at their disposal, executing more blocks concurrently than platforms with fewer SMs.

Thread synchronization follows strict rules: all threads in a block must hit the synchronization point, or none of them must hit it. A code block in which __syncthreads() appears on both sides of a branch (see the sketch just before the discussion of warp scheduling below) will cause the threads in a block to wait indefinitely for each other, because the two occurrences of __syncthreads are considered separate synchronization points, and all threads of the same block must hit the same synchronization point, or none of them must hit it.

Thread Assignment

When a kernel is invoked, the CUDA runtime will distribute the blocks across the SMs on the device. A maximum of 8 blocks (regardless of platform) will be assigned to each SM as long as there are enough resources (registers, shared memory, and threads) to execute all the blocks. In the case where there are not enough resources on the SM, the CUDA runtime will automatically assign fewer blocks per SM until the resource usage is below the maximum per SM.

The total number of blocks that can be executed concurrently is dependent on the device. In the case of the Fermi architecture discussed earlier, a total of 16 SMs can each concurrently handle 8 blocks, for a total of 128 blocks executing concurrently on the device. Because the Fermi architecture supports compute capability 2.0, we can create thread blocks consisting of at most 1024 threads, so a Fermi device can technically support 131,072 threads residing in the SMs for execution. This does not mean that every clock tick the device is executing 131,072 instructions simultaneously. In order to understand how the blocks are actually executed on the device, we must look one step further to see how the threads of a block are actually scheduled on the SMs.

Thread Scheduling

When a block is assigned to an SM, it is further divided into groups of 32 threads called warps.
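Before moving on to scheduling, here is the divergent-barrier pattern referred to under Thread Synchronization above, as a hedged sketch (on older architectures this classically hangs the block, and it is undefined behavior in any case):

__global__ void divergentBarrier(float *data) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (threadIdx.x % 2 == 0) {
        data[tid] *= 2.0f;
        __syncthreads();      // barrier A: only the even threads can ever reach it
    } else {
        data[tid] += 1.0f;
        __syncthreads();      // barrier B: a different synchronization point for the odd threads
    }
}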
Warp scheduling is different depending on the platform, but if we take a look at the Fermi architecture, we see that a single SM consists of 32 CUDA cores (or streaming processors), two groups of 16 per SM. Each SM in the Fermi architecture (see the Fermi architecture image above) features two warp schedulers, allowing two warps to be issued and executed concurrently. Fermi's dual warp scheduler selects two warps and issues one instruction from each warp to a group of sixteen cores, sixteen load/store units, or four special function units (SFUs). Most instructions can be dual-issued: two integer instructions, two floating point instructions, or a mix of integer, floating point, load, store, and SFU instructions can be issued concurrently.

Fermi - Dual Warp Scheduler

You might be wondering why it would be useful to schedule 8 blocks of up to 1024 threads each if the SM only has 32 SPs. The answer is that each instruction of a kernel may require more than a few clock cycles to execute (for example, an instruction to read from global memory will require multiple clock cycles). Any instruction that requires multiple clock cycles to execute incurs latency. The latency of long-running instructions can be hidden by executing instructions from other warps while waiting for the result of the previous warp. This technique of filling the latency of expensive operations with work from other threads is often called latency hiding.

Thread Divergence

It is reasonable to imagine that your CUDA program contains flow-control statements like if-then-else, switch, while loops, or for loops. Whenever you introduce these flow-control statements in your code, you also introduce the possibility of thread divergence. It is important to be aware of the consequences of thread divergence and also to understand how you can minimize its negative impact. Thread divergence occurs when some threads in a warp follow a different execution path than others. Let's take a branch on the parity of the thread index as an example (see the sketch after this section). Then our flow control and thread divergence would look something like this:

Thread Divergence

As you can see from this example, the even-numbered threads in each block will execute PathA while the odd-numbered threads in the block will execute PathB. This is pretty much the worst-case scenario for a simple divergence example.

PathA and PathB cannot be executed concurrently on all threads because their execution paths are different. Only the threads that execute the exact same execution path can run concurrently, so the total running time of the warp is the sum of the execution times of both PathA and PathB. In this example, the threads in the warp for which the condition is true are activated to execute PathA and all the other threads are deactivated. Then, in another pass, the threads for which the condition is false are activated to execute PathB, and the other threads are deactivated. This means that resolving this condition requires two passes to be executed for a single warp.

The overhead of having the warp execute both PathA and PathB can be eliminated if the programmer takes careful consideration when writing the kernel. If possible, all threads of a block (since warps can't span thread blocks) should execute the same execution path. This way you guarantee that all threads in a warp will execute the same execution path and there will be no thread divergence within a block.
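A minimal sketch of the even/odd branch used in the divergence example above (pathA and pathB are illustrative device functions, not from the original post):

__device__ float pathA(float x) { return x * 2.0f; }   // illustrative work for even threads
__device__ float pathB(float x) { return x + 1.0f; }   // illustrative work for odd threads

__global__ void divergentKernel(float *data) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (threadIdx.x % 2 == 0) {
        data[tid] = pathA(data[tid]);   // even-numbered threads take this path
    } else {
        data[tid] = pathB(data[tid]);   // odd-numbered threads take this path
    }
}

Because every warp contains both even and odd values of threadIdx.x, each warp ends up executing PathA and then PathB serially.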
Exercise

If a device supports compute capability 1.3, then it can have blocks with a maximum of 512 threads/block, and 8 blocks/SM can be scheduled concurrently. Each SM schedules groups of 32-thread units called warps. The maximum number of resident warps per SM in a device that supports compute capability 1.3 is 32, and the maximum number of resident threads per SM is 1024.

Q. What would be the ideal block granularity to compute the product of two 2-D matrices of size 1024 × 1024?

A. To answer this question, let's analyze each choice and give pros and cons for each one.

4×4: If we decide to split our domain into 4×4 thread blocks, then we have 16 threads per block. In order to fully occupy an SM that can support 1024 threads, we would need 1024/16 = 64 blocks, but the SM can only schedule 8 blocks. So each SM would be scheduled with 8 blocks, each having 16 threads, which is 128 threads/SM. When divided into warps, we only have 4 warps scheduled per SM out of a possible 32, which gives only 12.5% occupancy.

8×8: We have the same problem here as we did with the 4×4 thread block granularity, except not as severe. With 8×8 thread blocks, we get 64 threads per block. For an SM that can support 1024 threads, we would need 1024/64 = 16 blocks, but since we are limited to a maximum of 8 blocks per SM, we can only execute 8×64 = 512 threads/SM. When split into warps of 32 threads each, we get 512/32 = 16 warps scheduled per SM from a possible total of 32 warps. This gives only 50% occupancy.

16×16: A 16×16 thread block gives 256 threads/block. With a maximum thread limit per SM of 1024, we get 1024/256 = 4 blocks/SM. This is within the 8-block limit, so 4 blocks, each of 256 threads, can be scheduled on one SM. With 4 blocks of 256 threads each, we get a total of 1024 threads. The threads are further split into warps of 32 threads each for a total of 32 warps. Since the device can support 32 warps/SM, we have achieved 100% occupancy.

32×32: This option is not even an option, since a 32×32 thread block produces a single block with 1024 threads. As stated earlier, we are limited to 512 threads per block with compute capability 1.3, so our kernel wouldn't even run.

So the best choice for this problem domain would be to invoke a kernel with a block size of 16×16.

Conclusion

In this article, I discussed the architecture of a CUDA-enabled GPU, in particular the Fermi architecture. I also showed how a kernel function is scheduled on the GPU and how the warp scheduler executes instructions from different warps in order to minimize the amount of noticeable latency between kernel instructions.

2 thoughts on "CUDA Thread Execution Model"

I have a question about this exercise. There is a matrix dim of 1024 and 1024 threads per SM. Where do you use the dim value and where the thread count in these calculations? It looks like the dim value is never used… Btw. this blog is awesome 🙂

The size of the matrix is not really important when you are trying to determine thread occupancy. The only requirement is that the matrix should be large enough to keep all the SMs busy. The 1024×1024 matrix size ensures that we have enough threads to process. The point of the exercise is to show that if you make the block sizes too small, you will not be able to maintain full occupancy on the GPU, and if you make the block sizes too big, it won't even run.
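To replay the arithmetic from the exercise (and from the comment thread above), here is a small hedged helper that applies the compute capability 1.3 limits: at most 512 threads per block, 8 blocks per SM, 1024 resident threads and 32 resident warps per SM.

int occupancyPercent(int threadsPerBlock) {
    if (threadsPerBlock > 512) return 0;          // the block would not launch at all
    int blocksPerSM = 1024 / threadsPerBlock;     // limited first by the resident-thread budget...
    if (blocksPerSM > 8) blocksPerSM = 8;         // ...then by the 8-blocks-per-SM cap
    int warpsPerSM = (blocksPerSM * threadsPerBlock) / 32;
    return 100 * warpsPerSM / 32;                 // percentage of the 32 resident warps
}
// occupancyPercent(16)   -> 12   (4x4 blocks, i.e. 12.5% occupancy)
// occupancyPercent(64)   -> 50   (8x8 blocks)
// occupancyPercent(256)  -> 100  (16x16 blocks)
// occupancyPercent(1024) -> 0    (32x32 blocks exceed the 512-thread limit)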
Optimize Your First CUDA Code with Best Practices Published on 17 February 2025 by Cătălina Mărcuță & MoldStud Research Team Discover essential best practices to optimize your first CUDA code, enhancing performance and efficiency for new developers in GPU programming. In today's technology-driven landscape, the demand for high-performance computing is ever-increasing. As programmers embark on their journey into parallel programming, understanding the nuances of coding for graphical processing units can be pivotal. This area of development isn't just about writing functional scripts; it's about ensuring that those scripts run as effectively as possible. With the right approach, developers can unlock the full potential of their hardware. Many novice programmers may feel overwhelmed by the complexities involved in harnessing the power of GPUs. Yet, with a strategic mindset, they can streamline their development process and achieve remarkable results. Simple mistakes can often lead to significant performance bottlenecks. Therefore, acquiring essential knowledge before diving in can make all the difference. Consider the importance of memory management. In GPU programming, the way data is handled can drastically affect execution speed. Efficiently utilizing shared and global memory, for instance, can lead to noticeable improvements. Not to mention, understanding how to minimize memory transfers between the CPU and GPU is crucial. Additionally, grasping the concepts of parallelism can significantly enhance program performance. When developers learn to break tasks into smaller chunks that can be processed simultaneously, they harness the true power of modern GPUs. This is where the real magic happens; programmers can take advantage of hundreds or even thousands of processing cores to perform calculations in tandem. In this article, we will delve into various techniques and guidelines that can aid in achieving stellar performance in GPU applications. We will present valuable insights, practical tips, and code examples to illuminate your path. Remember, understanding the tools at your disposal is key to becoming a proficient programmer in this exciting field. Optimizing Your First CUDA Code Writing efficient parallel algorithms can significantly impact application performance. Every detail in your implementation can play a role in maximizing execution speed. Even small optimizations can result in noticeable improvements. Understanding memory access patterns and kernel execution is crucial. Consider the following strategies to enhance performance: Incorporating these tips can lead to substantial enhancements in computational performance, as many developers discover through rigorous testing and profiling of their applications. For instance, using shared memory efficiently can sometimes reduce access times by several orders of magnitude. The following example illustrates the use of shared memory: Testing various configurations and using profiling tools can reveal performance bottlenecks. Tools such as NVIDIA Visual Profiler can provide insights into your kernel's execution characteristics. Analyzing memory access patterns and understanding occupancy levels are vital steps toward achieving high efficiency. As you advance in your journey, remember that connecting with experienced developers can provide invaluable insights.
Consider collaborating with professionals, such as those you can hire vaadin developers, to enhance your knowledge and improve your coding practices. Fundamentals of CUDA Programming Understanding the basic principles of parallel programming is crucial. Harnessing the power of GPUs can significantly enhance performance. This section delves into the core concepts that underpin GPU programming. Starting with architecture, it's vital to grasp how threads, blocks, and grids work together. Each element plays a unique role in executing tasks efficiently and concurrently. Threads are the smallest units of execution. They are grouped into blocks, which are then organized into grids. This hierarchical structure allows for scalable parallelism. Knowing how to structure these components can lead to better performance. For instance, when launching a kernel, determining the appropriate number of threads and blocks is essential. Memory management is another critical area to explore. GPUs have different types of memory, such as global, shared, and registers. Each type offers varying access speeds and storage capacities. Effective memory use can drastically affect processing times. To illustrate, consider the following memory types and their characteristics: Additionally, understanding how to manage data transfers between the host and GPU is fundamental for performance tuning. Often, the bottleneck lies in data transfer times rather than computation. Therefore, minimizing these transfers can lead to more efficient execution. By leveraging asynchronous memory copies and streams, developers can overlap computation and data transfer, significantly enhancing performance. In conclusion, mastering these fundamental principles sets the stage for developing efficient applications. Familiarity with thread management, memory types, and data transfers is essential. Engaging with these concepts ensures a solid foundation for more advanced techniques. As programmers become more adept, they'll find new ways to exploit the parallel nature of GPUs. By doing so, the benefits of GPU computing become increasingly accessible and impactful. Understanding the CUDA Architecture Having a grasp of the architecture behind parallel computing is crucial. This knowledge enables developers to write more efficient applications. The CUDA framework is designed to facilitate this by utilizing the power of GPUs. Understanding its fundamental components can significantly enhance performance. A well-structured architecture can lead to improved execution speeds and lower latency. The architecture is built around several key elements. These include Streaming Multiprocessors (SMs) and CUDA cores. SMs are responsible for executing threads simultaneously. Each SM contains numerous CUDA cores, which handle individual tasks. The parallel execution model allows for vast acceleration in computation. Furthermore, memory hierarchy plays a vital role. There are various types of memory, including global, shared, and local. Global memory is accessible to all threads but has higher latency. Shared memory, on the other hand, allows threads within the same block to communicate more efficiently. Understanding how to utilize these memory types can lead to significant performance gains. Here’s a quick comparison of the different memory types used in the architecture: Effective implementation of parallel algorithms relies on understanding how to effectively manage these memory types. 
The ability to arrange threads and blocks properly is essential for maximizing resource utilization. Moreover, the synchronizing of thread execution can greatly impact computational efficiency. For developers looking to enhance their project outcomes, knowledge of these architectural elements is indispensable. Consider the comparison of execution times across different platforms. According to recent statistics, applications utilizing GPU acceleration can achieve speeds up to 100 times faster than those using traditional processors. This difference can be the deciding factor in performance-sensitive applications. As you delve deeper into the intricacies of this architecture, it may also be beneficial to seek external expertise. For example, many businesses find it advantageous to hire wordpress plugin developers to enhance their capabilities. By collaborating with experienced professionals, you can unlock the full potential of parallel computing. Memory Hierarchy and Its Importance Understanding the structure of memory in computing systems is crucial for efficient performance. Memory hierarchy significantly affects how data is accessed and processed. Each level of memory offers different speeds, sizes, and costs. By recognizing these differences, one can make informed decisions about data placement and access patterns. At the top of the hierarchy lies the registers, which provide the fastest access speeds. Next, there are shared memory and local memory, both of which play vital roles in managing data during processing. Global memory follows, offering larger storage but with higher latency. The hierarchy continues with device memory and host memory, where communication speed varies greatly. Efficient use of memory can enhance overall performance. For instance, placing frequently accessed data in shared memory reduces access times and improves computational efficiency. Furthermore, minimizing the use of global memory can lead to significant performance gains, especially in data-intensive applications. For example, consider optimizing matrix multiplication, a common operation in many applications. By storing submatrices in shared memory, one can dramatically reduce the number of expensive global memory accesses. This approach can lead to performance improvements, sometimes exceeding three times the speed of naive implementations. Ultimately, grasping memory hierarchy enables developers to write applications that leverage available resources more effectively, paving the way for achieving optimal performance in applications. It's essential to regularly profile memory usage and performance metrics to identify bottlenecks and optimize data flow. In summary, a deep understanding of memory hierarchy is vital for any software engineer aiming to enhance performance in their applications. By employing specific strategies centered on memory usage, developers can ensure efficient data access and significantly lower execution times. Thread Management Strategies Thread management plays a critical role in achieving peak performance when working with parallel processing. Properly handling threads can significantly influence computation speed and resource utilization. A well-structured approach ensures that all hardware capabilities are effectively leveraged. Understanding the nuances of how threads interact with one another and with memory can lead to impressive enhancements in application efficiency. When deploying kernels, maintaining a balanced workload across threads is essential. 
Ensure each thread has roughly equal tasks to prevent bottlenecks. One common method is to utilize a grid-stride loop. It allows threads to execute tasks in a staggered manner, which can lead to better occupancy rates. __global__ void kernelFunction(int *data) { int idx = blockIdx.x * blockDim.x + threadIdx.x; for (int i = idx; i < N; i += blockDim.x * gridDim.x) { data[i] = performComputation(data[i]); } } Adopting the grid-stride technique helps ensure that even if one thread finishes early, others can continue processing, thus maximizing hardware utilization. Furthermore, effectively managing thread divergence is crucial for maintaining consistent performance. Divergence occurs when threads within the same warp take different execution paths, leading to inefficient use of resources. To minimize divergence, it's advisable to structure code in such a way that similar threads take the same execution path whenever possible. Conditional statements should be carefully placed to avoid unnecessary branching. Tools for profiling can help identify divergence hotspots within your application. Eliminate or minimize these to bolster efficiency. Another technique is to experiment with different block sizes and configuration parameters based on the specific hardware being used. For instance, using optimally sized blocks can enhance the occupancy of the streaming multiprocessors (SMs), thus improving overall performance. Testing various configurations can reveal the best balance for your specific application. In summary, effective thread management encompasses various strategies, including workload balancing, minimizing divergence, and optimizing block sizes. Leveraging these techniques will lead to more efficient execution and better resource utilization, driving performance enhancements in parallel processing tasks. Best Practices for Performance Enhancement Enhancing performance in GPU programming requires a thoughtful approach. Simple optimizations can yield significant speed-ups. Fine-tuning algorithms and understanding hardware capabilities is crucial. Certain strategies can maximize throughput while minimizing latency. Let's explore effective methods to elevate performance levels. One fundamental rule is to minimize data transfers between host and device. Memory bandwidth is often the bottleneck in GPU applications. Keeping data on the device as long as possible can lead to substantial speed increases. Batch processing of data can mitigate transfer costs and make operations more efficient. For instance, performing multiple computations before sending data back can help. Another critical element involves leveraging the architecture for better execution. Proper utilization of shared memory can improve access times and reduce global memory traffic, as shared memory is significantly faster. Consider breaking down large operations into smaller tasks that can efficiently use shared memory. A well-structured kernel design can help. For example: Implementing concurrent execution also plays a pivotal role in boosting performance. By overlapping computation and data transfers, applications can make effective use of the GPU’s capabilities. Managing kernels and memory copies to run simultaneously can help utilize resources efficiently. For example, employing streams allows developers to orchestrate multiple tasks. Lastly, always profile and benchmark performance regularly. Identifying bottlenecks through profiling tools gives insight into where improvements can be made. 
Fine-tuning based on real data ensures a targeted enhancement approach. Integrating these strategies will lead to more efficient and high-performing applications. Efficient Memory Usage and Allocation Memory management is critical in any parallel computing environment. Proper allocation and usage can significantly impact performance and resource utilization. It's essential to consider various aspects when dealing with memory in GPU programming. Efficient memory strategies can lead to better throughput and lower execution times. This section covers the key elements of memory handling to maximize efficiency. Understanding the different types of memory available on the GPU is fundamental. Global memory is the most extensive but also the slowest. Shared memory, on the other hand, offers much faster access but is limited in size. Using shared memory effectively can reduce latency significantly, as threads within the same block can access it quickly. In scenarios where data sharing among threads occurs frequently, shared memory becomes invaluable. Additionally, memory allocation strategies play a significant role. Allocating memory during kernel execution can lead to performance penalties. Instead, it's advisable to allocate all necessary memory before launching the kernel. This practice minimizes overhead and helps in maintaining a smoother execution flow. Consider the following example to illustrate effective memory allocation: Here, memory for a floating-point array is allocated on the device before the kernel execution. Preallocation reduces the risk of fragmentation and optimizes memory usage. Moreover, when transferring data between host and device, batch processing can enhance efficiency. Instead of multiple small transfers, larger, consolidated transfers are preferred. This approach can help to leverage the high throughput available on the PCIe bus, reducing the total time spent waiting for memory operations to complete. Implementing memory coalescing techniques can also lead to substantial performance gains. This involves arranging data accesses in a manner that maximizes memory throughput. When consecutive threads access consecutive memory addresses, the GPU can efficiently handle these requests, ensuring better resource use. In conclusion, effective management of memory usage and allocation is crucial for optimal performance in GPU applications. By understanding the different memory types, preallocating resources, batching data transfers, and leveraging coalescing techniques, developers can unlock the full potential of their applications. Every small adjustment can lead to significant performance improvements, making it vital to prioritize these aspects from the outset. Minimizing Kernel Launch Overhead Reducing the time spent on launching kernels is crucial for enhancing performance. Each kernel launch incurs a significant overhead that can hinder the efficiency of applications. By strategically managing these launches, a developer can achieve remarkable gains in execution time. Small kernels can be particularly problematic; their frequent launches can lead to unnecessary delays. To effectively address this concern, consider the following approaches: For instance, by using batch processing, developers can dramatically decrease the frequency of kernel invocations, thus allowing more computation to be completed in each launch. 
Instead of launching a kernel for every single operation, it's often more efficient to aggregate tasks that can be executed together, leading to a noticeable reduction in launch time. __global__ void batchProcessKernel(int* data, int size) { int idx = blockIdx.x * blockDim.x + threadIdx.x; if (idx < size) { // Perform batch operations } } This type of approach can significantly improve overall throughput. By minimizing kernel launches, developers not only save time but also improve resource utilization on the device. Additionally, leveraging asynchronous operations allows for overlapping data transfers with computation, further optimizing the execution workflow. In summary, meticulous planning of kernel launches plays a significant role in achieving superior performance. By employing techniques like batching, asynchronous execution, and kernel fusion, developers can effectively reduce overhead and improve overall application efficiency. Balancing the trade-off between launch frequency and computational workload is essential for maximizing the potential of GPU resources. Utilizing Shared Memory Effectively Shared memory serves as a critical resource in parallel computing environments, allowing threads within the same block to communicate quickly. This high-speed memory space enhances performance by reducing global memory access latency. When used correctly, it can lead to significant improvements in application efficiency. The key lies in understanding how to harness this capability without falling into common pitfalls. Optimizing access patterns is essential, as coalesced access can dramatically benefit performance. Threads in a block can access shared memory much faster than global memory, which is a considerable advantage for performance-critical applications. However, to fully leverage this feature, one needs to ensure proper synchronization between threads. Race conditions can occur if multiple threads attempt to read and write to the same memory locations simultaneously. Implementing synchronization mechanisms such as __syncthreads() helps prevent such issues, ensuring data integrity and coherence. Another aspect to consider is the size of the shared memory. Each block has a limited amount of it, thus careful planning is necessary. Allocating too much shared memory can lead to a reduction in the number of threads that can run concurrently. Therefore, balance is crucial; aim for a size that accommodates necessary data while maximizing occupancy. For example, if a kernel requires storing an intermediate result, consider keeping it in shared memory rather than global memory. Consider the following code snippet, which demonstrates the use of shared memory to optimize a simple vector addition: This example illustrates how shared memory holds intermediate results, significantly reducing the number of accesses to slower global memory. In this case, threads perform computations in shared memory before writing the results, which minimizes the performance impact of memory latency. Performance gains from shared memory utilization can be substantial. Studies indicate that well-optimized applications can see speedups of 2 to 10 times when shared memory is used effectively. However, developers must be aware of the trade-offs involved. Carefully analyzing memory access patterns and thread synchronization will lead to a more responsive and efficient application. 
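A hedged sketch of the kind of shared-memory vector addition referred to above. For a pure streaming operation like this there is no data reuse, so the point is only to show the staging and synchronization mechanics; the 256-thread block size is an assumption of the sketch, not a value from the article.

__global__ void vecAddShared(const float *a, const float *b, float *c, int n) {
    __shared__ float sA[256];                 // assumes blockDim.x == 256
    __shared__ float sB[256];
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        sA[threadIdx.x] = a[idx];             // stage the operands in shared memory
        sB[threadIdx.x] = b[idx];
    }
    __syncthreads();                          // ensure the tile is fully staged before use
    if (idx < n) {
        c[idx] = sA[threadIdx.x] + sB[threadIdx.x];   // compute from the fast copies
    }
}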
To sum up, harnessing shared memory effectively requires not only an understanding of its capabilities but also a commitment to optimizing access patterns and synchronization. As you become more familiar with these nuances, you will likely find that the potential for performance improvement is vast. Profiling Tools for Performance Analysis Understanding the performance of applications is crucial in a high-performance computing environment. Proper insights into execution speed, resource utilization, and bottlenecks can lead to significant improvements. Profiling tools provide the necessary feedback to enhance performance. They help in identifying inefficient areas and optimizing resource allocation. There are various tools available, each offering unique features and benefits. Here are some of the most popular options: Using these tools effectively can lead to a better understanding of application performance. For instance, Nsight Compute provides metrics on memory bandwidth, instruction efficiency, and occupancy rates. These statistics are vital for pinpointing where the application may be lacking. Additionally, many tools support both CPU and GPU profiling, enabling developers to view interactions and performance across the entire application. Utilizing a combination of these tools can yield a holistic view of performance, making it easier to prioritize optimizations and streamline development workflows. By integrating these profiling tools into the development cycle, a developer can significantly reduce debugging time and improve overall software quality. Effective use of profiling tools can elevate performance to new heights, ensuring that applications run efficiently. In conclusion, leveraging profiling tools is not just about improvement; it's about understanding the complete picture of performance. When teams focus on data-driven development, the results can speak for themselves. For those looking to enhance their development processes, tools and techniques are invaluable assets.
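As a compact, hedged illustration of several of the practices discussed in this article (preallocating device memory, batching work into large transfers, and overlapping transfers with computation via streams), the following sketch uses two CUDA streams; myKernel, h_data and the chunk sizes are illustrative names rather than code from the article:

const int chunkElems = 1 << 20;
const size_t chunkBytes = chunkElems * sizeof(float);

float *h_data, *d_data;
cudaMallocHost(&h_data, 2 * chunkBytes);   // pinned host memory, required for truly asynchronous copies
cudaMalloc(&d_data, 2 * chunkBytes);       // preallocated once, reused for both chunks

cudaStream_t s0, s1;
cudaStreamCreate(&s0);
cudaStreamCreate(&s1);

int block = 256;
int grid  = (chunkElems + block - 1) / block;

// Chunk 0 copies and computes on stream s0 while chunk 1 does the same on stream s1,
// so the copy engine and the SMs can be busy at the same time.
cudaMemcpyAsync(d_data, h_data, chunkBytes, cudaMemcpyHostToDevice, s0);
myKernel<<<grid, block, 0, s0>>>(d_data, chunkElems);
cudaMemcpyAsync(d_data + chunkElems, h_data + chunkElems, chunkBytes, cudaMemcpyHostToDevice, s1);
myKernel<<<grid, block, 0, s1>>>(d_data + chunkElems, chunkElems);

cudaStreamSynchronize(s0);
cudaStreamSynchronize(s1);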
Parallel Code: Maximizing your Performance Potential justin.p.mckennon · No matter what the purpose of your application is, one thing is certain. You want to get the most bang for your buck. You see research papers being published and presented making claims of tremendous speed increases by running algorithms on the GPU (e.g. NVIDIA Tesla), in a cluster, or on a hardware accelerator (such as the Xeon Phi or Cell BE). These architectures allow for massively parallel execution of code that, if done properly, can yield lofty performance gains. Unlike most aspects of programming, the actual writing of the programs is (relatively) simple. Most hardware accelerators support (or are very similar to) C based programming languages. This makes hitting the ground running with parallel coding an actually doable task. While mastering the development of massively parallel code is an entirely different matter, with a basic understanding of the principles behind efficient, parallel code, one can obtain substantial performance increases compared to traditional programming and serial execution of the same algorithms. In order to ensure that you’re getting the most bang for your buck in terms of performance increases, you need to be aware of the bottlenecks associated with coprocessor/GPU programming. Fortunately for you, I’m here to make this an easier task. By simply avoiding these programming “No-No’s” you can optimize the performance of your algorithm without having to spend hundreds of hours learning about every nook and cranny of the architecture of your choice. This series will discuss and demystify these performance-robbing bottlenecks, and provide simple ways to make these a non-factor in your application. Parallel Thread Management – Topic #1 First and foremost, the most important thing with regard to parallel programming is the proper management of threads. Threads are the smallest sequence of programmed instructions that are able to be utilized by an operating system scheduler. Your application’s threads must be kept busy (not waiting) and non-divergent. Properly scheduling and directing threads is imperative to avoid wasting precious computing time. Read: CUDA Parallel Thread Management, Divergence and Profiling Host/Device Transfers and Data Movement – Topic #2 Transferring data between the host and device is a very costly move. It is not uncommon to have code making multiple transactions between the host and device without the programmer’s knowledge. Cleverly structuring code can save tons of processing time! On top of that, it is imperative to understand the cost of these host device transfers. In some cases, it may be more beneficial to run certain algorithms or pieces of code on the host due to the costly transfer time associated with farming data to the device. Read: Profile CUDA Host-to-Device Transfers and Data Movement Read: Optimize CUDA Host-to-Device Transfers Cache and Shared Memory Optimizations – Topic #3 In addition to managing the threads running in your application, properly utilizing the various memory types available on your device is paramount to ensuring that you’re squeezing every drop of performance from your application. Shared memory, local memory, and register memory all have their advantages and disadvantages and need to be used very carefully to avoid wasting valuable clock cycles. 
Phenomena such as bank conflicts, memory spilling (too much data being placed in registers and spilling into local memory), improper loop unrolling, as well as the amount of shared memory, all play pivotal roles in obtaining the greatest performance. Read: GPU Memory Types and Memory Performance Comparison Read: GPU Shared Memory Performance Optimization Read: Avoiding GPU Memory Performance Bottlenecks More to come… All in all, utilizing devices like the NVIDIA GPU, Cell BE or Intel Xeon Phi to increase the performance of your application doesn't have to be a daunting task. Over the next several posts, this blog will outline and identify effective techniques to make troubleshooting the performance leaks of your application an easy matter. Each of these common bottlenecks will be discussed in detail in an effort to provide programmers insight into how to make use of all the resources that many popular architectures provide.
CUDA Parallel Thread Management justin.p.mckennon · This post is Topic #1 in our series Parallel Code: Maximizing your Performance Potential. Regardless of the environment or architecture you are using, one thing is certain: you must properly manage the threads running in your application to optimize performance. This post will discuss how to get the most out of your threads in a CUDA application. CUDA Threads CUDA threads utilize block and thread IDs to determine what data to compute. Block IDs can be 1D or 2D. Thread IDs can be 1D, 2D, or 3D. Utilizing multidimensional threads and blocks greatly simplifies memory addressing when performing operations on multidimensional data (a very common occurrence in image processing, for example). You, the programmer, declare the size of the block (between 1 and 512 concurrent threads), the number of dimensions (1D, 2D, 3D) of the block, and the block dimensions in threads. In each block, all of the threads are capable of sharing data and synchronizing. The image below depicts the CUDA grid/block/thread structure. So, assuming you’ve got your kernel up and running, how do you properly manipulate the threads running in your application? For starters, declaring the proper block/grid size (and dimension) is paramount. The appropriate size for these parameters is hardware- and device-dependent and must be fiddled with through trial and error. There’s not really a “General Rule” for determining the values for these parameters outside of really knowing the data in your application and the limitations of your hardware. Let’s say for now, that your block and grid sizes are sufficient. With appropriate block and grid sizes/dimensions, there are two keys to optimizing your application’s performance: thread communications and thread paths. Thread Communications When the threads (in the same block) in your application need to communicate or share data, there are two methods that CUDA provides: shared memory and __syncthreads().  The __syncthreads() command effectively creates a barrier in the code execution where all of the threads in a given block will wait until all of the threads in that block have reached the synchronization point. This is especially useful for ensuring that computation data is written to memory before other threads read it. Improper use of this command, however, can create deadlock conditions and cause your application to hang. Deadlocks are literally a show stopper since they will cause your application to stop dead in its tracks. Maximizing the use of shared memory will be discussed in much greater detail in a later post. Effectively utilizing shared memory is an absolute necessity for a high-performance application. Shared memory is hundreds of times faster than global memory. A common method of scheduling computations on a device that maximizes the use of shared memory is, at a high level, relatively simple: Structuring your code in this fashion will pay great dividends. Thread Paths The other aspect of managing threads is controlling the paths of your threads. In nearly every application, it is almost impossible to structure code without branches (e.g. if/else conditions). Threads in the same block that execute different pieces of code (different execution paths) as a result of branch conditions are said to be divergent. When threads within the same block have different execution paths, they must be serialized. 
Since all threads in a block always run the same code, if any thread executes the code inside the IF condition (or if-then-else, for loops, etc), all of the threads in that same warp (a group of 32 threads) will go through that section of code. This occurs even if they are not actually executing (when the branch condition is not met)! If half of the threads in a given warp evaluate a branch condition as true, the utilization of the execution units is only 50%, meaning that half of the threads are effectively DOING NOTHING! The actual performance impact depends on the size and frequency of these divergent branch conditions. Divergence can be avoided when a branch condition is a function of the thread ID. An example of code that would likely produce divergence: if(threadIdx.x > 4) { //your code } This divergence is a result of the branch granularity being less than the warp size. By making the branch granularity a whole multiple of the warp size (instead of less than the warp size), this divergence can be completely eliminated: if(threadIdx.x/WARP_SIZE > 4) { //your code } Optimizing in the Real World I know what you're thinking: "All of this is great information, Justin, but how can I check my code for deadlocks and divergent branches?" Easy – step through every line of code in the entire application with a fine toothed comb. Well, that doesn't sound easy. Fortunately, the NVIDIA CUDA profiler provides a very robust means for identifying these problems in your code. There are visual and text-based versions of the profiler – I'll be discussing the text version. From the command line, the values of four environmental variables can be set: The CUDA profiler only supports four types of events being profiled at a time. Later posts will discuss the other event types of the profiler, but with regards to managing threads, a few event types are essential to profile: With these set, the profiler will output the number of branches and divergent branches that are encountered when executing the application, which provides invaluable insight as to which portions of code are degrading performance. Using this information, you can tell if any of the branch conditions are causing threads to diverge. In addition to the events that were chosen to be profiled, the profiler can also output the total execution time on both the CPU and GPU for the application/kernel, which can be used to gauge performance when tweaking code. Additional functions of the CUDA profiler will be discussed throughout the next several posts. More information on the NVIDIA CUDA profiler, including information about the visual profiler, can be found in the Profiler User's Guide: https://docs.nvidia.com/cuda/profiler-users-guide/index.html
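The shared-memory scheduling pattern referred to earlier in this post (load a tile from global memory into shared memory, synchronize, compute on the fast copy, then write the result back to global memory) can be sketched as a per-block reduction. This is a hedged example; blockSum is an illustrative name and the 256-thread block size is an assumption.

__global__ void blockSum(const float *in, float *blockResults, int n) {
    __shared__ float tile[256];                       // one element per thread (blockDim.x == 256)
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;   // 1) load from global memory into shared memory
    __syncthreads();                                  // 2) wait until the whole tile is resident
    // 3) compute on the shared copy: tree reduction within the block
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) {
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        }
        __syncthreads();                              // every thread reaches the barrier each pass
    }
    if (threadIdx.x == 0) {
        blockResults[blockIdx.x] = tile[0];           // 4) write the block's result back to global memory
    }
}

Each global element is read exactly once, all intermediate traffic stays in shared memory, and __syncthreads() guarantees the tile is fully populated before any thread reads another thread's value.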
CUDA Host/Device Transfers and Data Movement

justin.p.mckennon · This post is Topic #2 (part 1) in our series Parallel Code: Maximizing your Performance Potential.

In post #1, I discussed a few ways to optimize the performance of your application by controlling your threads, and provided some insight into how to fix some possible thread-related issues. In this post and the following one, I will discuss another possible major performance bottleneck: host/device transfers and data movement.

Profiling Your CUDA Code for Timing Data

A standard CUDA application typically performs these steps: 1) allocate memory on the host and device, 2) transfer input data from the host to the device, 3) launch one or more kernels, and 4) transfer the results from the device back to the host. Steps 2 and 4 are an absolute necessity in every CUDA application, but they are also HUGE performance robbers. These transfers are the slowest portion of data movement involved in any aspect of GPU computing. The actual transfer speed (bandwidth) depends on the type of hardware you're using, but regardless of that, it is still the slowest. The example code below illustrates this point:

int main() {
    const unsigned int X = 1048576;              // 1M elements
    const unsigned int bytes = X * sizeof(int);
    int *hostArray = (int*)malloc(bytes);
    int *deviceArray;
    cudaMalloc((void**)&deviceArray, bytes);
    memset(hostArray, 0, bytes);
    cudaMemcpy(deviceArray, hostArray, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(hostArray, deviceArray, bytes, cudaMemcpyDeviceToHost);
    cudaFree(deviceArray);
}

In this example, no operations are run on the device; the data is simply copied from the host to the device and back. I've named this program profilerExample.cu. To profile this code, it simply needs to be compiled with nvcc and then run with nvprof (nvprof is new in CUDA 5; the older command-line profiler can still be used in earlier versions of CUDA):

$ nvcc profilerExample.cu -o profileExample
$ nvprof ./profileExample
======== NVPROF is profiling profileExample.out...
======== Command: profileExample.o
======== Profiling result:
Time(%)      Time   Calls       Avg       Min       Max  Name
  50.08  718.11us      1  718.11us  718.11us  718.11us  [CUDA memcpy DtoH]
  49.92  715.94us      1  715.94us  715.94us  715.94us  [CUDA memcpy HtoD]

On my desktop I run a GTX 680 graphics card. As you can see from the above results, a simple copy operation to/from the GPU takes in excess of 715 microseconds each way (a lifetime in terms of computation time). In complex applications with larger amounts of data going back and forth between the host and device many times, this wastes a significant amount of time.

Alternative Profiling Options Using Timers

In addition to the nvprof profiler, any CPU timer can be used to measure the elapsed time of a CUDA call or kernel execution. It is important to note that many CUDA functions are asynchronous: they return control to the calling CPU thread before the work they launch has completed. If you're using a CPU timer to measure the timing of a portion (or all) of your application, you must synchronize the CPU thread with the device by calling cudaDeviceSynchronize() immediately before starting and stopping the CPU timer. cudaDeviceSynchronize() blocks the CPU thread until all the CUDA calls issued by that thread have completed. CUDA also provides its own method for timing using events.
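Before turning to events, here is what the CPU-timer approach described above might look like in practice. This is a sketch: myKernel, its launch configuration, and the use of std::chrono are illustrative assumptions, not code from the original post.

#include <chrono>
#include <cstdio>

// Placeholder kernel so the example is self-contained.
__global__ void myKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

void timeKernelWithCpuTimer(float *deviceData, int n)
{
    // Drain any previously issued asynchronous work before starting the timer.
    cudaDeviceSynchronize();
    auto start = std::chrono::high_resolution_clock::now();

    myKernel<<<(n + 255) / 256, 256>>>(deviceData, n);

    // The launch returns immediately; block until the kernel has finished.
    cudaDeviceSynchronize();
    auto stop = std::chrono::high_resolution_clock::now();

    double ms = std::chrono::duration<double, std::milli>(stop - start).count();
    printf("kernel time: %.3f ms\n", ms);
}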
The following example code snippet illustrates how to use the CUDA event timers to profile your code:

cudaEvent_t startTime, stopTime;
float time;
cudaEventCreate(&startTime);
cudaEventCreate(&stopTime);
cudaEventRecord(startTime, 0);
kernel<<<gridDimensions, numberOfThreads>>>(dataOut, dataIn, size_x, size_y, NUM_REPS);
cudaEventRecord(stopTime, 0);
cudaEventSynchronize(stopTime);
cudaEventElapsedTime(&time, startTime, stopTime);
cudaEventDestroy(startTime);
cudaEventDestroy(stopTime);

In this example, the cudaEventRecord() calls place the startTime and stopTime events into the default execution stream, 0. The device records a timestamp for an event when it reaches that event in the execution stream. cudaEventElapsedTime() then returns the time in milliseconds (with roughly 0.5 µs resolution) between the two events.

Importance of Data Transfers in CUDA Applications

Analyzing these timing results can be hugely beneficial in determining which portions of your application are the most expensive in terms of time. While a number of factors can make one portion of code more expensive than another, a good way to increase the performance of your application is to minimize host/device transfers. The peak theoretical bandwidth between device memory and the device processor is significantly higher than the peak theoretical bandwidth between host memory and device memory. Therefore, to get the most bang for your buck, you really need to minimize these host<->device data transfers. Many programmers are unaware of the high overhead associated with these transfers; by intelligently reducing or eliminating them, you can see very large gains in performance. Try a 'before and after' test with your code: if you have multiple transfers occurring throughout your application, try reducing that number and observe the results. The next post in this series will identify effective ways to optimize your code and avoid numerous transfers between the host and device. Utilizing pinned/mapped memory, asynchronous transfers, and overlapping transfers with computation can yield lofty performance gains if your application performs many host/device transfers. More information about nvprof can be found in NVIDIA's Developer Zone: CUDA Toolkit Documentation – Profiler User's Guide
Optimize CUDA Host/Device Transfers

justin.p.mckennon · This post is Topic #2 (part 2) in our series Parallel Code: Maximizing your Performance Potential.

In my previous post, CUDA Host/Device Transfers and Data Movement, I provided an introduction to the bottlenecks associated with host/device transfers and data movement. This post will delve a bit further into the subject and provide a few nifty ways to mitigate these very costly operations.

Every useful CUDA application has at the very least one host-to-device transfer and one device-to-host transfer, and more complicated applications often have many transfers between the host and device. In CUDA programming, these are among the most expensive operations in terms of time. So, if host/device data transfers are so costly, how do you avoid them? Well, you can't. But what you can do is minimize the number of transfers between host and device in your application, and mask their impact on its performance.

First, any intermediate data structures that are used only within your kernels should be allocated and destroyed solely on the device. This removes the need to map these structures to host memory and to transfer the data between the host and device. If your application has multiple host/device transfers, every effort should be made to batch them into one large transfer. I like to think of this as carrying groceries: why make multiple trips out to the car when you can load up your arms and do it all at once? Most GPUs support transfer speeds between 5GB/sec and 11GB/sec.

For situations where there is no way around transferring data between host and device, more advanced techniques can be employed to lessen the impact: pinned (also known as page-locked, or mapped) memory and asynchronous transfers.

Pinned Memory

The cudaHostAlloc() function allocates page-locked host memory, also called pinned memory, which the device can read from and write to directly. Pinned memory transfers attain the highest bandwidth between the host and device. During execution, a block that requires host data only needs to wait for a small portion of the data to be transferred (when operating through pinned memory); typical host-to-device copies make all blocks wait until all of the data associated with the copy operation has been transferred. Keep in mind, however, that pinning too much memory can degrade overall system performance by reducing the amount of memory available for paging. How much memory you can safely pin differs from system to system, so definitely experiment to find the optimal amount.

Asynchronous Transfers

Standard host/device transfers are known as blocking transfers: control is returned to the main thread only after the data transfer is complete. The cudaMemcpyAsync() function is effectively a non-blocking version of the standard cudaMemcpy(). When executing an asynchronous transfer via cudaMemcpyAsync(), control is returned immediately to the main thread. If you're not jumping up and down with excitement after hearing that, you should be! Asynchronous transfers require pinned memory and make use of CUDA streams. In CUDA, streams are essentially sequences of operations that are performed in order on the device. Creating multiple streams is a bit more of an advanced CUDA technique, but one that must be learned if you want the most bang for your buck.
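Before getting to the streams example, here is a minimal sketch of the pinned-memory variant of the copy benchmark from the previous post. The array size and variable names are illustrative; the only real change is that cudaHostAlloc()/cudaFreeHost() replace malloc()/free():

#include <cstring>
#include <cstdio>

int main()
{
    const unsigned int X = 1048576;              // 1M ints
    const unsigned int bytes = X * sizeof(int);

    int *pinnedHostArray;
    int *deviceArray;

    // Allocate page-locked (pinned) host memory instead of using malloc().
    cudaHostAlloc((void**)&pinnedHostArray, bytes, cudaHostAllocDefault);
    cudaMalloc((void**)&deviceArray, bytes);

    memset(pinnedHostArray, 0, bytes);

    // The copies themselves are unchanged, but because the host buffer is pinned,
    // the driver can DMA directly and reach higher bandwidth.
    cudaMemcpy(deviceArray, pinnedHostArray, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(pinnedHostArray, deviceArray, bytes, cudaMemcpyDeviceToHost);

    cudaFree(deviceArray);
    cudaFreeHost(pinnedHostArray);               // pinned memory has its own free call
    return 0;
}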
With multiple streams in a single application, operations in separate streams can be overlapped, providing a great way to mask the host/device transfer time. Let's look at an example of how using multiple streams can benefit you and your application:

cudaMemcpyAsync(deviceArray, hostArray, size, cudaMemcpyHostToDevice, 0);
kernel<<<gridSize, blockSize>>>(deviceArray);   // launch configuration elided
// your code

Here, both the transfer and the kernel use the default stream, 0. During execution, the kernel will not be launched until the entire copy operation is complete and control has been returned to the main thread, because both the kernel and the memory copy are part of the same stream. Now, let's look at the code using multiple streams:

cudaStream_t mystream1, mystream2;
cudaStreamCreate(&mystream1);
cudaStreamCreate(&mystream2);
cudaMemcpyAsync(deviceArray, hostArray, size, cudaMemcpyHostToDevice, mystream1);
kernel<<<gridSize, blockSize, 0, mystream2>>>(otherDataArray);
// your code

By defining two new streams, we are able to make use of concurrent copy and compute: the memory copy executes in one stream while the kernel runs in another, asynchronously from one another. An important note: make sure that your device supports concurrent copy and execute before you rely on this in all of your code. This can be checked via the deviceOverlap field of the cudaDeviceProp structure.

While this is an advanced technique, if your data can be broken into chunks and transferred in stages, you can launch multiple kernel instances to operate on each chunk of data as it arrives on the device. Doing so will almost completely mask the transfer time between the host and device.

So, armed with the knowledge of streams, asynchronous transfers, and pinned memory, you now have some insight into how to squeeze more performance out of your application. My next post will discuss how to efficiently make use of the memory types available within your GPU application.
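As a hedged sketch of the chunked, overlapped pattern described above: the helper below alternates two streams so that the copy of one chunk can overlap the kernel working on another. The function and kernel names are hypothetical; it assumes pinned host buffers, device buffers of the same size, a device with concurrent copy and execute, and an element count that divides evenly into chunks.

__global__ void processChunk(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * 2.0f;   // stand-in for real work
}

// pinnedIn/pinnedOut must be pinned host pointers (cudaHostAlloc),
// devIn/devOut device pointers of the same size; n must divide evenly.
void overlappedCopyAndCompute(const float *pinnedIn, float *pinnedOut,
                              float *devIn, float *devOut, int n)
{
    const int NCHUNKS = 4;
    const int chunkElems = n / NCHUNKS;
    const size_t chunkBytes = chunkElems * sizeof(float);

    cudaStream_t streams[2];
    cudaStreamCreate(&streams[0]);
    cudaStreamCreate(&streams[1]);

    for (int c = 0; c < NCHUNKS; ++c) {
        cudaStream_t s = streams[c % 2];
        int offset = c * chunkElems;

        // Copy chunk c in, process it, and copy it back, all in stream s.
        // Work queued in the other stream can overlap with these operations.
        cudaMemcpyAsync(devIn + offset, pinnedIn + offset, chunkBytes,
                        cudaMemcpyHostToDevice, s);
        processChunk<<<(chunkElems + 255) / 256, 256, 0, s>>>(
            devIn + offset, devOut + offset, chunkElems);
        cudaMemcpyAsync(pinnedOut + offset, devOut + offset, chunkBytes,
                        cudaMemcpyDeviceToHost, s);
    }

    cudaStreamSynchronize(streams[0]);
    cudaStreamSynchronize(streams[1]);
    cudaStreamDestroy(streams[0]);
    cudaStreamDestroy(streams[1]);
}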
GPU Memory Types – Performance Comparison

justin.p.mckennon · This post is Topic #3 (part 1) in our series Parallel Code: Maximizing your Performance Potential.

CUDA devices have several different memory spaces: global, local, texture, constant, shared and register memory. Each type of memory on the device has its advantages and disadvantages. Incorrectly using the available memory in your application can rob you of the performance you desire. With so many different types of memory, how can you be certain you're using the correct type? It is no easy task.

In terms of speed, if all the various types of device memory were to race, the finishing order would roughly be: register memory first, followed by shared memory, then constant and texture memory (thanks to their caches), with local and global memory coming in last. Looking at that ordering, it would seem that for the best performance we'd only want to use the register file, shared memory, and constant memory. In a simple world I'd agree with that statement. However, there are many more factors associated with choosing the best form of memory for various portions of your application.

Memory Features

The only two types of memory that actually reside on the GPU chip are register and shared memory; local, global, constant, and texture memory all reside off chip. Local, constant, and texture accesses are cached. While it would seem that the fastest memory is always the best choice, the other two characteristics that dictate how a type of memory should be used are its scope and lifetime: registers and local memory are private to a single thread and live only as long as that thread; shared memory is visible to all threads in a block and lives for the lifetime of the block; global, constant, and texture memory are visible to all threads and persist for the lifetime of the application (or until freed).

How to Choose Memory Type

Knowing how and when to use each type of memory goes a long way towards optimizing the performance of your application. More often than not, it is best to make use of shared memory, because threads within the same block can communicate through it. Combined with its excellent performance, this makes shared memory a good 'all around' choice when used properly. In some cases, however, it may be better to use the other types of available memory.

Shared Memory

A common problem arises when memory is shared: with the same memory available to all threads, many threads will access the data simultaneously. To alleviate this potential bottleneck, shared memory is divided into 32 logical banks, with successive sections of memory assigned to successive banks (see Figure 1). When there are no bank conflicts present, shared memory performance is comparable to register memory. Use it properly and shared memory will be lightning fast.

Register Memory

In most cases, accessing a register consumes zero extra clock cycles per instruction. However, delays can occur due to read-after-write dependencies and bank conflicts. The latency of a read-after-write dependency is roughly 24 clock cycles. For newer CUDA devices that have 32 cores per multiprocessor, it may take up to 768 threads to completely hide this latency. In addition to the read-after-write latency, register pressure can severely detract from the performance of the application. Register pressure occurs when there are not enough registers available for a given task; when this happens, the data is "spilled over" into local memory. See the following posts for further details.

Local Memory

Local memory is not a physical type of memory, but an abstraction of global memory. Its scope is local to the thread and it resides off-chip, which makes it as expensive to access as global memory. Local memory is used only to hold automatic variables. The compiler makes use of local memory when it determines that there is not enough register space to hold the variable.
Automatic variables that are large structures or arrays are also typically placed in local memory.

Recommendation

All in all, for most applications my recommendation is definitely to make use of shared memory wherever possible. It is the most versatile and easy-to-use type of memory. Shared memory allows communication between the threads of a block, which can make optimizing code much easier for beginner to intermediate programmers. The other types of memory all have their place in CUDA applications, but for the general case, shared memory is the way to go.

Conclusion

Now that you know a little bit about each of the various types of memory available in your GPU applications, you're ready to learn how to use them efficiently. The next post will discuss how you can optimize the use of the various types of memory throughout your application.
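As a quick illustration of where these memory spaces show up in code, here is a sketch of a kernel touching each of them. The names and sizes are hypothetical, and the oversized per-thread array is there only to show the kind of automatic variable that typically ends up in local memory:

// Illustrative only: where each memory space appears in a kernel.
__constant__ float coefficients[16];      // constant memory: read-only, cached,
                                          // visible to every thread

__global__ void memorySpacesExample(const float *globalIn,   // global memory
                                    float *globalOut, int n)
{
    __shared__ float tile[256];           // shared memory: per block, on-chip
                                          // (assumes blockDim.x <= 256)

    float scale = 2.0f;                   // scalar locals usually live in registers
    float bigScratch[1000];               // a large per-thread array like this is
                                          // likely to be placed in local memory

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? globalIn[i] : 0.0f;
    __syncthreads();

    if (i < n) {
        bigScratch[threadIdx.x % 1000] = tile[threadIdx.x];
        globalOut[i] = bigScratch[threadIdx.x % 1000] * scale * coefficients[0];
    }
}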
GPU Shared Memory Performance Optimization

justin.p.mckennon · This post is Topic #3 (post 2) in our series Parallel Code: Maximizing your Performance Potential.

In my previous post, I provided an introduction to the various types of memory available for use in a CUDA application. Now that you're familiar with these types of memory, the more important topic can be addressed: accessing the memory.

Think for a moment: global memory is up to 150x slower than some of the other types of device memory available. If you could reduce the number of global memory accesses needed by your application, you'd realize a significant performance increase (especially if your application performs the same operations in a loop, or things of that nature). The easiest way to obtain this performance gain is to coalesce your accesses to global memory. The number of global memory transactions issued for the threads of a given warp is equal to the number of cache lines needed to service all of the threads of that warp.

So how do you coalesce your accesses, you ask? There are many ways. The simplest is to have the N-th thread in a warp access the N-th word in a cache line. If the threads in a warp are accessing adjacent 4-byte words (float, for example), a single cache line (and therefore a single coalesced transaction) will service that memory access. Even if some words of the cache line are not requested by any thread in the warp (e.g., several of the threads access the same word, or some of the threads don't participate in the access), all data in the cache line is fetched anyway. This results in a single global memory access (see Figure 1). If sequential threads in a warp access sequential memory locations but the locations are not aligned with the cache lines (overlapping), two 128-byte (L1) cache lines are requested, fetching 128 bytes of additional memory that is not needed (see the red blocks in Figure 2). Fortunately, memory allocated via cudaMalloc() is guaranteed to be aligned to at least 256 bytes. Choosing intelligent thread block sizes (typically multiples of the warp size) facilitates memory accesses by warps that are aligned to cache lines, which means fewer memory transactions are needed. Let your mind wander for a moment as to what would happen to the memory locations accessed by the 2nd, 3rd, 4th, etc. thread blocks if the thread block size were not a multiple of the warp size. Not good.

So what happens if your memory accesses are misaligned? Let's take a look. Below is a simple kernel that demonstrates aligned and misaligned accesses:

__global__ void misalignedCopy(float *outputData, float *inputData, int offset)
{
    int xid = blockIdx.x * blockDim.x + threadIdx.x + offset;
    outputData[xid] = inputData[xid];
}

In the code example above, data is copied from the array inputData to the array outputData. Both arrays exist in global memory. The kernel is executed within a loop in host code that varies the offset between 0 and 32. Global memory accesses with offset 0, or with offsets that are multiples of 32 words, result in a single cache line transaction per warp. When the offset is not a multiple of 32 words, two L1 cache lines are loaded per warp, which results in roughly 80% of the memory throughput achieved in the case with no offsets. Another technique, similar to coalescing, is known as striding. Strided memory accesses will be discussed in the next post.
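The host-side driver that sweeps the offset, described above, might look roughly like this. It is a sketch: the timing with CUDA events and the launch configuration are assumptions, while misalignedCopy is the kernel shown earlier.

#include <cstdio>

// Sweep the offset from 0 to 32 and time each misalignedCopy launch.
void sweepOffsets(float *devOut, float *devIn, int n)
{
    const int threads = 256;
    const int blocks = (n - 33 + threads - 1) / threads;  // leave room for the offset

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (int offset = 0; offset <= 32; ++offset) {
        cudaEventRecord(start, 0);
        misalignedCopy<<<blocks, threads>>>(devOut, devIn, offset);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("offset %2d: %.3f ms\n", offset, ms);
    }

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
}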
Shared Memory Bank Conflicts

If your application makes use of shared memory, you'd expect to see increased performance compared to an implementation using only global memory. Because it is on-chip, shared memory has a much higher bandwidth and lower latency than global memory. But this speedup requires that your application have no bank conflicts between threads.

To actually achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (also known as banks) that can be accessed simultaneously. This means any memory load or store of N addresses that spans N distinct memory banks can be serviced simultaneously (see Figure 3), giving an effective bandwidth that is N times as high as that of a single bank. The problem, however, lies in situations where multiple addresses of a memory request map to the same memory bank. When this occurs (a bank conflict), the accesses are serialized, reducing the effective bandwidth. A memory request that has bank conflicts is split into as many separate conflict-free requests as necessary, which reduces performance by a factor equal to the number of separate requests. As shown in Figure 4, serialized shared memory accesses can take much longer. The only exception is a shared memory broadcast: when all threads in a warp access the same location in shared memory, no bank conflict occurs.

Summary

It really cannot be stressed enough: make as much use of shared memory as possible in your application. In my next post I will provide an example that illustrates just how much faster shared memory is compared to global memory, as well as the performance impact of coalescing reads to global memory and removing bank conflicts. In addition, I will discuss strided memory accesses and provide some additional insight into optimization techniques for the other types of available memory.
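One standard way to remove the kind of bank conflict just described (not from the original post, but a widely used trick) is to pad a two-dimensional shared array so that column-wise accesses fall into different banks. A sketch, assuming a 32 x 32 thread block and a matrix width that is a multiple of the tile size:

#define TILE_DIM 32

__global__ void transposeTile(const float *in, float *out, int width)
{
    // Without the +1 padding, the column accesses below would all map to the
    // same bank (a stride of 32 words) and be serialized 32-way.
    __shared__ float tile[TILE_DIM][TILE_DIM + 1];

    int x = blockIdx.x * TILE_DIM + threadIdx.x;
    int y = blockIdx.y * TILE_DIM + threadIdx.y;
    tile[threadIdx.y][threadIdx.x] = in[y * width + x];

    __syncthreads();

    // Transposed write: swap the block indices and read the tile column-wise.
    x = blockIdx.y * TILE_DIM + threadIdx.x;
    y = blockIdx.x * TILE_DIM + threadIdx.y;
    out[y * width + x] = tile[threadIdx.x][threadIdx.y];
}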
Avoiding GPU Memory Performance Bottlenecks

justin.p.mckennon · This post is Topic #3 (post 3) in our series Parallel Code: Maximizing your Performance Potential.

Many applications contain algorithms which make use of multi-dimensional arrays (or matrices). For cases where threads need to index the higher dimensions of the array, strided accesses can't really be avoided. In cases where strided access is avoidable, every effort should be made to avoid accesses with a stride greater than one.

So all this advice is great, but I'm sure you're wondering "what actually is strided memory access?" The following example illustrates the phenomenon and outlines its effect on effective bandwidth:

__global__ void strideExample(float *outputData, float *inputData, int stride = 2)
{
    int index = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    outputData[index] = inputData[index];
}

In the above code, threads within a warp access data words in memory with a stride of 2. This leads to a load of two L1 cache lines per warp. The actual accessing of the memory is shown below. Accesses with a stride of 2 result in a 50% load/store efficiency, since half of the elements involved in each transaction are not used (wasted bandwidth). As the stride increases, the effective bandwidth decreases until a separate cache line is needed for each thread in a warp (that's a lot of lost performance!).

Strided accesses can debilitate the performance of even the most optimized algorithms. For large strides, the effective bandwidth is poor regardless of the architecture or compute capability version. Intuitively, this makes sense: when concurrent threads simultaneously access memory addresses that are far apart in physical memory, the accesses cannot be combined. For accesses with a stride greater than 1, then, you should avoid global memory if you wish to realize any sort of performance gain from your application. In cases where you are stuck with strided memory accesses, you must ensure that as much data as possible is used from each cache line fetch.

So, if I haven't made it clear enough: if you can avoid global memory, you should. In my personal experience programming with CUDA, you really can't go wrong if you make intelligent use of shared memory. With the exception of bank conflicts (discussed in the previous post on shared memory optimization), shared memory does not suffer the painful penalties that accompany global memory when warps perform non-sequential or misaligned accesses.

For those of us who are more advanced: if you can make use of registers without register pressure or read-after-write dependencies, you should. I briefly discussed register memory in previous posts, but it warrants a bit more discussion here. Shared memory allows communication between threads, which is very convenient. However, for those of us looking to squeeze out every last drop of performance, you really need to make use of registers when you can. Think of it this way: shared memory is kind of the "jack of all trades" memory, suitable for "most" applications and operations, but for register operations (without read-after-write issues) there is no comparison. Typically, register access consumes zero extra clock cycles per instruction.
While this lack of latency makes register memory very appealing, read-after-write dependencies have a latency of roughly 24 clock cycles, and when such a dependency appears in a loop, that latency adds up very quickly. The only other downside of register memory is register pressure, which occurs when there are simply not enough registers for a given task. Although every multiprocessor in a GPU contains thousands of 32-bit registers, they get partitioned amongst the concurrent threads. You can set the maximum number of registers the compiler is allowed to allocate per thread via the command line.

To summarize: when you're developing your algorithms and applications, you really need to be aware of how you're using memory. Coalesce your global memory accesses and avoid large strides, prefer shared memory while steering clear of bank conflicts, and use registers where register pressure and read-after-write dependencies allow. The next portion of this blog will step away from the memory aspect of performance optimization and move on to optimizing launch configurations and the art of keeping all the multiprocessors on your device busy throughout the execution of your kernel.
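As a hedged illustration of the register-limiting options mentioned above: the per-thread register budget can be capped from the command line with nvcc's -maxrregcount flag, or hinted per kernel with __launch_bounds__. The kernel below is a toy example, not code from the original post:

// Command-line route (mentioned above):
//   nvcc -maxrregcount=32 myapp.cu -o myapp
//
// Per-kernel route: __launch_bounds__(maxThreadsPerBlock, minBlocksPerSM) tells
// the compiler how the kernel will be launched so it can budget registers.
__global__ void __launch_bounds__(256, 4)
boundedKernel(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] + 1.0f;
}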
Outperforming cuBLAS on H100: a Worklog

CUDA matmul kernel - from scratch

In this post, we'll iteratively implement a CUDA kernel for matrix multiplication on latest-generation [1] NVIDIA hardware: the H100. We'll gain a deep understanding of the H100 architecture and showcase its optimizations step by step. The final kernel outperforms cuBLAS by 7% for N=4096, and it fits in a single C++ file without any dependencies. This post is intended as a sequel to Simon's legendary blog, which showcases similar optimizations for the A6000 GPU. However, H100 GPUs are completely different beasts, requiring entirely different algorithms. As an example, the algorithm from Simon's blog achieves only 4% of cuBLAS performance [2]. In this post, we will pick up from Simon's blog and iteratively reach 107% of cuBLAS. All my code is available on Github.

Our aim is not to be a cuBLAS replacement, but to design a slightly faster, yet simple, matmul kernel which works for generally large matrices. cuBLAS performs well across widely varying matrix shapes, like small matrices with a very large k-dimension, or matrix-vector multiplications (relevant for LLM inference).

Quick recap from Simon's blog

Let's go over the basic structure of the matrix multiplication algorithm that Simon covered. We compute C[m, n] = A[m, k] x B[k, n] as shown in the figure below. We break the large matrix multiplication down into computing several output tiles. Each tile is of size BM x BN and represents a portion of the output matrix C. We assign a thread block, with up to 1024 threads working together, to compute each tile. To compute all outputs in this tile, we need to read a BM x K row-block from A (blue) and a K x BN column-block from B (green). These values are accessed multiple times, so we want to keep them in SMEM for performance. However, these blocks are too big to fit in SMEM, so we load them in chunks of size BK. For each chunk, we multiply a BM x BK and a BK x BN matrix and get a BM x BN partial result for the output tile. Since the chunks partition the k-dimension, we simply sum all of these partial results to compute the final values of the BM x BN output tile. All the values of the output tile are stored in registers, so the accumulations are easy. Our H100 matmul kernel will follow this structure of computing output tiles by multiplying smaller chunks of the input matrices. Simon's blog goes further and fully utilizes the register space by moving parts of the chunks from SMEM into registers; we will not go over those details here, as we don't use them in our kernels.

Setup

For the rest of the blog, we will consider large square matrices (M=N=K=4096) with bfloat16 types. bfloat16 is a 16-bit data type widely used in recent deep learning applications; for our kernel's performance, it isn't any different from regular fp16. Matrices B and C are stored in column-major order, while A is stored in row-major order. This is a common setup for matmul benchmarks. We initialize our matrices using a normal distribution with mean = 0 and std_dev = 1; it turns out this is the right distribution for performance measurement, and interested readers can refer to this blog from Horace He for more details. For measuring FLOPs, we average the running time over 8 runs (ignoring the first warmup run). FLOPS are then calculated as 2 * m * n * k / time.
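A sketch of the measurement loop described above (the runMatmul callable is a stand-in for whichever kernel is being benchmarked; the use of CUDA events is an assumption, not necessarily how the original harness measures time):

#include <cstdio>

// Time `runs` launches after one warmup and convert to TFLOPs.
template <typename F>
double measureTflops(F runMatmul, int m, int n, int k)
{
    const int runs = 8;

    runMatmul();                       // warmup run, not timed
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < runs; ++i)
        runMatmul();
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float totalMs = 0.0f;
    cudaEventElapsedTime(&totalMs, start, stop);
    double seconds = (totalMs / runs) / 1e3;

    // FLOPs of one matmul: 2 * m * n * k (one multiply and one add per term).
    double tflops = 2.0 * m * n * k / seconds / 1e12;

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return tflops;
}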
All benchmarks are run on an H100 SXM with CUDA toolkit 12.6, V12.6.68.

What lies in H100

Let's go over some H100 specifications to understand the new characteristics of this GPU. H100 comes in two variants: PCIe and SXM. They are very similar, except that the SXM variant is slightly faster. My machine has an H100 SXM, which has the following specs:

132 Streaming Multiprocessors (SMs)
1024 threads per SM
4 Tensor Cores per SM
80GB High Bandwidth Memory (3.35TB/s)
256KB combined Shared Memory + L1 cache per SM
65,536 registers per SM
50MB L2 Cache shared between all SMs [3]

Most of these terms are familiar from the previous blog. Compared to previous generations, the H100 GPU has more SMs, faster global memory, faster clock speeds, more shared memory and a larger, faster L2 cache. Matmul kernels use all of these features, so we can expect an old-generation algorithm to naturally run faster on H100. We indeed see a jump from 21 TFLOPs on A100 → 32 TFLOPs on H100. We clearly have a long way to go to get close to cuBLAS (716 TFLOPs). The key lies in a new spec that we haven't seen before:

Tensor Core

A tensor core is a special hardware unit in the GPU which performs a small matrix-matrix multiplication in a single hardware instruction. These come in multiple flavors: mma, wmma and wgmma instructions [4]. In this blog we'll look at the wgmma instructions introduced by the Hopper architecture. Unfortunately, there's no documentation of these instructions in the CUDA C++ guide, so we have to look at the PTX guide (yes, we'll have to write these assembly-like instructions in PTX instead of C++). Let's look at an example instruction:

wgmma.mma_async.sync.aligned.m64n16k16.f32.bf16.bf16

This executes a matrix-multiply operation C = A*B + C with m=64, n=16 and k=16, where:

A: m x k matrix of bfloat16 type, stored in shared memory.
B: k x n matrix of bfloat16 type, stored in shared memory.
C: m x n matrix of 32-bit float type, stored in registers.

Storing A and B needs (64*16 + 16*16) * 2 bytes = 2.5KB of shared memory. Storing C needs 64*16 = 1024 registers. A single GPU thread can only hold up to 256 registers, hence tensor core instructions require spreading C over 128 threads in an SM. A warp is 32 threads, so 128 threads comprise 4 warps; a group of 4 warps is called a warp-group in the Hopper architecture. When we distribute C over a warp-group, each thread holds 1024/128 = 8 registers, which is a much more reasonable number. This is why the instruction is called wgmma, which stands for warp-group-matrix-multiply-add.

Asynchrony

Note the term mma_async in the tensor core instructions. These instructions run asynchronously on the 4 tensor cores per SM. Consecutive tensor core instructions can be batched together and sent to the tensor cores, running them in parallel. This is vital to fully utilizing all tensor cores. The full PTX for these instructions is a bit verbose; we go over the details in the Appendix, and throughout the blog we will call them through thin wrappers.

Instruction sizes

H100 offers several such matrix multiplication instructions of varying sizes. From the PTX guide:

.shape = {.m64n8k16, .m64n16k16, .m64n24k16, .m64n32k16,
          ...
          .m64n232k16, .m64n240k16, .m64n248k16, .m64n256k16};

Among all these instructions m=64 and k=16 remain the same, while n can vary from 8 to 256. In my experience, it is faster to use a single instruction with a larger n than multiple instructions with smaller n. However, note that larger n uses more resources: n=256 demands a whopping 40KB of SMEM and 128 registers per thread!
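To make the resource arithmetic above explicit, here is a small constexpr helper (illustrative, not from the post). Note that reproducing the 40KB figure for n=256 requires assuming BK=64-deep chunks rather than a single k=16 instruction:

// SMEM needed to stage an A (64 x k) tile and a B (k x n) tile in bf16 (2 bytes).
constexpr int wgmmaSmemBytes(int n, int k = 16)
{
    return (64 * k + k * n) * 2;
}

// f32 accumulators for the 64 x n output, spread over a 128-thread warpgroup.
constexpr int wgmmaAccumRegsPerThread(int n)
{
    return 64 * n / 128;
}

static_assert(wgmmaSmemBytes(16) == 2560, "m64n16k16 stages 2.5KB of SMEM");
static_assert(wgmmaAccumRegsPerThread(16) == 8, "8 accumulator registers per thread");
static_assert(wgmmaAccumRegsPerThread(256) == 128, "n=256 needs 128 registers per thread");
static_assert(wgmmaSmemBytes(256, 64) == 40 * 1024, "the 40KB figure assumes BK=64 chunks");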
Kernel 1: Simon's blog

Simon's algorithm was designed for FP32 types. Adapting it for bfloat16 gives us 32 TFLOPs. Note that this is not a fair comparison, as cuBLAS leverages tensor core operations for bfloat16 which are unavailable for FP32. In his post, Simon notes that tensor cores could increase performance by 3.5x, but he didn't get to them. We will pick up where he left off and start using them.

Kernel 2: Using Tensor Core instructions

We are now ready to write a simple kernel which computes an output tile using tensor core instructions. This section is a bit lengthier than I wanted, but it sets up the core concepts that we need for the rest of the kernels. We will assign one output tile to each thread block, which has 128 threads cooperating to execute a tensor core instruction. This is very similar to Kernel 5 from Simon's blog; we simply replace the hand-written block-tile multiplication with a tensor core instruction. Let's use WGMMA_M=64, WGMMA_N=64, WGMMA_K=16 to denote the sizes of the wgmma operation and, for simplicity, match our block tile size to the wgmma size.

The overall kernel structure is the same as previously discussed. We loop over the K-dimension in chunks of size BK. For each chunk, we load the corresponding chunks from A and B into SMEM, then use a tensor core instruction to multiply them and accumulate the result in registers. After all chunks are processed, we write the values in registers to the corresponding tile of C. Note that we have skipped the code to load and store chunks. Let's go over the loads first.

Tensor core instructions need a very specific layout of the chunks in SMEM, which isn't simply row- or column-major. NVIDIA's diagram of this layout shows that it is heavily swizzled and too complex to be loaded by hand; NVIDIA implemented the swizzling in order to avoid shared memory bank conflicts. Moreover, the memory layouts in the diagram are incorrectly documented. Thankfully, NVIDIA provides an out-of-the-box way to load tiles without worrying about these layouts: the Tensor Memory Accelerator (TMA).

Loads using the Tensor Memory Accelerator (TMA)

TMA is a new hardware unit introduced in the Hopper architecture. It is a faster way to move tiles of multi-dimensional matrices between GMEM and SMEM. Because it is implemented in an independent hardware unit, it is much faster than hand-written loads, and TMA loads directly support the swizzling patterns required by tensor cores. TMA takes a tiling configuration of a matrix and can load any requested tile into SMEM. One difference with TMA loads is that they are issued from a single thread: previously, multiple CUDA threads cooperated to load a chunk of memory, but with TMA a single thread issues the call and all threads wait for it to finish. The CUDA programming guide contains an example of loading a tile of A into SMEM this way, using CUDA barriers to wait for the load to finish.

Storing the output tile

The values of the output tile are stored across the 128 threads of a thread block. It is possible to compute the mapping (threadIdx, registerIdx) → (index in the BM x BN tile), and from there the corresponding global memory address. Once we have the mapping, we can store all register values to GMEM. There's nothing special about the mapping function, and I don't recommend diving into it; just realize it's a simple arithmetic mapping that we can compute when we need it.

Performance

We reach 317 TFLOPs of throughput, a big jump over the 32 TFLOPs of the previous kernel.
We also introduced several new features in this section: tensor cores, TMA and CUDA barriers. All of these work together to give us a nice 10x boost in performance! Tensor cores indeed pack a lot of power. Note that the A6000 GPU also has tensor cores, but there it's possible to achieve 92% of cuBLAS throughput without using them. That is not true for H100, where tensor cores are mandatory for high throughput. We will keep improving our kernel using these and more new H100 features in the following sections :)

Kernel 3: Handling larger output tiles

We previously set the tile size equal to the tensor core instruction size. However, it is also possible to use larger values of BM and BN; we just need to break the multiplication down into smaller matmuls, as we've done before. We simply loop over the M, N, K dimensions and perform a regular matrix multiplication over a [BM/WGMMA_M, BK/WGMMA_K] x [BK/WGMMA_K, BN/WGMMA_N] grid of wgmma calls.

Performance

This reaches 423 TFLOPs with BM=128, BN=128, BK=64, using the m64n128k16 wgmma instruction. Note that tensor cores provide a range of instructions for different values of n. It is always better to use the largest available instruction and set BN = WGMMA_N.

Profiling

Our kernel does 3 basic things during its lifetime: loads, tensor core operations, and stores. Loads and tensor core operations run in a loop over the k-dimension; once the computation is finished, we store all values to the output matrix. This visualization is important for uncovering further optimizations. First, let's quantify it and measure how much time each load/compute/store phase takes, in the number of clock cycles spent by a GPU thread. Once we have the times spent in each phase, we can store this information in a global array at the end of the kernel; once the kernel has finished running, we average the times over all thread blocks (storing the information for one thread from each thread block is enough for our use case). Here's what we find:

Load: 1415
Tensor Core: 703
Store: 4572

We see that tensor core operations are 2x faster than loads. The store phase is 6.4x slower, but it only runs once, compared to the load + tensor core loop which runs 128 times. These numbers change with different tile sizes, but quantifying them gives us a good picture of what's happening.

Kernel 4: Hiding load latencies

Interestingly, it is possible to hide the load latencies if we run loads and tensor core operations in parallel. Think of this as a producer-consumer problem: we run tight loops of producer (loads) and consumer (tensor cores) work. Instead of running them sequentially, we decouple them and run them in parallel. The producer keeps loading chunks and putting them in a queue; the consumer keeps dequeueing items as they arrive and processing them. The queue can hold multiple items if the producer gets ahead. This way, producers and consumers are not affected by each other's latencies.

To implement this, we use the "Warp Specialization" technique from the CUDA programming guide. This starts 2 warpgroups in a thread block: one warpgroup acts as the producer and the other as the consumer. We use barriers and a circular buffer to implement the shared queue. The producer keeps loading tiles into the shared buffer, starting from index 0 and wrapping around circularly. Before loading a tile, it calls empty[i].wait() to check whether index i of the shared buffer is ready to be filled. After filling the index, it calls full[i].arrive() to signal the consumer that the data is ready to be consumed.
Similarly, the consumer calls full[i].wait() to wait until a tile has been loaded into index i of the shared buffer. After consuming it, it signals the producer by calling empty[i].arrive(). Note that we initialize our barriers in a way that makes the producer see the shared buffer as empty at the very beginning. Here is a flow diagram with a shared buffer of size 2, showing the state of the queue after each interaction with the producer/consumer. Note that the producer and consumer run in parallel, and their relative speeds may differ from this example.

Performance

This reaches 498 TFLOPs with 128 x 128 tile sizes and QSIZE=5.

Kernel 5: Pushing the tile size limit

So far, we have been using tile sizes of 128 x 128. Let's see if we can push this to 128 x 256. This will allow us to use a larger wgmma instruction and also reuse memory loads. The limiting factors for larger tile sizes are SMEM size and register count. Let's try increasing the tile size and see what happens:

ptxas info : (C7511) Potential Performance Loss: wgmma.mma_async instructions are serialized due to insufficient register resources for the wgmma pipeline in the function '_ZN2M413matmulKernel4ILi128ELi256ELi64ELi256ELi3ELb0EEEviiiP13__nv_bfloat16PK14CUtensorMap_stS5_Pi'

Performance: 123 TFLOPs

We get a compiler warning about "insufficient register resources" and a 5x dip in performance. The output tile uses 128 x 256 = 32768 registers in the thread block, which is only 50% of the register budget. That leaves more than enough room for the other registers the kernel uses to store variables, so clearly this isn't the problem. Let's look at register usage per thread instead. For a 128 x 256 tile, each thread of a 128-thread warpgroup needs 256 output registers. But 256 is already the maximum number of registers a thread can have on H100, and on top of this the kernel needs more registers to store its variables. When a thread hits the register limit, it stores some registers to memory when they are not needed and loads them back later. This is called register spilling, and it considerably slows down our kernel. In our case, spilling happens between tensor core operations, which serializes them and prevents batching.

Using 2 consumer warpgroups

We hit the per-thread register limit, but not the overall register limit of the SM. The solution is simple: just use more threads! We use 2 warpgroups that work together on the wgmma operations. After a tile is loaded, we split it into two 64 x 256 halves, and each warpgroup computes the output of one half. Per-thread register usage is halved while the overall register usage stays the same. This is surprisingly simple to implement: we just start the kernel with 128*3 threads, giving 3 warpgroups, one producer and two consumers. Note that while the consumers process their halves of the output tile in parallel, they still wait for whole chunks to be loaded by the producer. Both consumers use the same code, just processing different parts of the loaded tiles, and they arrive and wait on the same barriers at similar times; we just need to initialize the barriers with higher token counts.

Performance

This gives us a nice boost to 610 TFLOPs. The larger tile size is also more SMEM-hungry, forcing us to reduce QSIZE from 5 to 3, but we still see an overall performance boost. Profiling shows that each thread uses 168 registers. Total register usage in a thread block of 3 warpgroups sums to 64512, which is just under the GPU limit of 65536 registers.
Note that while the consumer warpgroups need the high register count, the producer threads don't: they perform no tensor core operations. Typically, the NVIDIA compiler assigns the same number of registers to every thread, but the Hopper architecture gives us a way to specify the per-thread register usage of a warpgroup using PTX. Using these values keeps our total register usage at 64512 (240*128*2 + 24*128), but shifts registers from the producer to the consumers. This boosts performance up to 631 TFLOPs. The boost is nice to have, but hard to reason about; my theory is that the larger register count in the consumers leads to fewer register bank conflicts. Please let me know in the comments if you have other explanations!

Kernel 6: Hiding store latencies

We were able to hide load latencies by separating producers and consumers. Let's see how we can hide store latencies. An SM processes multiple output tiles over the lifetime of the kernel. For the first tile, loads and tensor core operations are parallelized; at the end, we store all computed values to the C matrix. During this time, we can also start loading chunks for the next output tile. Note that stores and loads do not use any common resources: loads write into SMEM, while stores go from RMEM to GMEM. According to our profiling, around 4572 cycles are spent storing values to GMEM. If we start loading chunks for the next tile during this time, we can load 4572 / 1415 = 3.2 chunks. This means the consumers can start on the next output tile almost immediately after finishing the current one!

To implement this, we start our kernel with as many thread blocks as there are SMs: 132 for H100. Now we need to decide which tiles are assigned to which SM. Previously, we had one tile per thread block and let the GPU schedule the blocks onto SMs; now we need to do this scheduling ourselves. Let's follow a simple scheduling logic that assigns consecutive output tiles to an SM. We don't need much additional logic to overlap stores and loads across tiles. When processing a new tile, we reuse the barriers and the shared queue instead of re-initializing them. Once the producer finishes loading chunks for one tile, it immediately starts loading chunks for the next tile. The consumers likewise know when they have finished processing a tile and simply start reading the next tile's chunks from the next position in the shared queue.

Performance

We see 400 TFLOPs with this strategy, a regression from the previous 640 TFLOPs! That did not work as well as planned. The store-overlapping logic is sound, so let's see if we messed up the scheduling logic.

Scheduling and L2 Cache

Instead of looking at which tiles are processed by a single SM, let's look at the first tile processed by each SM; these tiles are processed at the same time. We see that SMs process very far-apart tiles at the same time, which means loading very different chunks of the A and B matrices at the same time. If we schedule nearby tiles at the same time instead, their loads share large parts of A and B. These common parts are served from the GPU's L2 cache, meaning we don't have to go to GMEM all the time! Let's see what this scheduling looks like: tiles of the same color are scheduled at the same time, which means many common accesses to A and B at the same time, all served by the L2 cache. Note that we used only 128 SMs in the diagram because 128 groups easily into a 16 x 8 configuration.
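A hedged sketch of this grouped scheduling (the names and the exact index arithmetic are illustrative, not the kernel's actual code): 128 consecutive scheduling slots are mapped onto a 16 x 8 rectangle of adjacent output tiles, so the SMs working at the same time touch neighbouring rows and columns of A and B.

// Map a linear scheduling slot to a tile coordinate so that 128 consecutive
// slots cover a 16 x 8 rectangle of adjacent output tiles.
__device__ void tileForSlot(int slot, int tilesPerRow /* = N / BN */,
                            int *tileM, int *tileN)
{
    constexpr int GROUP_M = 16;                     // tiles per group, m-direction
    constexpr int GROUP_N = 8;                      // tiles per group, n-direction
    constexpr int GROUP_SIZE = GROUP_M * GROUP_N;   // 128 tiles per group

    int group        = slot / GROUP_SIZE;           // which 16 x 8 rectangle
    int inGroup      = slot % GROUP_SIZE;
    int groupsPerRow = tilesPerRow / GROUP_N;       // assumes an even division

    int groupM = group / groupsPerRow;              // rectangle coordinates
    int groupN = group % groupsPerRow;

    *tileM = groupM * GROUP_M + inGroup % GROUP_M;
    *tileN = groupN * GROUP_N + inGroup / GROUP_M;
}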
Making everything a power of two makes our scheduling logic much simpler.

Performance

660 TFLOPs. We hit an L2 cache hit rate of 83%, whereas cuBLAS only reaches a 70% L2 hit rate. It is not hard to modify the logic to use all 132 SMs: we keep this configuration but assign some tiles of the next group to the leftover SMs, and keep doing this until we have covered all tile groups. Our group shape also need not be 16 x 8; it can be something small like 2 x 2 as well. After trying several group configurations, I found that using 132 SMs is actually slower than using 128 SMs (655 TFLOPs). This is because our tile counts divide evenly into 16 x 8 regions, leading to better L2 cache hits when we use 128 SMs.

Kernel 7: Faster barriers

Our current barrier implementation is the one recommended by the CUDA programming guide. It turns out there is a faster barrier implementation which can significantly speed up our kernel. It is only referenced in the PTX guide, without any CUDA API, and it is left as an exercise to the reader as to which one is better :) Let's list out both barrier APIs and start using the new one!

CUDA Barrier API

PTX Barrier API

There are 2 differences between the APIs:

Phase variable: We manually keep track of a phase variable, which is the parity of how many times we have waited on the barrier. There is no other significance to it; the underlying API simply demands that we track it and pass it in. This is an abstraction leak, which is probably needed for performance reasons. Note that we do not need to re-initialize a barrier once a wait call has completed; we can simply reuse it as if it had been freshly initialized with the previous values. A barrier is typically reused hundreds of times as we load hundreds of tiles into the shared queue.

Tokens: The PTX API does not use any tokens in its arrive and wait calls. This makes the implementation cleaner and allows us to further optimize our synchronization, because not all threads that execute wait need to have called arrive first. We can reduce the number of token synchronizations from 257 to 3 (one per producer and consumer warpgroup). Using fewer synchronizations makes our code faster. The new API needs to be written in PTX; we use simple CUDA wrappers over the PTX, highlighted in the GitHub code.

Performance

The new barrier API gives a nice 10% performance boost, getting us to 704 TFLOPs. We have now reached 98% of cuBLAS performance. The remaining optimizations give smaller returns, but will slowly take us up to and beyond cuBLAS.

Kernel 8: Thread Block Clusters

Clusters are a new Hopper feature which groups multiple thread blocks running concurrently across multiple SMs. The SMs in a cluster can synchronize and collaboratively fetch and exchange data. To use this feature, we declare it in the kernel function definition:

// This launches a kernel with 2 SMs in each cluster.
__global__ void __cluster_dims__(2, 1, 1) kernel(...) {
    // ... kernel code
}

TMA Multicast

Multiple SMs in a cluster can load the same tile using the TMA multicast operation, which is faster than loading the tile twice from L2 cache. Nearby tiles read the same chunks from the input matrices, making this feature very useful. The figure above shows two vertically consecutive tiles running on different SMs in the same cluster: they need to load two different chunks from A, but the same chunk from B. This chunk from B can be multicast to both SMs in the cluster, and TMA supports this functionality.
The TMA multicast operation is a PTX instruction. As in the other cases, it is not complex, but it lacks a wrapper function in CUDA: cp.async.bulk.tensor.2d.shared::cluster.global.tile.mbarrier::complete_tx::bytes.multicast::cluster In order to use it, we also need to synchronize barriers across the different SMs in a cluster. The PTX barrier provides this functionality by appending the cluster keyword to the arrive function. We provide wrappers for both methods in our Github code.

Performance
This gets us to 734 TFLOPS. We are now doing slightly better than cuBLAS, at 102% of its performance. Note that it is possible to group clusters in different ways (e.g. horizontally adjacent tiles), or even to use a cluster of size 4 with 2x2 tiles clustered together. Our implementation supports all cluster shapes, but we found vertical clustering of 2 tiles to be the fastest; it compensates for our uneven tile size (128 x 256). Larger cluster sizes are much slower, likely due to expensive inter-SM synchronization.

Kernel 9: Micro-optimizations
This kernel includes a series of small optimizations. Reordering stores: We write the values of several registers to GMEM. We can order them so that consecutive writes map to nearby memory locations, which gives slightly better performance. Relevant part of our store logic: Skipping L1/L2 cache for writes: We can use cache hints to store values directly to GMEM, bypassing the L1/L2 caches - freeing up slightly more cache space for the A and B matrices. We use the __stwt() intrinsic provided by CUDA: Skip resetting registers to 0: Remember that tensor core operations accumulate values, so we need to reset their registers to 0 between output tiles. Looking into the tensor core spec, there is a flag that controls whether the tensor core operation accumulates, switching it between C = A*B and C = A*B + C. We can set this flag accordingly the first time we issue the tensor core instruction for an output tile, which lets us avoid zeroing the registers for every output tile.

Performance
Together these optimizations take us from 734 TFLOPS to 747 TFLOPS. We are starting to see diminishing returns, but that doesn't stop us.

Kernel 10: Async Stores
We have spent some time optimizing stores, but there is another way to achieve similar results: we can write register values to SMEM and use TMA to store them asynchronously to GMEM! The only caveat is that we are left with less SMEM for our shared queue. It's hard to reason about whether this is better, so let's try it and see what happens.

Performance
This gets us to 758 TFLOPS, another 2% improvement. At this point we are running out of ideas, so let's bring in some big guns.

Kernel 11: Hilbert Curves
Let us revisit the scheduling of output tiles onto SMs in the diagram below. We schedule same-colored tiles onto the SMs at the same time; this time we number the tiles by the order in which they run. Note that we don't explicitly wait for all SMs to finish their assigned tiles before scheduling the next group - this happens naturally, since we assume the SMs take a similar amount of time per tile. While we see lots of L2 cache hits within a tile group, our scheduling is not optimal across tile groups. We run the tiles in the order Blue, Green, Gray, Red - and the Green (#2) and Gray (#3) tiles do not share any common chunks of A/B.
We can fix this by swapping the Red and Gray tiles in the schedule below: Implementing this for a large matrix can get very complex. Thankfully, filling a matrix in a spatially local order is a well-researched problem - and the answer is Hilbert curves. A Hilbert curve is a space-filling curve that covers all cells of a matrix while visiting "nearby" cells together: take any segment of it, and the cells it covers are spatially close. This gives us a new scheduling algorithm: create a Hilbert curve over the [M/BM, N/BN] matrix of tiles and schedule tiles in this order, so that consecutive tiles are scheduled at the same time. Below is a demonstration of a Hilbert curve on an 8x8 matrix; it starts at the top left and ends at the top right.

Performance
This gets us a 1% boost to 764 TFLOPS. We have come a long way, to 107% of cuBLAS performance. This is a good time to stop and conclude our thoughts.

Conclusion
Here's a plot that compares our fastest kernel against cuBLAS across increasing matrix sizes. Our kernel's performance varies with N:
2% faster for N=512
17% faster for N=1024
7-8% faster for N=2048, 4096
1.5% faster for N=8192
For small N, the matmul kernel is memory bound, which leaves only a small room for improvement. For very large N, the matmul kernel becomes power bound! The H100 GPU has a maximum power cap of 700W - which is not enough to drive all tensor cores at the same time. This leads to diminishing returns for very large N. Note that we are not faster for every value of N - across different sizes we see a mix of slightly slower and slightly faster results. However, I believe it is possible to reach parity everywhere with extensive autotuning of the kernel parameters. It is also possible to tweak GPU settings to divert power from the L2 cache to the tensor cores; that should boost performance for both cuBLAS and our kernels. All my code is available on Github. I'd also like to thank my friend Sriram Sankar for motivating me to learn GPU programming and for discussing Hilbert curves with me. I've recently started writing GPU kernels as a hobby - and hope to do more of it :)

Resources
Here are some resources which helped me learn GPU programming:
Programming Massively Parallel Processors video lectures
Simon's matrix multiplication from scratch blog
GPU Mode Discord group
Hopper whitepaper from Nvidia, with details on the new Hopper architecture
CUTLASS docs for efficient matrix multiplication
Flash Attention 3 paper, highlighting several Hopper-specific techniques

Appendix
We go over some details that we skipped earlier for brevity.
Tensor core operations
Following is the PTX form of the m64n16k16 tensor core operation: It takes shared memory descriptors for where A and B are stored, and registers where the output is stored. This operation uses 8 registers per thread; larger instructions need more registers as parameters.
Batching WGMMA operations
As WGMMA operations execute asynchronously on the 4 tensor cores per SM, we can batch multiple tensor core calls and execute them in parallel:
H100 is the latest "publicly available" GPU generation, called Hopper. Blackwell is its successor - but it's not available from any online provider yet. Simon's algorithm only achieves 4% because it's not using tensor cores. Simon mentions this in his blog - and notes that tensor cores could speed it up by 3.5x. However, that still leaves a long way to catch up to cuBLAS. The L2 cache is partitioned into 2 parts, with each SM reading from the "nearer" partition.
Data is copied into multiple partitions if it is required by different SMs, which effectively makes the usable L2 cache size 25MB. This is a good video explaining it. mma, wmma and wgmma are different ways to use the tensor cores: mma is the matrix-multiply-add instruction executed by a single thread, wmma is an mma executed cooperatively by the 32 threads of a warp, and wgmma does the same for the 4 warps of a warpgroup. Note that mma and wmma have a CUDA API, but wgmma requires us to write PTX.
2805 Bowers Ave, Santa Clara, CA 95051 | [email protected] DeepSeek-R1 and FP8 Mixed-Precision Training DeepSeek has shocked the world with the release of their reasoning model DeepSeek-R1. Similar to OpenAI’s o1 and Google Gemini’s Flash Thinking, the R1 model aims to improve the quality of its replies by generating a “chain of thought” before responding to a prompt. The excitement around R1 stems from it achieving parity with o1 on several industry-standard benchmarks, including math, coding, and English and Chinese language understanding, while also being open-source and available through the DeepSeek API at a fraction of the cost. DeepSeek’s technical reports cover a wide swath of performance optimization techniques that enabled their breakthrough results on efficient LLM training and inference. Many of these techniques were already used to train DeepSeek-V3, a model comparable to Anthropic’s Claude Sonnet and OpenAI’s GPT-4o, from which the R1 model was obtained via fine-tuning and reinforcement learning. In this blog post, we’ll focus, in particular, on DeepSeek’s FP8 mixed-precision training strategy for the base DeepSeek-V3 model, described in section 3.3 of the DeepSeek-V3 paper and in the figure below (Figure 6 of that paper). As always, a core bottleneck is matrix multiplication (aka “matmul” or “GEMM”), indicated by the yellow boxes in the diagram. As the figure shows, model weights are stored in FP8 and all matrix multiplications are performed in FP8 with FP32 accumulation. Activations and gradients are stored in BF16, and FP32 is also used for some internal computations. Why do we care about FP8 training? On NVIDIA GPUs, GEMM computations can take advantage of hardware acceleration provided by the GPU’s Tensor Cores. On the Hopper architecture, FP8 GEMM is natively-supported and achieves the highest possible compute throughput, advertised at ~2 petaFLOPS on the H100 SXM GPU. In fact, NVIDIA finds low-precision computation so important that it’s expanding Tensor Core capabilities to FP4 and FP6 with Blackwell. Storing model weights in low precision also reduces the overall size of the model, placing less pressure on the memory and inter-GPU communication channels, which are already being pushed to their limits to keep up with the Tensor Cores. Working in FP8 comes with several tradeoffs. First, to prevent overflow, one typically scales a higher-precision weight or activation matrix down to the FP8 representable range before quantizing it — for example, by dividing the whole tensor by its maximum element. That maximum element is retained separately and used as a scaling factor in each matmul with the quantized tensor. However, this makes the quantization process extremely sensitive to outliers: the presence of a very large weight in some layer could force all other weights to be quantized to 0. The DeepSeek team handles this issue by introducing blockwise and tilewise scaling, in which each 128×128 submatrix of a weight matrix, respectively each 1×128 subvector of an activation vector, is scaled and quantized separately. Then, due to the presence of varying scaling factors along the “inner” or “contracting” dimension of the GEMM, the rescaling computations need to be fused into the matmul mainloop. This required the team to write a custom FP8-GEMM-with-rescaling kernel. We also remark that blockwise quantization only (i.e., also for activations) proved insufficient for their purposes due to training instability; cf. the ablation study described in appendix B.2 of the paper. 
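To make the blockwise scaling concrete, here is a minimal host-side sketch of the idea: each 128 x 128 block of a weight matrix is scaled by its own absolute maximum so that it fits the FP8 E4M3 representable range (whose largest finite value is 448), and the per-block scale is kept in FP32 so it can be re-applied during the matmul. This is only an illustration of the scheme described above, not DeepSeek's or CUTLASS's code; the function and variable names are ours.

#include <algorithm>
#include <cmath>
#include <vector>

// Quantize an MxN row-major FP32 weight matrix block by block.
// Each 128x128 block gets its own scale = amax / 448.0f, so a single outlier
// only affects its own block. The scaled value is kept as float here for
// clarity; a real kernel would convert it to __nv_fp8_e4m3.
void quantize_blockwise(const std::vector<float>& W, int M, int N,
                        std::vector<float>& Wq, std::vector<float>& scales,
                        int block = 128) {
  int blocksM = (M + block - 1) / block, blocksN = (N + block - 1) / block;
  scales.assign(blocksM * blocksN, 1.0f);
  Wq.assign(M * N, 0.0f);
  for (int bm = 0; bm < blocksM; ++bm)
    for (int bn = 0; bn < blocksN; ++bn) {
      float amax = 0.0f;
      for (int i = bm * block; i < std::min(M, (bm + 1) * block); ++i)
        for (int j = bn * block; j < std::min(N, (bn + 1) * block); ++j)
          amax = std::max(amax, std::fabs(W[i * N + j]));
      float scale = amax > 0.0f ? amax / 448.0f : 1.0f;
      scales[bm * blocksN + bn] = scale;
      for (int i = bm * block; i < std::min(M, (bm + 1) * block); ++i)
        for (int j = bn * block; j < std::min(N, (bn + 1) * block); ++j)
          Wq[i * N + j] = W[i * N + j] / scale;   // then round/convert to FP8
    }
}

Activations are handled analogously but with 1 x 128 tiles, and it is precisely these per-block and per-tile scales that the custom GEMM kernel has to fold into its mainloop, as described above.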
Furthermore, optimal GEMM on Hopper GPUs uses warpgroup-wide MMA instructions (WGMMA), which we described in detail as part of our GEMM tutorial. Under these instructions, all of the Tensor Cores on a Hopper GPU’s Streaming Multiprocessor (SM) collaborate to compute fragments of the matrix product. However, this brings us to the second issue with FP8. The DeepSeek researchers found that the FP8 Tensor Cores were using a certain “fixed-point accumulation” strategy that effectively used only 14 bits of precision as opposed to true FP32 precision; cf. section 3.5.2 of the paper. This led to training inaccuracies that grew for large model sizes. DeepSeek’s solution was to move some of the accumulation outside the Tensor Cores. Their GEMM kernel performs each series of 4 consecutive WGMMA operations inside the Tensor Cores, accumulating in the lower-precision format, but then adds the result into a separate register-backed accumulator tensor in FP32. This second addition is performed using CUDA Cores (the GPU’s standard execution unit for non-matmul FP32 arithmetic) and thus takes place in ordinary FP32 precision, mitigating the loss of accuracy. The dequantizing scaling factor is also applied to this FP32 accumulator. The paper authors cite NVIDIA’s CUTLASS library for this technique. CUTLASS has supported promotion of FP8 matmul to FP32 accumulation in CUDA Cores since version 3.2. Moreover, blockwise scaling was added in this PR and merged to main in version 3.7, and tilewise scaling will soon be supported thanks to this PR (which renames the concept to groupwise scaling for clarity). As a CUTLASS user, you can invoke Hopper FP8 GEMM with promoted FP32 accumulation and blockwise scaling through the CollectiveBuilder with the KernelScheduleType set to KernelTmaWarpSpecializedCooperativeFP8BlockScaledAccum (cf. example 67). In fact, CUTLASS’s Hopper FP8 GEMM kernels use the CUDA Core accumulation technique by default. Alternatively, you can accumulate only in the Tensor Cores using schedules such as KernelTmaWarpSpecializedFP8FastAccum; this trades better performance for lower accuracy, which may work better for inference applications. At Colfax, we’re working to spread knowledge of these techniques so that anyone can take advantage of the optimizations that were central to DeepSeek’s success. If you’d like to learn more about using the CUTLASS library to build highly performant GEMM kernels, our tutorial series is a great place to start. If you are interested in customized training or have a more involved problem that could benefit from our expertise, please get in touch with our research team at [email protected]. Share this: Discover more from Colfax Research Subscribe to get the latest posts sent to your email. Type your email… Subscribe Posted in Copyright © 2023-2024 Colfax International. All rights reserved.
2805 Bowers Ave, Santa Clara, CA 95051 | [email protected] CUTLASS Tutorial: Persistent Kernels and Stream-K Welcome to Part 3 of our tutorial series on GEMM (GEneral Matrix Multiplication). In Parts 1 and 2, we discussed GEMM at length from the perspective of a single threadblock, introducing the WGMMA matmul primitive, pipelining, and warp specialization. In this part, we will examine GEMM from the perspective of the entire grid. At this scope, there are two main classes of optimizations: (1) use of threadblock swizzling and clusters to maximize L2 cache hits, and (2) better partitioning of work over threadblocks in order to saturate the GPU’s compute resources and achieve good load-balancing. This post will focus on the latter (though we also discuss the former in the Appendix). Specifically, we will discuss a certain partitioning strategy called Stream-K that addresses the problem of wave quantization, which arises when the number of work tiles is not divisible by the number of streaming multiprocessors (SMs). Stream-K is also useful when a standard tile-based partitioning of the output otherwise fails to occupy the GPU, such as when M and N are small but K is large. This blog post is organized as follows. We begin by describing the wave quantization problem and the concept of a persistent kernel. Then, we go over various strategies for partitioning a GEMM workload among the threadblocks, including Stream-K and its predecessor Split-K, with a focus on how they handle wave quantization. We then explain how a kernel author can write their own tile scheduler; as an example, we’ve added an implementation of Stream-K to our GEMM kernel from Part 2 of this tutorial series, available on Github. Finally, in the Appendix we do a deep dive into the Stream-K implementation found in CUTLASS. Big picture: The wave quantization problem An NVIDIA GPU consists of a number of streaming multiprocessors (SMs): each SM has its own shared memory, register file, Tensor Cores, etc., and they operate independently from each other. An ideal workload takes maximal advantage of parallelism between the SMs by evenly distributing work among the SMs, so that all SMs are kept busy for the entire duration of the kernel. However, if some SMs complete their portion quicker than others, then they will sit idle waiting for the rest of the SMs to complete. This is an example of load imbalance. Consider a computation that is divisible into equally-sized work units, where each work unit can be completed by a single SM in the same amount of time. For example, GEMM is generally partitioned into work units that each compute a single bM x bN output tile. These work units are then assigned to threadblocks (CTAs), and each CTA computes its assigned work unit on an available SM. We will call the assignment of work units to SMs scheduling. If the number of work units exceeds the number of available SMs, then the work units will be processed in multiple waves, where 1 wave is the completion of a single work unit by every available SM. Wave quantization then arises when the number of work units isn’t evenly divisible by the number of available SMs. For example, consider a case where there are 10 work units, and 4 SMs. Then the work unit execution timeline looks like: In this case, the first two waves are full waves, with every SM being utilized. But the final wave is a partial wave, where only half of the SMs are occupied. 
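As a tiny worked example of this arithmetic (our own helper, not part of the tutorial code), the number of waves and the occupancy of the tail wave follow directly from the work-unit count and the SM count:

#include <cstdio>

// Waves and tail-wave occupancy for a tiled workload.
// For the example above (10 work units, 4 SMs) this prints:
//   waves=3, tail wave occupancy=50%
void wave_stats(int num_work_units, int num_SMs) {
  int waves = (num_work_units + num_SMs - 1) / num_SMs;  // ceiling division
  int tail  = num_work_units % num_SMs;                  // 0 means the last wave is full
  double occ = (tail == 0) ? 1.0 : static_cast<double>(tail) / num_SMs;
  std::printf("waves=%d, tail wave occupancy=%.0f%%\n", waves, 100.0 * occ);
}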
Wave quantization can seriously degrade performance when the number of work items is small relative to the number of SMs. For example, on an H100 PCIe GPU, which has 114 SMs, a computation with 115 work units will require 2 waves — exactly the same as a computation with 228 work units! In other words, adding the 115th work unit approximately halves the utilization of the device. On the other hand, while a computation with 114,001 work units would suffer from the same quantization effect, its cost would be minuscule in comparison with the total cost of the kernel. You can find more information in the NVIDIA Deep Learning Performance Guide. To observe the impact of wave quantization in an example, let’s use the GEMM kernel that we created in part 2 of this series and measure the performance over varying wave count. Consider an GEMM of a MxK matrix A and KxN matrix B. Let bM and bN be the dimensions of the work tiles, and for simplicity assume that they divide M and N evenly. Then the total number of waves is given by ceil((M/bM * N/bN)/num_SMs). To study the effect of the quantization, we want to vary the tiles-per-SM given by (M/bM * N/bN)/num_SMs; the decimals represent how full the last wave is. Thus, we will fix the values M=1024 and K=4096 and vary N in increments of bN (for us, this is 192). The left graph shows the performance in TFLOPs/s while the right shows the elapsed time instead, with benchmarks taken on an H100 PCIe GPU. The vertical dotted lines denote the wave boundaries, where tiles-per-SM crosses integer values. The left graph shows the wave quantization effect – sharp drops in performance when crossing wave boundaries. Correspondingly, the right graph shows that elapsed time is mostly determined by the total number of waves as a discrete parameter (given by 1 for x in (0,1], 2 for x in (1,2], and so on). Note that the second quantization effect is smaller than the first one – the impact of wave quantization decreases as the number of waves increases. However, increasing the number of waves can be difficult, especially considering that the number of SMs on NVIDIA GPUs continues to grow with newer architectures. So it is important that we come up with strategies to mitigate the impact of wave quantization without making assumptions on the problem size. Persistent Kernels To address wave quantization, we need to create a better partitioning and scheduling scheme. The kernels we’ve shown so far on this blog have used grids dependent on the dimensions of the problem, so that each CTA processes a single work unit. For example, in GEMM, the work units are bMxbN tiles of the MxN output matrix, where bM and bN are fixed at compile time. Each work unit would be computed by a single CTA in a M/bM x N/bN grid. So our launch parameters would look like: dim3 dimGrid(ceil_div(M, bM), ceil_div(M, bN)); The problem with this approach is that while we have some control over how threadblocks are distributed to SMs, it is difficult to implement more complex scheduling strategies. Therefore, we will be using a different design approach: persistent kernels. In a persistent kernel, the size of our grid is a fixed value. Typically, this value is equal to the number of available SMs, so that each CTA will have its own SM. 
We can find the number of SMs to use for dimGrid with the following CUDA code: int num_SMs; cudaGetDeviceAttribute(&num_SMs, cudaDevAttrMultiProcessorCount, device_id); dim3 dimGrid(num_SMs); Each CTA persists on its SM, processing multiple work units until all work has been completed. This design change offers the programmer significantly more control over scheduling, by telling each CTA how to iterate through the work units. With this flexibility, we can distribute work in a way that minimizes wave quantization and load imbalance. In practice, the assignment of work units to CTAs is typically delegated to a tile scheduler, which is essentially a glorified iterator that tells each CTA where to find its next work unit and when to stop. While the total work required for each output tile does not change, by changing tile schedulers we will be able to explore more complex strategies to minimize load imbalance, such as Stream-K. Handling wave quantization with persistent kernels To work our way up to Stream-K, it is useful to also examine some simpler but inefficient approaches to dealing with wave quantization. The paper on Stream-K has a great in-depth discussion on this, which we recommend reading. For the convenience of the reader, we give a summary of their discussion here. To keep our numbers easier to parse in this section, we’ll consider a fictional GPU, the Hipparchus H10, which has only 4 SMs. Data Parallel We’ll begin with the most basic version, which is to simply split the tiles evenly in the M- and N- mode and assign them in round-robin format. Note that this is essentially identical to the case when using the non-persistent, work tile grid launched kernels; the only difference is the guaranteed ordering. But it is still worth studying to understand the situations where wave quantization becomes a problem. As there is no dependence between the work units, this is referred to as a data-parallel work schedule. Figure 1 shows an example partition. Here the GEMM workload is divided into 9 work tiles. As the work items are identical, the tiles are processed in waves. Specifically, the 9 work tiles would be processed on the H10’s 4 SMs in 3 waves: 2 full waves, and a partial wave in which only 1 of the 4 SMs is occupied. If each work tile achieves 100% utilization on its SM, then the utilization across the whole computation is 2.25/3 = 75%. The most direct approach is to return to the realization that wave quantization is less of a problem if there are more work units — and we can increase the number of work units by making each work unit smaller. In Figure 2, we’ve divided bN by a factor of 2 in the N direction. We now have 18 work tiles, which could be performed in 5 waves: 4 full waves and a partial wave in which 2 of the 4 SMs are occupied. Assuming once again that each work tile is computed with 100% utilization, the utilization across the whole computation is 4.5/5 = 90%. Moreover, each work tile in Figure 2 needs half as many FLOPs as the work tiles in Figure 1 — to a first approximation, each wave should take half as much time as a wave in Figure 1. So even though there are 5 waves in Figure 2 compared to 3 in Figure 1, the time spent in Figure 2 is only (5*0.5)/3 = 83% of Figure 1! What could go wrong? Unfortunately, we’ve made a few too many simplifying assumptions, and are no longer correctly modeling the behavior of the Hipparchus H10. The central problem is that, as the tile size decreases, the computation of a work tile may become less efficient. 
Thus, it may be incorrect to assume that halving the tile size also halves the computation time or keeps the utilization of a single CTA constant. One of the main drawbacks is the loss of arithmetic intensity. As memory access is time consuming, we want to have a large number of arithmetic operations to mask the memory access latency. For GEMM, a CTA computing a bM x bN x bK matmul tile will perform 2 * bM * bN * bK arithmetic operations and bM * bK + bN * bK + bM * bN GMEM accesses. Observe that halving bN halves the first number but not the second. For example, a 128 x 128 x 128 work tile size would result in 85.3 operations per GMEM transfer, while a 128 x 64 x 128 work tile size would result in only 64 operations per GMEM transfer. As an additional complication, assuming the CTA size hasn't changed, halving the tile size means that each warp in the CTA processes half as many instructions. This decreases the latency-hiding opportunities available to the warp scheduler, which are essential for good performance of a pipelined GEMM. Finally, there may be constraints on the tile size related to the choice of MMA atom. For example, the H10 could require the use of a 128 x 128 x 16 WGMMA atom for maximum throughput. This adds another limitation on the minimum size of tiles. The balance between these considerations is not entirely obvious, and finding a good tile size for a particular problem may require trial and error, for example using the CUTLASS Profiler.

Split-K
So far we've only been splitting in the M- and N-modes, but there is another dimension that we can split along: the K-mode. This is most effective when K is large; as before, there are costs to arithmetic intensity and latency-hiding when bK becomes too small. The Split-K schedule splits tiles into a constant number of pieces along the K-mode. For example, in Figure 3 we split along the K-mode into 2 work items. This strategy introduces a new complication: each CTA has only accumulated partial results for its bM x bN output tile. To complete the computation, the CTAs that worked on this output tile need to combine their results. A typical way to handle this is turnstile reduction in an auxiliary GMEM workspace. Each CTA collaborating on a given tile waits for the CTAs working on previous K-indices to arrive at a barrier, after which it reduces its partial results into the workspace and itself arrives at the barrier. The final CTA, instead of reducing into the workspace, reduces from the workspace into its own accumulators and computes the epilogue. Note that the additional GMEM accesses and barrier synchronizations introduce additional overhead, shown in Figure 3 in the form of "arrive" and "reduce" blocks. Split-K introduces a new hyperparameter, the number of splits, which comes with its own set of tradeoffs.

Stream-K
The strategies considered so far have improved the wave quantization problem, but they haven't eliminated it. Returning to our original example of 9 work tiles spread across 4 SMs, it would be ideal if each SM could run 2.25 waves. This is the motivation behind Stream-K. The Stream-K strategy assigns a single, persistent CTA to each SM. Each CTA is assigned a fractional number of work tiles, where any work tiles that are split are split along the K-mode. As in the Split-K strategy, for each work tile that's split, the CTAs collaborating on that tile can combine their results using turnstile reduction in a GMEM workspace. For example, in Figure 4, the persistent CTA on SM0 calculates all of work tile 0, all of work tile 1, and 1/4 of work tile 2.
The persistent CTA on SM1 calculates the rest of work tile 2, all of work tile 3, and half of work tile 4, and so on. The partial tiles are scheduled so that the first piece of a worktile is computed well before its last piece, minimizing synchronization overhead (though note that, with tiles that are extremely long in the K-direction, this may not always be possible.) Let’s compare Stream-K to the previous strategies we’ve discussed. Hybrid Stream-K There’s one final improvement we can make to the kernel, which concerns cache performance. The nature of a tiled GEMM kernel is that each operand tile is needed to compute multiple output work tiles. For example, in the split-MN case tile B0 is needed to compute tiles 0, 1, and 2 of the output. Here the output tiles 0, 1, and 2 are computed simultaneously. When one of the CTAs grab tile B0 from global memory, it is also placed in the L2 cache. Other CTAs also requesting tile B0 will then hit in the cache and be able to load it faster. The cache is finite in size and old data may get evicted, which makes it important for these requests to happen around the same time. More precisely, the operand tiles are also partitioned in the K-direction, and each CTA is performing an inner loop over the K-blocks of its operand tile. When wave 0 starts, SMs 0, 1, and 2 will simultaneously request the 0th K-block of tile B0, and two of them will hit in the cache. On the next iteration of their loops, SMs 0, 1, and 2 will request the 1st K-block of tile B0, and so on. However, the stream-K kernel introduces skew: since each SM starts by computing a partial tile of a different size, they will tend to be working on different K-offsets at the same time. Going back to Figure 4, SMs 0 and 2 are both using data from B0 at the beginning of wave 0 — but SM0 needs its 0th K-block, while SM2 needs data towards the middle. In fact, the K-offsets in this schedule never line up, making cache hits much harder to come by. To summarize, eliminating “waves” and scheduling the different SMs out of sync from each other has resulted in a hidden cost of worse cache performance. We can fix the problem by rescheduling the computation as a hybrid between a persistent kernel and an ordinary data-parallel kernel. Since a data-parallel schedule does not suffer from skew, it makes sense to use this schedule for as long as possible, reserving Stream-K for just enough tiles to handle the wave quantization effect. To properly balance the workload between the SMs during the Stream-K phase, it’s necessary to assign 1 full wave and any leftover partial wave to this phase. This schedule is shown in figure 6. The initial Stream-K phase processes between 1 and 2 full waves of the computation. Each SM receives at most 2 partial worktiles. By design, the total size of these tiles is independent of the CTA, so that all CTAs expect to finish this stage of the computation around the same time. Once this stage completes, only entire work tiles remain, and the number left is divisible by the number of SMs. Thus, these work tiles can be computed using a non-persistent, data-parallel strategy, which does not suffer from wave quantization, and which has a better cache performance. This is seen in Figure 6: Here, we can expect the computation of work tiles 6, 7, and 8 to take place at close to the same time and to result in a cache hit for operand tile B2. Similarly, work tiles 5 and 8 would be able to use the cache for their shared A tile. 
In this case, the data-parallel stage just consists of 1 wave, but a larger GEMM with more work tiles would have a longer data-parallel stage with more use of the cache. The tile scheduler abstraction Since the problem of partitioning and scheduling work is largely separate from the per-CTA memory and compute operations, GEMM implementations like CUTLASS will often wrap them in an abstraction called a tile scheduler. (This is more general than GEMM — for example, FlashAttention-3 also supports persistent kernels with tile scheduler classes.) In the next section, we examine CUTLASS’s implementation specifically; here, we outline in general the responsibilities of a tile scheduler. First, the grid shape of our kernel depends on tile scheduling. So the tile scheduler is responsible for determining the grid size of the kernel. For a non-persistent kernel, this will be the same as the logical grid and dependent on the problem size; for a persistent kernel, it will be fixed and likely equal to the number of SMs. We query the tile scheduler for the grid size at the start, and use it for the kernel launch. In-kernel, each thread will construct an instance of the tile scheduler. The mainloop and epilogue will now be wrapped in a work loop over tiles provided by the scheduler, which might look like this: for (auto worktile = scheduler.get_initial_tile(); scheduler.is_valid(worktile); worktile = scheduler.get_next_tile(worktile)) { auto [m_block, n_block, k_block_start, k_block_stop] = worktile.get_block_coord(); for (k_block = k_block_start; k_block < k_block_stop; ++k_block) { // mainloop } // epilogue } A simple way to implement these iterator primitives is to have the scheduler maintain a linear index into worktiles. For a persistent kernel, each CTA initially receives the worktile at index blockIdx.x (which is just the linear index of the underlying SM); it advances to the next tile by stepping forward by gridDim.x (the number of SMs); and the tile is valid as long as its index doesn’t exceed the total number of tiles. The work of mapping the linear index to the actual (M, N) tile coordinates is delegated to the worktile object. This is already enough for a persistent data-parallel schedule, but more sophisticated schedules demand more functionality. For Stream-K, the size of the work assignment in the K direction depends on the tile, meaning that the worktile should really provide the kernel with four coordinates, as in the code listing. For both Stream-K and Split-K, some or all CTAs will output partial results that then have to be aggregated, with the following implications. As the CUTLASS implementation shows, there are a number of improvements that can be made to this simple outline, including having the scheduler decide in what order to launch tiles, using heuristics to fall back from Stream-K to Split-K or data-parallel modes, and, on Hopper, properly using clusters. We examine these next. Our code sample on GitHub provides three examples of schedulers: a trivial non-persistent scheduler that assigns 1 worktile to each CTA over a grid determined by the problem shape; a data-parallel persistent scheduler; and a Stream-K hybrid scheduler which incorporates a some but not all of CUTLASS’s optimizations. 
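For reference, here is a minimal sketch of the persistent data-parallel scheduler just described: each CTA starts at the worktile with linear index blockIdx.x, strides forward by gridDim.x, and stops once the linear index exceeds the tile count. This is a simplified illustration with our own names, not the CUTLASS implementation or the exact code from the sample; in a data-parallel schedule each tile simply covers the full K range.

// Minimal persistent data-parallel tile scheduler (sketch).
struct WorkTile {
  int linear_idx;
  int m_block, n_block;   // tile coordinates in the output matrix
};

struct DataParallelScheduler {
  int tiles_m, tiles_n;   // number of output tiles along M and N

  __device__ WorkTile get_initial_tile() const {
    return make_tile(blockIdx.x);                    // one starting tile per CTA
  }
  __device__ WorkTile get_next_tile(const WorkTile& t) const {
    return make_tile(t.linear_idx + gridDim.x);      // stride by the number of CTAs
  }
  __device__ bool is_valid(const WorkTile& t) const {
    return t.linear_idx < tiles_m * tiles_n;
  }
  __device__ WorkTile make_tile(int idx) const {
    // map the linear index to (m, n) tile coordinates, rasterizing along N
    return WorkTile{idx, idx / tiles_n, idx % tiles_n};
  }
};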
In practice, we found that many of CUTLASS’s optimizations were necessary to get reasonable performance: notably, the additional GMEM accesses and smaller tile sizes caused by reduction are a real cost, and the boundaries of Stream-K work assignments need to be carefully tweaked to minimize this cost. Some performance metrics for the Stream-K tile scheduler are shown below. Relative to a data-parallel scheduler, our implementation of Stream-K performs well early in each wave, reducing the wave quantization effect, but its performance suffers as the partial tail wave starts to fill. The “Heuristic” curve uses CUTLASS’s heuristic of switching from Stream-K to data-parallel once the tail wave is at least half full. This is clearly a good choice. Conclusion In this article we have discussed wave quantization and how it affects the performance of GEMM. We observed the significant performance fluctuation from wave quantization in the GEMM implementation we created in part 2. Then we discussed the various strategies to combat wave quantization, with a focus on Stream-K. Finally, we presented a version of the Stream-K tile scheduler in order to remove the effects of wave quantization in our GEMM implementation. This concludes our three part series on implementing a performant Hopper-based GEMM with CUTLASS/CuTe abstractions. Appendix: Stream-K in CUTLASS This appendix explores some of the finer detail of Stream-K in CUTLASS: how to use it, its performance relative to other schedulers, and some of the optimizations used in writing it. Using Stream-K with the GEMM API First let’s discuss how to use the Stream-K scheduler with the CUTLASS 3.X GEMM API. We’ll start with a brief review of the CUTLASS 3.X GEMM API. The discussion will be limited to those parts that pertain to Stream-K, but you can find more details and examples on the CUTLASS repo. The code samples here are based on CUTLASS example 48. The CUTLASS GEMM API is organized into three parts: These are created using respective CollectiveBuilders, which provide developers the ability to configure the GEMM kernel. Developers can also choose to let CUTLASS automatically choose the appropriate configuration according to an internal heuristic. Here is the GEMM kernel using this auto feature: using CollectiveEpilogue = typename cutlass::epilogue::collective::CollectiveBuilder< cutlass::arch::Sm90, cutlass::arch::OpClassTensorOp, TileShape, ClusterShape, cutlass::epilogue::collective::EpilogueTileAuto, ElementAccumulator, ElementAccumulator, ElementC, LayoutC, AlignmentC, ElementC, LayoutC, AlignmentC, cutlass::epilogue::collective::EpilogueScheduleAuto >::CollectiveOp; using CollectiveMainloop = typename cutlass::gemm::collective::CollectiveBuilder< ArchTag, OperatorClass, ElementA, LayoutA, AlignmentA, ElementB, LayoutB, AlignmentB, ElementAccumulator, TileShape, ClusterShape, cutlass::gemm::collective::StageCountAutoCarveout< static_cast<int>(sizeof(typename CollectiveEpilogue::SharedStorage))>, cutlass::gemm::collective::KernelScheduleAuto >::CollectiveOp; using GemmKernel = cutlass::gemm::kernel::GemmUniversal< Shape<int,int,int>, // Indicates ProblemShape CollectiveMainloop, CollectiveEpilogue >; To specify the GEMM kernel to use Stream-K, we need to specify GemmKernel to use StreamKScheduler. using GemmKernel = cutlass::gemm::kernel::GemmUniversal< Shape<int,int,int>, // Indicates ProblemShape CollectiveMainloop, CollectiveEpilogue, cutlass::gemm::StreamKScheduler >; In addition, only certain mainloop and epilogue schedules support Stream-K. 
We will use TmaWarpSpecializedCooperative for both Mainloop and Epilogue. using CollectiveEpilogue = typename cutlass::epilogue::collective::CollectiveBuilder< // ..... // cutlass::epilogue::TmaWarpSpecializedCooperative >::CollectiveOp; using CollectiveMainloop = typename cutlass::gemm::collective::CollectiveBuilder< // ..... // cutlass::gemm::KernelTmaWarpSpecializedCooperative >::CollectiveOp; This GEMM kernel is now set up to use the Stream-K scheduler. An important note about the Stream-K scheduler is that it does not always use Stream-K partitioning. Instead, by default it will use an internal heuristic to determine what the best partitioning scheme is. The CUTLASS scheduler has four defined options for its DecompositionMode. We will discuss the decomposition modes in more depth later. For now, we can force it to use the Stream-K decomposition by setting it in the scheduler arguments. We can do this as part of the Gemm arguments. using DecompositionMode = typename cutlass::gemm::kernel::detail::PersistentTileSchedulerSm90StreamKParams::DecompositionMode; DecompositionMode decomp = DecompositionMode::StreamK; int splits=1; typename Gemm::GemmKernel::TileScheduler::Arguments scheduler_args; scheduler_args = { splits, static_cast<int>(options.swizzle), options.raster, decomp}; typename Gemm::Arguments arguments{ cutlass::gemm::GemmUniversalMode::kGemm, {options.m, options.n, options.k}, {block_A.get(), stride_A, block_B.get(), stride_B}, {{options.alpha, options.beta}, block_C.get(), stride_C, block_D.get(), stride_D}, hw_info, scheduler_args }; In addition to the DecompositionMode, the scheduler arguments also take in options related to Split-K and threadblock rasterization (which we also discuss in the Appendix below). Finally, with the arguments and GemmKernel ready, we can run GEMM using Stream-K partitioning. using Gemm = cutlass::gemm::device::GemmUniversalAdapter<GemmKernel>; Gemm gemm; size_t workspace_size = Gemm::get_workspace_size(arguments); cutlass::device_memory::allocation<uint8_t> workspace(workspace_size); CUTLASS_CHECK(gemm.can_implement(arguments)); CUTLASS_CHECK(gemm.initialize(arguments, workspace.get())); CUTLASS_CHECK(gemm.run()); Stream-K Performance Now that we’ve discussed how to run GEMM with specific schedulers, let’s see how they perform given different input sizes. Once again, we will fix M and K and then vary N in increment of the tile size, using tiles-per-SM, (M/bM * N/bN)/num_SMs, for the x-axis. We benchmarked the three modes Stream-K, Split-K and DataParallel for comparison. In addition, we also repeated this process for different K values. The benchmark numbers were taken on an H100 PCIe GPU. The vertical dotted lines denote the wave boundaries. As expected, there is a sharp drop in performance for the DataParallel mode when going over wave boundaries. This is the wave quantization effect. The DataParallel mode matches or outperforms all other modes when the last wave is mostly full (tiles-per-SM is just under a whole integer), and underperforms when it is nearly empty (tiles-per-SM is just over a whole integer). Finally, we can see that the wave quantization effect is the most pronounced when the total number of waves is low. With Split-K, the effect of wave quantization is lessened. Split-K effectively multiplies the number of worktiles by a factor of K, so the number of waves goes up by a factor of K as well. You can see this in the graph, as the performance of Split-K with 2 splits oscillates with twice the frequency of DataParallel. 
Unfortunately, the additional overhead of reduction seems to outweigh the benefits for most cases, and Split-K only rarely performs well relative to the other two schedulers (typically when there are so few tiles that the GPU would be severely underutilized without splitting). The graph only shows Split-K with K of 2 in order to keep it uncluttered; higher values of K generally performed worse than K=2 except in very small X. Stream-K performance, by contrast, does not show wave quantization, fluctuating very little over the changing wave count. The Stream-K partitioning matches or outperforms Split-K in general, and beats out DataParallel partitioning at larger K values when the last wave is near empty. There is one point where the DataParallel and Stream-K get identical result at N=7296, which corresponds to X=1024*7296/114=4. Because the tiles were evenly distributable to the CTAs, there are no partial tiles or reduction required. So DataParallel and Stream-K get identical results. In addition to the three explicit decomposition modes, CUTLASS also has the Heuristic mode. The exact heuristic is discussed in a later section, but we can see how well it does against Stream-K and DataParallel (split-K dropped). As you can see, the CUTLASS Heuristic mode does a very good job predicting the best performing decomposition mode. It selects the DataParallel mode when the quantization effect is low, and selects Stream-K when it is high. As the Heuristic mode is the default, you are generally better off not specifying the decomposition mode and letting CUTLASS decide. CUTLASS implementation details Next let’s discuss the details of CUTLASS’s version of the stream-K scheduler (as of CUTLASS 3.6). Schedule. CUTLASS implements a version of the hybrid schedule explained above, in which the scheduler dedicates at most two waves to Stream-K work before organizing the rest of the work in a data-parallel fashion. As the data-parallel waves tend to work on the same K offsets at the same time, the L2 cache performance should be improved. Reduction. By default, CTAs collaborating on the same output tile do so in “turnstile” fashion. Suppose that a given output tile is worked on by CTAs 0, 1, …, n, ordered increasingly by the ranges of K-indices assigned. First, CTA 0 will compute its result, and write it to a global memory workspace. CTA 1 waits at a barrier for CTA 0 to finish writing, and then reduces its output into the same global memory workspace. CTA 2 waits for CTA 1, then reduces its output, and so on. Finally, CTA n waits for CTA n-1, but instead of reducing into the workspace, it reduces from the workspace into its accumulators, and finally computes the epilogue and writes to the output tensor. In an alternative “nondeterministic mode” (specified by the user with the argument ReductionMode::Nondeterministic), CTAs 1, …, n-1 no longer wait for each other, but simply atomically reduce into the workspace. All CTAs still have to wait for CTA 0, to initialize the workspace; CTA n still has to wait for CTAs 0, …, n-1. The nondeterminism results from the fact that reductions 1, …, n-1 can now occur in any order (and floating-point addition is nonassociative). Decomposition modes. The CUTLASS stream-K scheduler also supports Split-K and data-parallel persistent schedules, which the user can select using the decomposition_mode argument. (Passing an argument splits not equal to 1 will force the scheduler to run split-K with the given number of splits.) 
The user can also select DecompositionMode::Heuristic, in which the scheduler can fall back from stream-K to one of the simpler schedules: if either there is no wave quantization, or the tail wave is at least half full, then the scheduler falls back to data-parallel; if the number of CTAs assigned to stream-K work is a multiple of the number of stream-K tiles they should work on, then the scheduler falls back to split-K. Since Stream-K carries some extra overhead related to reduction and synchronization, it makes sense to fall back to data-parallel if wave quantization is not going to be an issue. From our tests, this heuristic almost always made the best choice across a variety of problem sizes. Threadblock rasterization. An advantage of persistent kernels independent of the wave quantization issue is the ability to choose the order in which worktiles are launched. For GEMM, this primarily matters because of cache performance: if worktiles in the same row or column (the same M or N index) of the output matrix are being worked on at around the same time, they will load data from one of the operand matrices from GMEM at the same time, which is likely to hit in L2 cache. Thus, the simplest way to improve cache performance for a persistent kernel is to launch worktiles in order along either the M or N mode. For example, if we launch worktiles along the N mode, keeping M fixed for as long as possible, data from operand matrix A will often be found in the cache. In CUTLASS, one can pass a raster_order argument to the scheduler, with RasterOrderOptions::AlongM and AlongN giving this behavior. Typically, one would like to raster along the shorter of the two modes, measured in units of worktiles; RasterOrderOptions::Heuristic will figure this out automatically. Figure 7 shows thread block rasterization for the case M<N with 6 SMs. RasterOrderOptions::Heuristic will pick AlongM in this case. For example on wave 0, the SMs work on tiles 0 through 5, and the number of operand loads from HBM reduces from an a priori count of 12 to 6 (assuming this fits into L2 cache). A more advanced technique is to try to consider proximity in both dimensions. For example, in Figure 7 the work tiles are adjacent in the M direction, but are offset by M in the N direction. We can improve on this by going along the N dimension for 2 tiles and then moving along the M direction. This is called threadblock swizzling, specifically for swizzle=2. We can specify the number of tiles to swizzle with the argument max_swizzle_size, but as the name suggests the scheduler may choose a smaller swizzle size if the problem is not big enough. The possible swizzle sizes are 1 (no swizzling), 2, 4, or 8. Figure 8 shows the order in which work tiles would be processed with AlongM raster order and swizzle size of 2 or 1. (Note that this is not the same as the XOR swizzle discussed in this post.) In Figure 8, each wave in swizzle=2 loads 5 operand tiles, whereas each wave in swizzle=1 loads 7 (once again assuming everything fits in L2). So with 6 waves, there are 30 operand tile loads for swizzle=2 and 42 operand tile loads for swizzle=1. The correct swizzle size for a given problem varies a lot with the problem and device characteristics. However, generally swizzle is only effective when there are enough tiles in the rasterized direction. More precisely, we would want the number of M tiles to be greater than SM/swizzle; otherwise, all the operand tiles in the rasterized direction are loaded anyway. 
With 114 SMs, the respective cutoffs for swizzles of 2, 4, and 8 are 57, 31, and 15. The figure above reflects these cutoffs, with the swizzle performing better once there are enough tiles. But as mentioned before, the number of tiles is not the only consideration; other factors like L2 cache size can further impact swizzle performance. So we recommend using the CUTLASS profiler to find the best swizzle number for your workload. Clusters and multicast. The Hopper architecture introduced threadblock clusters, groups of CTAs that are scheduled simultaneously on the same GPU processing cluster (GPC), with fast access to each other’s shared memory. Most importantly for the present discussion, TMA loads can be multicast, simultaneously loading the same data to the SMEM of all CTAs in a cluster in a single operation. This has some deep implications for tile scheduler construction. We said that it’s important for cache performance to try to schedule worktiles in the same row or column at around the same time. But it’s also important to try to assign them to the same cluster, as then the data from one of the operand matrices can be multicast. Moreover, for stream-K work, CTAs in a cluster should ideally be working on the same K offsets at the same time (i.e., the problem of skew that justified the hybrid schedule is also important within clusters). CUTLASS handles this elegantly. First, the entire schedule is constructed by dividing the output matrix into clusters of worktiles, rather than single worktiles: for example, if the cluster shape is 2×4, then during each data-parallel wave, each cluster will work on a rectangular 2×4 region of tiles in the output matrix. Second, for the stream-K phase, the scheduler attempts to divide the clusters doing stream-K work evenly into “groups”, where each group is assigned work with the same K-offset at the same time. The full algorithm is somewhat complicated, but fortunately, the user does not really have to think about it beyond specifying the cluster shape. Share this: Discover more from Colfax Research Subscribe to get the latest posts sent to your email. Type your email… Subscribe Posted in Comments Leave a Reply Cancel reply Your email address will not be published. Required fields are marked * Comment * Name Email Website Save my name, email, and website in this browser for the next time I comment. Δ Copyright © 2023-2024 Colfax International. All rights reserved.
FlashAttention-3 for Inference: INT8 Quantization and Query Head Packing for MQA/GQA (External)
In this blog post, presented on the Character.AI research blog, we explain two techniques that are important for using FlashAttention-3 for inference: INT8 quantization and query head packing for MQA/GQA. We also give microbenchmark results for both prefill and decode-type attention workloads, measured on an NVIDIA H100 SXM5 GPU. Joint work with Character.AI.
2805 Bowers Ave, Santa Clara, CA 95051 | [email protected] Epilogue Fusion in CUTLASS with Epilogue Visitor Trees Welcome to a supplemental article for our tutorial series on GEMM (GEneral Matrix Multiplication). Posts in the main series (1, 2) have discussed performant implementations of GEMM on NVIDIA GPUs by looking at the mainloop, the part responsible for the actual GEMM computation. But the mainloop is only a part of the CUTLASS workload. In this article, we will turn our focus to the epilogue, where post-processing (e.g., elementwise activations, scaling) and data store take place. Specifically, we’ll examine CUTLASS’s recipe for epilogue fusion, the Epilogue Visitor Tree (EVT) introduced by Chen et al., 2024. This post is structured in sections of increasing complexity. After an overview of the epilogue phase and EVT, we will showcase how to add a simple EVT to a CUTLASS GEMM kernel using both EVTs defined by CUTLASS and hand-constructed ones. We then give an extended example of developing an EVT for a novel use case that introduces a few more advanced tools: reduction operations and topological visitors. The code sample and documentation is available on our GitHub. Finally in the appendix we do a deep dive into the CUTLASS source code to explain how CUTLASS fuses the EVT into its epilogues. The epilogue phase and EVT In a kernel, the epilogue phase follows the mainloop phase and handles the post-processing of the output tensor. In the most trivial case, this phase simply stores the matrix product to global memory (GMEM). However, many AI workloads require additional processing of the output: addition of a bias term, calculation of elementwise activation functions like GELU, or application of more complicated reduction-type functions like layernorm or rmsnorm. These calculations may also require additional data to be loaded, such as when applying a residual connection or calculating loss using a set of ground-truth labels. It is typically beneficial to incorporate — or fuse — such operations into the GEMM kernel’s epilogue. There are several advantages to the fused kernel over having an additional kernel handle the post processing. This process of incorporating additional processing in between the GEMM mainloop and the kernel exit is called epilogue fusion. One difficulty in implementing epilogue fusion is that there are many types of operations to fuse. An epilogue may contain an essentially arbitrary sequence of computations, and may require the kernel to load or store additional data. Writing a fused kernel with each different epilogue pattern would quickly lead to an unmanageable explosion in the number of kernels. Moreover, programmers may want to experiment with novel epilogues, which for a properly fused epilogue would ordinarily require extensive changes to kernel code. To address this, CUTLASS uses a design pattern called the visitor pattern. In this pattern, the various types of epilogues are implemented in specialized epilogue visitor objects. The CUTLASS GEMM kernel is designed to accept an arbitrary epilogue visitor object to process the output data. The epilogue visitor will then visit the output data and process it. With this model, adding a new epilogue only requires creating a new specialized visitor class and swapping it out with the current visitor. Since an epilogue can involve a complex sequence of operations, it’s important that epilogue visitors be composable. 
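To illustrate the idea in isolation, here is a schematic C++ sketch of an epilogue templated on a visitor. This is not CUTLASS's actual interface (its visitors operate on vectorized fragments, coordinates, and callbacks); it only shows why swapping the visitor type is enough to change the fused post-processing.

// Schematic only: the epilogue is generic over the visitor type, so the same
// kernel code can apply any post-processing that follows the visitor interface.
template <class Visitor>
__device__ void epilogue(Visitor const& visitor, float const* acc_frag,
                         int frag_size, float* out_frag) {
  for (int i = 0; i < frag_size; ++i) {
    // visit() consumes one accumulator value and produces one output value;
    // what it computes (bias add, activation, scaling, ...) is up to the visitor.
    out_frag[i] = visitor.visit(i, acc_frag[i]);
  }
}

// Example visitor: D = ReLU(alpha * acc + beta * C).
struct LinCombReLUVisitor {
  float alpha, beta;
  float const* C_frag;
  __device__ float visit(int i, float acc) const {
    float v = alpha * acc + beta * C_frag[i];
    return v > 0.f ? v : 0.f;
  }
};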
An epilogue visitor tree (EVT) is a collection of visitors organized in a tree that collectively operate as a single visitor. Each leaf node in the tree represents a basic operation, such as add, multiply, load, or store. Non-leaf nodes are generally tree visitors (we’ll discuss an exception later). When a tree visitor visits the data, it recursively delegates to its children, using their outputs as inputs to its own operation. The output of the root of the tree is finally stored to GMEM. A basic example that computes is shown in Figure 1. The epilogue visitor tree abstraction is supported by CUTLASS in two ways. First, commonly encountered epilogues have pre-built visitor trees with user-friendly aliases. Second, developers can write their own visitor trees for customized epilogues. CUTLASS will then produce a fused kernel from the provided tree. We’ll walk through simple examples of both approaches, and then discuss how to create a more complex tree. Using Epilogue and EVT In this article we will be focusing on the CUTLASS 3.X syntax for EVT, which currently only supports the NVIDIA Hopper™ architecture and only works with warp-specialized kernels. For older generations, use the visitors in 2.X syntax — see cutlass/epilogue/threadblock/fusion/visitor_2x.hpp, and Example 35 for usage. The basic way to construct a kernel in the CUTLASS 3.X API is in terms of a CollectiveMainloop and a CollectiveEpilogue. using GemmKernel = cutlass::gemm::kernel::GemmUniversal<     cute::Shape<int,int,int,int>, // ProblemShape [M,N,K,L]     CollectiveMainloop,     CollectiveEpilogue >; We will of course be focusing on the CollectiveEpilogue half, and will only discuss the rest in relation to the epilogue. For more information on the Mainloop and GEMM API in general see the CUTLASS documentation on the 3.X API. Complete examples of kernel definitions can be found in CUTLASS’s example 49 and on our GitHub. CUTLASS offers multiple different ways to create a CollectiveEpilogue, which we will go over in order of increasing complexity. DefaultEpilogue For many common epilogues that only use elementwise operators, the shortest path to epilogue fusion is the DefaultEpilogue. One can define a CollectiveEpilogue as follows. using CollectiveEpilogue = cutlass::epilogue::collective::DefaultEpilogue< cutlass::gemm::TagToStrideC_t<LayoutC>, cutlass::gemm::TagToStrideC_t<LayoutC>, cutlass::epilogue::thread::LinearCombination<ElementC, 1, ElementAccumulator, ElementAccumulator>>; The final argument can be replaced with other elementwise operators, such as LinearCombinationReLU. You can find more operators in include/cutlass/epilogue/thread. One interesting note here is that the DefaultEpilogue does not use the visitor tree. Instead it simply loops over the output fragment (data) and applies the specified operation. So it is not designed for complex epilogues. Built-in EVTs If you need something more complex, then you will need to use EVT. CUTLASS provides a variety of common operations that are built using EVT, which can be found at include/cutlass/epilogue/fusion/operations.hpp. To use one of the built-in EVTs, or to use any EVT for that matter, we need to turn to the CollectiveBuilder for the epilogue. 
using EVTOp = cutlass::epilogue::fusion::LinCombEltAct< cutlass::epilogue::thread::ReLU, ElementD, ElementCompute, ElementC, ElementScalar>; using CollectiveEpilogue = typename cutlass::epilogue::collective::CollectiveBuilder< cutlass::arch::Sm90, cutlass::arch::OpClassTensorOp, Shape<_128,_128,_64>, Shape<_1,_1,_1>, // CTA tile and cluster shapes cutlass::epilogue::collective::EpilogueTileAuto, // automatically compute epilogue tile size ElementAccumulator, ElementCompute, // dtypes ElementC, LayoutC, AlignmentC, ElementD, LayoutD, AlignmentD, EpilogueScheduleType, // need TMA warp-specialized to use EVT EVTOp >::CollectiveOp; The above code example implements LinearCombination with ReLU activation using EVT. For EVTOp we've selected the appropriate operation from the cutlass::epilogue::fusion namespace. The template arguments are of course dependent on the operation in question, so refer to operations.hpp for more detail on the specific operation. For our example with LinCombEltAct, the first argument is the activation function (see cutlass/epilogue/thread/activation.h for more options), and the rest are the datatypes of the input and output as well as the datatype to use for accumulation. The objects found in operations.hpp are placeholders for the actual operations. To see the structure of the tree itself, we need to turn to sm90_callbacks_tma_warpspecialized.hpp, which maps these placeholders to their architecture-specific EVT implementations. We will discuss the tree implementation in the next section. This epilogue requires additional arguments, the scalars alpha and beta. For GEMMs built with the CollectiveBuilder, these arguments can be specified along with the rest of the arguments to the kernel when the kernel is initialized. Arguments to the kernel look like typename Gemm::Arguments arguments { cutlass::gemm::GemmUniversalMode::kGemm, // GEMM mode (batched, grouped, etc.) problem_size, {block_A.get(), stride_A, // pointers and strides for mainloop block_B.get(), stride_B}, {{}, // arguments.epilogue.thread, modified below block_C.get(), stride_C, // pointers and strides for epilogue block_D.get(), stride_D}, hw_info // hardware info }; Arguments to the EVT are found inside arguments.epilogue.thread. For the built-in EVTs, this is a flat struct of conveniently named arguments, so that we can write: arguments.epilogue.thread.alpha = alpha; arguments.epilogue.thread.beta = beta; Gemm gemm; gemm.initialize(arguments, workspace_ptr); // workspace_ptr points to additional GMEM workspace, allocated elsewhere Other EVT arguments structs can be found inside sm90_callbacks_tma_warpspecialized.hpp. Looking at this file, we see a few more options available for Sm90LinCombEltAct. First, instead of specifying alpha and beta by value at initialization, we can pass pointers alpha_ptr and beta_ptr to their locations in device global memory. Second, some activation functions also require other arguments. For example, suppose that instead of applying ReLU, we wanted to clamp the output to between -1.0 and 1.0. These are the arguments lower_bound and upper_bound to cutlass::epilogue::thread::Clamp, and can be passed to arguments.epilogue.thread as the struct activation. Unpacking the structure of an EVT If none of the built-in operations suit your needs, then you need to create a custom EVT by constructing the visitor tree yourself.
To discuss this process, we will look at how the built-in LinCombEltAct is constructed because these built-in operations are created using the same building blocks as one would use for a custom EVT. The LinCombEltAct we see in operations.hpp maps to the Hopper specific implementation defined in sm90_callbacks_tma_warpspecialized.hpp. using Sm90LinearCombination = Sm90EVT<Sm90Compute<homogeneous_multiply_add, ElementOutput, ElementCompute, RoundStyle>, // beta * C + (alpha * acc) Sm90ScalarBroadcast<ElementScalar>, // beta Sm90SrcFetch<ElementSource>, // C Sm90EVT<Sm90Compute<multiplies, ElementCompute, ElementCompute, RoundStyle>, // alpha * acc Sm90ScalarBroadcast<ElementScalar>, // alpha Sm90AccFetch // acc > >; using Sm90LinCombEltAct = Sm90EVT<Sm90Compute<ActivationFn, ElementOutput, ElementCompute, RoundStyle>, // activation(beta * C + (alpha * acc)) Sm90LinearCombination<ElementCompute, ElementCompute, ElementSource, ElementScalar, RoundStyle> // beta * C + (alpha * acc) >; The core of the CUTLASS visitor tree is Sm90EVT, which is an alias for Sm90TreeVisitor. This class represents a non-leaf node in the tree. The first argument is the operation associated with this node, while all arguments that follows are the child nodes. The template arguments allow for an arbitrary number of nodes — for example, the activation function in Sm90LinCombEltAct takes one node, while the fused-multiply-add operation in Sm90LinearCombination takes three nodes. Sm90Compute is a node op that defines a node to be a compute node. The first template parameter is an elementwise operation (e.g. ReLU, FMA) and the others determine the datatypes used and the floating-point rounding style. We see several other nodes used in this tree. Values from C and from the accumulator (AB) are obtained using Sm90SrcFetch and Sm90AccFetch respectively. Scalars are obtained using Sm90ScalarBroadcast. For a full documentation of available nodes, see our GitHub. We tend to think of epilogue operations in a slightly different way — the computation graph on the right of Figure 2. In this graph, the flow of computation moves downwards, and non-leaf nodes are simply operations accepting input from elsewhere. As we’ll discuss later, such a graph need not be a tree at all. Epilogue visitor trees are always trees; we draw them with the root at the top, so that the flow of recursion moves downwards, and non-leaf nodes are tree visitors. Each tree visitor performs the operation specified by its leftmost child on the rest of its children. (To ward off a potential confusion on terminology, we emphasize that this leftmost child of a tree visitor in the EVT is always distinguished as the tree visitor’s node operation and is not referred to as a child node with respect to the templated CUTLASS code. Note that in this article, whether or not we include the node op among the children can always be inferred from context; for example, we separate the two in the remainder of this subsection.) As with the built-in EVT, we need to pass in the arguments alpha and beta to run the GEMM. However, we can no longer use a flat named-argument interface for the custom EVT because there may be multiple instances of the same type of nodes. Instead, the arguments form a tree that reflects the structure of the EVT. Sm90EVT nodes take their arguments in the form: {first_child_args, ... last_child_args, node_op_args} Other nodes expect their own structure for arguments, also documented on our GitHub. 
For this tree, we can write: arguments.epilogue.thread = { // unary op: activation(beta * C + (alpha * acc)) { // ternary op (FMA): beta * C + (alpha * acc) {{beta}, {beta_ptr}}, // args to Sm90ScalarBroadcast {}, // no args to Sm90SrcFetch (kernel knows about C) { // binary op : alpha * acc {{alpha}, {alpha_ptr}}, // args to Sm90ScalarBroadcast {}, // no args to Sm90AccFetch {} // op args: multiplies }, // end binary op {} // op args: multiply_add }, // end ternary op activation_args // op args: activation }; // end unary op Note that the node_op_args to a tree visitor node appear after the arguments to all children — whereas in the template parameters to Sm90EVT, the node operation appears before the children. Thus, the trees for operations and arguments don't have the same structure. The relation between the two is shown in Figure 3. Most of the operations we've used so far don't require arguments, so the tree is mostly empty. The activation_args are additional parameters to the activation function, which may also be empty. Finally, Sm90ScalarBroadcast expects either an array of scalars or an array of pointers to scalars, which are then reduced before broadcasting. In this case, these arrays have length 1. For a more complete documentation of argument structures, see our GitHub. A more complex example: binary cross-entropy loss Let's develop a more complex example that has real-world applicability and isn't predefined by CUTLASS: binary cross-entropy loss. As motivation, suppose that we're training a machine learning model to detect objects in images. For each image supplied, the model should label whether it contains a person, a dog, a bus, and so on. A given image could contain any number of these objects, and there are a large number of objects to be considered. In this situation, called extreme multi-label classification, one potential way to evaluate the model is to treat each label as a separate binary classification problem, evaluate the model's performance on each problem independently, and aggregate the results. This would lead us to the following loss function: L = -(1/n) * sum_{i,j} [ y_ij * log(sigmoid(x_ij)) + (1 - y_ij) * log(1 - sigmoid(x_ij)) ], where x_ij is the model's score (logit) for label j on training example i, y_ij in {0, 1} is the corresponding ground-truth label, and n normalizes the sum. In a real classification model like XML-CNN, the matrix X = (x_ij) might itself be obtained as the output of a linear layer, X = A * W + b, where W and b are parameters of the model (the weights and bias of the last layer) and A holds the outputs of the model's previous layer on the current set of training examples. In particular, the loss computation occurs shortly after a GEMM, which makes it a good candidate for epilogue fusion. Chen et al. used the loss gradient computation as one of their graph compiler benchmarks. Taking the gradient allows one to skip computing the loss and results in a dramatically simpler computation. Here, we will compute the loss, not its gradient, to provide a more interesting example. A direct interpretation of the loss formula would be the graph in Figure 4. (For simplicity, we'll ignore the -1/n scaling factor from now on.) This graph presents a set of new complications: it is not a tree, since intermediate values such as the logits and their sigmoid feed into more than one downstream operation, and the outer sum requires a reduction across the whole output rather than a purely elementwise store. Topological visitors EVTs are computation graphs represented as trees. During the visit process, the tree is traversed recursively; each tree visitor node calls the visit method of each of its child nodes and combines their results using its specified node operation. Importantly, each node is expected to be visited only once. But in general, a computation graph need not be a tree, but a directed acyclic graph. In practical terms, this means that the output of a node could be required by multiple other nodes.
If we still represent such a graph as a tree just with the tree visitors, we would have to effectively duplicate the required node, once for each parent node that needs its output. This approach is inefficient as it would lead to a lot of repeated work. Instead, we use a node called a topological visitor. While a tree visitor is used to represent a single operation in a computation graph, a topological visitor represents any subgraph of this graph. A topological visitor has a child for each node in its subgraph. During the visit process, it delegates to its children in topological order, populating each child's input with the outputs of children that were already visited. "Topological order" here means that no child node is visited before any of its predecessors in the computation graph — in other words, by the time a descendant is visited, all of its inputs must be ready. The return value of a topological visitor is the return value of the last node it visits. A simple example is shown in Figure 5. This computation graph has two nodes, 1 and 2, that both require the result of node 0, so we should structure the associated EVT with a topological visitor. Node 0 does not need any input, as it just returns the accumulator values. Nodes 1 and 2 each take one input, which is the output of node 0. Node 3 takes 2 inputs, which are the outputs of nodes 1 and 2. Finally, the topological visitor returns the output of node 3. The EVT is then a tree with a root (the topological visitor) and 4 leaves (the numbered nodes of the computation graph). CUTLASS syntax for the topological visitor is given on the right-hand side of the figure. The first template parameter is the data type of the computation. The second is a sequence of tuples, which we will return to shortly. The remaining template parameters are the nodes visited (which could themselves be tree or topological visitors). The nodes are enumerated in the order that they appear in the argument list, with the first being node 0. Returning to the tuples, they encode the dependency structure: the Nth tuple lists the nodes whose outputs will be used as input to node N. The ordering of both the tuples and the nodes is crucial, because the topological visitor visits the nodes in the order of the template arguments. Valid orderings correspond to valid topological orderings of the subgraph in question. For example, it is possible to swap node 1 (ReLU) and node 2 (Sigmoid) in the template argument list, but you can't swap node 2 (Sigmoid) and node 3 (plus) because node 3 needs the result of node 2. To summarize, the purpose of topological visitors is to turn non-tree DAGs into trees. This means that, as a rule of thumb, a topological visitor only needs to visit a non-tree portion of a computation graph. As in Figure 5, this portion is usually "between a branch and a merge", starting where multiple computation streams are generated and ending where they recombine. Constructing an EVT using topological visitors Using the topological visitor, we can reuse data from the accumulator and label matrix without having to reload it. Before writing the tree as a CUTLASS type, there are a few more adjustments we can make. Let's return to the loss formula, L = -(1/n) * sum_{i,j} [ y_ij * log(sigmoid(x_ij)) + (1 - y_ij) * log(1 - sigmoid(x_ij)) ]. Since each y_ij is either 0 or 1, only one of these terms is actually nonzero for any given (i, j), meaning that the term is equal to either log(sigmoid(x_ij)) or log(1 - sigmoid(x_ij)). Moreover, 1 - sigmoid(x) = sigmoid(-x) and log(sigmoid(-x)) = log(sigmoid(x)) - x. Thus the formula simplifies to L = -(1/n) * sum_{i,j} [ (y_ij - 1) * x_ij + log(sigmoid(x_ij)) ]. This simplification improves the calculation in a couple of ways, notably by replacing the two label-dependent logarithm terms with a single one. However, the new formula still underflows if -x_ij is large, so that sigmoid(x_ij) is rounded to 0 and the logarithm returns negative infinity.
There are several ways to handle this, but the simplest is probably clamping the output of the sigmoid so that it's never too close to 0. Making these changes leads us to the computation graph in Figure 6. This graph is still not a tree, so we have to use a topological visitor in the associated EVT. For complex graphs like this one, it can be helpful to abbreviate parts of the EVT with type aliases, as we've done below. using CMinus1 = Sm90EVT< Sm90Compute<cutlass::minus, ElementCompute, ElementCompute, RoundStyle>, Sm90SrcFetch<TC>, Sm90ScalarBroadcast<ElementScalar> >; using MatmulPlusBias = Sm90EVT< Sm90Compute<cutlass::plus, ElementCompute, ElementCompute, RoundStyle>, Sm90ColBroadcast<0, CtaTileShapeMNK, ElementBias, Stride<_1, _0, _0>>, Sm90AccFetch >; using TopoVisitor = Sm90TopologicalVisitor< ElementCompute, cute::tuple< cute::seq<>, cute::seq<>, cute::seq<0, 1>, cute::seq<0>, cute::seq<3>, cute::seq<4>, cute::seq<2, 5> >, MatmulPlusBias, CMinus1, Sm90Compute<cutlass::multiplies, ElementCompute, ElementCompute, RoundStyle>, Sm90Compute<cutlass::epilogue::thread::Sigmoid, ElementCompute, ElementCompute, RoundStyle>, Sm90Compute<cutlass::epilogue::thread::Clamp, ElementCompute, ElementCompute, RoundStyle>, Sm90Compute<FastLog, ElementCompute, ElementCompute, RoundStyle>, Sm90Compute<cutlass::plus, ElementCompute, ElementCompute, RoundStyle> >; using BCELossEVT = Sm90EVT< Sm90ScalarReduction< cutlass::plus, // register reduce function cutlass::atomic_add, // GMEM reduce function ElementScalar, ElementCompute, RoundStyle, Stride<_0, _0, _0>>, // no batching here TopoVisitor >; A few comments about this: the Arguments for a topological visitor are simply the list of Arguments to each node it visits, in the same order as the template parameters. The Arguments for the whole EVT are as follows: BCELossEVT::Arguments args_BCE = { { // TopoVisitor [(C - 1) * (bias + AB) + log(clamp(sigmoid(bias + AB)))] { // args to MatmulPlusBias = bias + AB (node 0) {d_bias_BCE.data().get(), 0, stride_bias_BCE}, // args to ColBroadcast {}, // args to AccFetch {} // op args: plus }, { // args to CMinus1 = C - 1 (node 1) {}, // args to SrcFetch {{ElementScalar(1.0)}}, // args to ScalarBroadcast {} // op args: minus }, {}, // op args: multiplies (node 2) {}, // op args: sigmoid (node 3) {0.001f, 0.999f}, // op args: clamp (node 4) {}, // op args: log (node 5) {}, // op args: plus (node 6) }, {d_result, 0, stride_result} // args to ScalarReduction }; For Sm90ColBroadcast, we need to provide a pointer to the bias vector, a default value to be used if this pointer is null, and the stride (dM, dN, dL), where dL can be nonzero in the case of a batched computation. Sm90ScalarReduction needs a pointer to store the result to, the reduction identity, and the stride, where again the stride allows for batching. Graph compilation and further optimizations As this example has shown, the process of constructing an EVT is not entirely trivial. Ideally one would like to describe the epilogue mathematically in a high-level language like Python, and have an automated system parse it into an EVT while applying obvious optimizations along the way. The authors of the EVT paper call such a system a deep learning compiler, and implement it in a torch.fx form on the paper's GitHub repo. CUTLASS provides a simple Python-to-C++ version as part of their Python interface. A few of the optimizations carried out by the EVT compiler algorithm ought to be considered by anyone writing epilogue visitor trees by hand, and developers working with complex epilogues would be well-served by reading the paper.
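As a sanity check while developing an EVT like this, it helps to have a host-side reference for the fused epilogue to compare the kernel's output against. The following is a small sketch of such a reference for the simplified loss above (our own helper, not part of the CUTLASS example; it assumes row-major M x N data and mirrors the clamp bounds 0.001 and 0.999 used in args_BCE).

// Host reference: result = sum_{i,j} [ (y_ij - 1) * x_ij + log(clamp(sigmoid(x_ij))) ],
// where x already includes the bias (x = A*B + bias) and y holds the labels in {0,1}.
#include <cmath>
#include <vector>

float bce_epilogue_reference(const std::vector<float>& x,  // logits, size M*N
                             const std::vector<float>& y,  // labels, size M*N
                             int M, int N) {
  float result = 0.0f;
  for (int i = 0; i < M; ++i) {
    for (int j = 0; j < N; ++j) {
      float xij = x[i * N + j];
      float sig = 1.0f / (1.0f + std::exp(-xij));
      sig = std::fmin(std::fmax(sig, 0.001f), 0.999f);  // matches the Clamp node
      result += (y[i * N + j] - 1.0f) * xij + std::log(sig);
    }
  }
  return result;  // the ScalarReduction node accumulates the same sum into d_result
}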
Conclusion In this article, we have presented a detailed discussion of epilogue fusion and epilogue visitor trees. We introduced epilogue fusion and its importance in high performance GEMM workloads. Then we discussed how EVTs provide a way to develop fusible epilogues independently of the kernel mainloop itself. Next we unpacked the different interfaces CUTLASS provides for epilogue fusion: the DefaultEpilogue, prebuilt EVT and custom EVT. Finally, we presented a complex real-world example by creating an EVT for binary cross-entropy. This example, as well as supplementary documentation on the various CUTLASS EVT nodes, is available on our GitHub. Appendix: CUTLASS’s implementation of EVT Advanced users writing their own custom kernels may also wish to use EVT to implement an epilogue or collection of epilogues with minimal modifications to the rest of their kernel. To this end, it’s worth examining how CUTLASS handles EVT objects in the TMA warp-specialized epilogue used in kernels constructed by its CollectiveBuilder. Understanding this structure could help developers interact with CUTLASS’s EVT objects, or write their own systems for epilogue fusion. Note that our discussion is accurate as of CUTLASS’s version 3.5; as these are internal details of the CUTLASS EVT implementation, they may change in future versions. We’ll begin by describing the high-level structure of the epilogue. Each CTA is responsible for an output tile of shape (CTA_M, CTA_N), and loops over subtiles of shape (EPI_TILE_M, EPI_TILE_N). The epilogue piggybacks off the warp-specialization of the mainloop; warps which were producer warps in the mainloop will load C and any auxiliary matrices required by the epilogue by executing the load() method, while consumer warps will perform computations and stores by executing the store() method. The two types of warps synchronize across a Pipeline called load_pipeline. (The load() method and pipeline aren’t used if it’s determined at compile time that the epilogue does not require loads.) Since consumer warps need to stage data in SMEM to perform TMA store, these warps synchronize with each other across another pipeline, store_pipeline. Corresponding to these two methods, the epilogue visitor tree supports two factory functions: get_producer_load_callbacks() and get_consumer_store_callbacks(). These produce objects pld_callbacks and cst_callbacks that will perform all operations required by the EVT in load() and store() respectively. Each EVT node defines some methods of one or both of these callbacks objects that allow it to act in the right place in the kernel. Let’s take a closer look at the store() function. Here is some pseudocode showing the basic structure. 
cst_callbacks.begin(); // column and row broadcasts copied from GMEM // outer loops over epilogue tiles for (int epi_n = 0; epi_n < EPI_N; ++epi_n) { for (int epi_m = 0; epi_m < EPI_M; ++epi_m) { cst_callbacks.begin_loop(epi_m, epi_n); // row broadcasts copied from SMEM if (is_producer_load_needed) wait for load_pipeline; // ensure that C and aux tiles are ready in SMEM if (is_C_load_needed) load tile of C from SMEM; // copy aux tensors from SMEM cst_callbacks.previsit(epi_m, epi_n, load_wait_state.count(), is_producer_load_needed); if (is_producer_load_needed) release and advance load_pipeline; // Inner loop over values held by thread for (int epi_v = 0; epi_v < EPI_V; ++epi_v) { // perform thread-local computations tRS_rCompute_frg(epi_v) = cst_callbacks.visit(tRS_rAcc_frg_mn(r2s_v + epi_v), epi_v, epi_m, epi_n); } // Reduce across CTA using current D subtile as SMEM workspace // After this executes, reduction results are held in tRS_rCompute_frg cst_callbacks.reduce(sD_epi(_,_,store_pipe_producer_state.index()), synchronize, epi_m, epi_n, is_last_iteration, tRS_rCompute_frg); if (D store is needed) do RMEM->SMEM copy of D; // handle other SMEM stores and any non-TMA GMEM stores cst_callbacks.postreduce(epi_m, epi_n, store_pipe_producer_state.count(), issue_smem_store); wait for SMEM stores to finish; if (D store is needed and this thread is leader) TMA store subtile of D; // callbacks now handle any *other* TMA stores cst_callbacks.tma_store(epi_m, epi_n, store_pipe_producer_state.count(), issue_tma_store); commit to and advance store_pipeline; acquire store_pipeline; cst_callbacks.end_loop(epi_m, epi_n); } } cst_callbacks.end(); // perform cross-CTA reductions in GMEM The highlighted lines call various member functions of cst_callbacks, which perform all the behavior required by the EVT. Removing these lines gives a simple, hardcoded epilogue that loads data from C and stores data to D. The load() method is similar but simpler: acquire load_pipeline; pld_callbacks.begin(tma_barrier, load_pipe_producer_state.count(), issue_tma_load); for (int epi_n = 0; epi_n < EPI_N; ++epi_n) { for (int epi_m = 0; epi_m < EPI_M; ++epi_m) { acquire load_pipeline; // aux TMA loads pld_callbacks.step(tma_barrier, epi_m, epi_n, load_pipe_producer_state.count(), issue_tma_load); if (is_C_load_needed and this thread is leader) TMA load subtile of C; commit to load_pipeline; advance load_pipeline state; } } pld_callbacks.end(); The callbacks objects obtained from the EVT by evt.get_consumer_store_callbacks() and evt.get_producer_load_callbacks() are themselves trees of the same structure as the EVT. Their member functions, cst_callbacks.begin() and so on, are recursive: when one of these functions is called on a non-leaf node, it will call the same function on each of its children. In addition, every leaf node overloads one or more of these functions to perform its stated behavior. For example, the Sm90AuxLoad node overloads the callbacks involved in loading its auxiliary tensor. The most important one of these functions is cst_callbacks.visit(), which is overloaded by every node type and performs required thread-local operations. Next let's discuss how the callbacks objects are provided information about the kernel setup and user-defined parameters. This information comes from two sources. Information about the kernel setup (the problem shape, CTA and epilogue tile sizes, tiled copies and MMAs, and so on) is passed as input to evt.get_consumer_store_callbacks() and evt.get_producer_load_callbacks() in the form of flat structs ConsumerStoreArgs and ProducerLoadArgs.
As the thread index is one of these arguments, the callbacks objects must be constructed inside the kernel. Runtime data, such as scalars, additional operator parameters, and pointers to aux matrices, is wrapped in a nested Arguments struct as shown above, and used to initialize an object whose type is the given EVT. In GEMM kernels constructed by the CollectiveBuilder, this all takes place in the function call gemm.initialize(arguments, workspace_ptr); The arguments here are arguments to the entire GEMM, a nested struct containing the arguments to the EVT. A typical arguments would look like typename Gemm::Arguments arguments{ cutlass::gemm::GemmUniversalMode::kGemm, // GEMM mode (batched, grouped, etc.) problem_size, {block_A.get(), stride_A, // pointers and strides for mainloop block_B.get(), stride_B}, {epilogue_fusion_args, // arguments to EVT block_C.get(), stride_C, // pointers and strides for epilogue block_D.get(), stride_D}, hw_info // hardware info }; Meanwhile, workspace_ptr points to a GMEM allocation used for additional workspace. Currently, this workspace is only used by reduction nodes (so for an EVT that performs no reductions, a null pointer can be passed). The required size can be calculated by size_t workspace_size = Gemm::get_workspace_size(arguments); When gemm.initialize() is called, two things happen: the nested Arguments structs are converted into the kernel's internal Params (the form actually consumed on the device), and the GMEM workspace is initialized. Equipped with this knowledge, we can also get custom kernels to interact with CUTLASS's epilogue visitor trees outside of its CollectiveBuilder interface. This basically entails constructing the EVT object and its callbacks inside the kernel from the corresponding Params and ConsumerStoreArgs/ProducerLoadArgs, and then invoking the callbacks methods at the appropriate points of the epilogue, as in the pseudocode above. In particular, note that each type of node only overloads some of the callbacks methods — and thus it's possible to introduce partial support for EVT into a kernel by incorporating some of these calls. As a basic example of this, we've added partial support for EVT to the sm90 kernel constructed for a previous blog post. The example is available on our GitHub repo. This kernel only calls cst_callbacks.begin(), cst_callbacks.visit(), and cst_callbacks.end(). That's already enough to support any epilogue consisting of scalar and column broadcasts, elementwise operations, and scalar reductions — including the earlier binary cross-entropy loss example!
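To make that last point concrete, the compute portion of such a minimally-integrated epilogue might look roughly like the sketch below. It is schematic: obtaining cst_callbacks from the EVT's Params and ConsumerStoreArgs is elided, the fragment names are placeholders of ours, and the exact visit() signature should be checked against the CUTLASS version in use (we follow the form shown in the store() pseudocode above).

// Schematic sketch of partial EVT support in a custom epilogue -- not a drop-in
// implementation. Assumes cst_callbacks was built from the EVT inside the kernel and
// that acc_frg / out_frg are this thread's fragments of the accumulator and output.
cst_callbacks.begin();                 // e.g., lets broadcast nodes stage their data

CUTE_UNROLL
for (int epi_v = 0; epi_v < size(out_frg); ++epi_v) {
  // visit() runs the whole tree on one vector of accumulator values; for the BCE
  // example it returns the per-element loss term, which the reduction node accumulates.
  out_frg(epi_v) = cst_callbacks.visit(acc_frg(epi_v), epi_v, /*epi_m*/ 0, /*epi_n*/ 0);
}

// ... store out_frg to GMEM as usual (vectorized stores, predication, etc.) ...

cst_callbacks.end();                   // e.g., lets a scalar reduction write its result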
CUTLASS Tutorial: Efficient GEMM kernel designs with Pipelining Welcome to Part 2 of our tutorial series on GEMM (GEneral Matrix Multiplication). In Part 1, we discussed the computational side of GEMM by going over WGMMA, which is the primitive instruction to multiply small matrix tiles on GPUs based on the NVIDIA® Hopper™ architecture. In this part, we turn our focus to the memory side of GEMM. Specifically, we will explain how to efficiently bring small tiles of operand tensors from a GPU's global memory into its on-chip memory, from where they can be passed into WGMMA (or other primitive MMA instructions, for that matter). The main concept to explain is how to orchestrate a pipeline of data in order to efficiently feed the tensor cores. In the context of GEMM kernel design, pipelining refers to the idea of overlapping copy and MMA operations through maintaining multiple data buffers. In this article, we will cover two pipelining strategies that are effective on the Hopper architecture: the multistage design and the warp-specialized design. To ensure correctness of the kernel, one then needs to pay careful attention to the data dependencies at hand, which govern when buffers can be read by the MMA instructions or filled by the copy operations. We will go into detail on how to write the necessary synchronization logic for a pipelined GEMM kernel using tools from the CUTLASS library, most notably the CUTLASS Pipeline classes. We then present a performance evaluation of pipelining and show how exploiting this one optimization idea already achieves ~65% utilization for a Hopper GEMM kernel in half-precision. Finally, in the Appendix we explain how to write a pipelined GEMM kernel for GPUs based on the NVIDIA Ampere architecture. The big picture: "Feeding the beast" There are two main actions in a GEMM kernel: copying the numbers to the correct memory addresses, and multiply-accumulating them. The former action is handled by copy instructions: TMA in Hopper, cp.async in Ampere, and vanilla copy in earlier architectures. The latter action, since the Volta architecture in 2017, has become the exclusive business of the tensor cores. Through many generations, the tensor cores have become a beast at consuming the numbers fed to them. For instance, the H200 SXM GPU's tensor cores can deliver up to 3,958 TFLOPS (TeraFLOPs per second). On the other hand, the memory bandwidth of the same H200 SXM GPU is only 4.8 TB/s (TeraBytes per second). This data transfer speed is much slower than the tensor cores' consumption speed, and oftentimes is not trivial to fully utilize! As such, a common theme of CUDA programming — and GEMM kernel design in particular — is to figure out how to copy numbers fast enough to keep the tensor cores busy. We call this process "feeding the beast." In general, there are two overarching strategies to "feed the beast," which are complementary and function at different scopes (grid vs. block). The first strategy is effective threadblock scheduling, which entails distributing the computation among the CTAs to obtain good load balancing and a higher rate of L2 cache hits. We will discuss this in a later blog post, but for now, we refer curious readers to the techniques of threadblock rasterization and persistent kernels, for instance as implemented in CUTLASS. The second strategy, which we focus on in this tutorial, is to overlap copying with math operations.
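A quick back-of-the-envelope calculation with the H200 numbers above makes this concrete: dividing the advertised tensor core throughput by the memory bandwidth gives a balance point of roughly 3,958 TFLOP/s ÷ 4.8 TB/s ≈ 825 FLOPs per byte read from global memory. In other words, a kernel has to perform on the order of 800 floating-point operations for every byte it loads, or the copies rather than the tensor cores will set the pace — which is exactly why the rest of this post is about overlapping the two.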
In particular, while the tensor cores are busy multiplying a batch of numbers that they receive, we should tell the copying units to copy the next batch of numbers. That way, we effectively hide part of the copying latency. This is the goal of pipelining. Latency, warps, and warp-specialization Before discussing the mechanics of pipelining, we go over some history regarding the two overlapping strategies mentioned in the introduction: multistage and warp-specialization. First, the idea of overlapping memory copy with math operations is neither new nor specific to GPUs. Readers familiar with CPUs may find it similar to the cache prefetching technique, where an asynchronous fetch request is made before the data is needed. In fact, the pipelining technique we discuss in this post is conceptually the same as CPU cache prefetching! However, since prefetching on GPUs is expensive in terms of silicon area on the chip, the technique is implemented differently. The most basic method by which GPU programmers can create overlapping is via excess warps (warps are groupings of 32 contiguous threads). Nvidia GPUs allow a large number of warps per SM (streaming multiprocessors), and can switch between them with minimal overhead. In particular, the warp schedulers can simply switch to another warp if one warp encounters a slow memory fetch. In order to give the warp schedulers more opportunity to hide latency, a technique called warp-specialization was introduced circa 2011 [1, 2]. With warp-specialization, some warps are dedicated to memory fetches (producers), while others are dedicated to compute (consumers), and named barriers are used for synchronization between them. The idea is that the warp schedulers can then more easily hide the latency of copy operations within compute (and vice-versa). Starting with the Ampere architecture, Nvidia introduced cp.async, which allows memory copy to happen asynchronously in the same warps that are also doing math. Concretely, asynchrony means that a warp can issue cp.async to load data into the next buffer and then execute math operations on the current buffer without being stalled on the completion of the async load. In particular, this removes the need to use warp-specialization in order to mask data transfer with compute. Multistage kernel designs leverage this idea. The fastest Ampere GEMM kernels, as well as the famous FlashAttention-2, use the multistage kernel design. Finally, with the most recent GPU architecture — Hopper — new features such as TMA async copy and warpgroup-wide register reallocation were introduced, which when taken in conjuction makes warp-specialization very effective on Hopper (as we explain below). In particular, the fastest CUTLASS Hopper GEMM kernels use warp-specialization. Pipelining Illustrated Figure 1 illustrates a theoretical pipeline of LOAD and MMA. Here, LOAD refers to the process of copying operand matrix tiles from GMEM to SMEM, and MMA refers to the tensor core operation that multiplies the operand tiles stored in SMEM. As shown in the figure, by overlapping two LOADs with two MMAs, we save 2 units of time. One problem that arises from contemplating Figure 1 is: where do LOAD_1 and LOAD_2 copy the data into? Clearly, we don’t want subsequent loads to overwrite the data copied in by prior loads before MMA can compute on that data. Nor do we want unnecessary stalls caused by waiting on SMEM to become free to write into. Otherwise, the supposed gain of 2 units of time will not actually be achieved. 
A simple solution to this problem is to reserve twice as much memory in SMEM than is needed by MMA and use them in an alternating fashion. This strategy is called double buffering and is illustrated in Figure 2. Of course, we can generalize to having more than two alternating buffers. Doing so creates more opportunity for overlap, allowing for more efficient use of the available hardware, at the cost of using more SMEM. It is not trivial to implement pipelines correctly and efficiently. Programmers must handle the multiple buffers as well as asynchronous load calls across multiple threads. In the next section, we show how to implement pipelining via a CUTLASS abstraction: the Pipeline class. The CUTLASS Pipeline abstraction CUTLASS’ asynchronous Pipeline classes serve as an effective abstraction to manage copy and compute across multiple data buffers and participating threads. They include the classes PipelineAsync, PipelineTmaAsync, and PipelineTransactionAsync, for which “Pipeline” is a generic reference. We first explain how a CUTLASS Pipeline orchestrates the pipelining of data at a high-level. Let buffers be a shared memory buffer with N stages. We wish to synchronize between a producer writing data to the buffers (e.g., TMA) and a consumer operating with that data as it becomes available (e.g., WGMMA). Barriers. To synchronize the buffer stages across the producer and the consumer, a Pipeline adheres to the standard acquire and release model that uses locks to manage accesses to the buffers. To this end, let full_barrier and empty_barrier be two arrays of barrier objects, both of size N. These barrier objects possess a phase bit value which is initialized to 0 and flips between 0 and 1. Concretely, these barrier objects will be mbarrier objects resident in SMEM. An mbarrier object is initialized both with the aforementioned phase bit as well as an arrival count. It then supports arrive-on and wait operations and flips its phase based on reaching the arrival count threshold. Importantly, the values of these barrier objects can and should be visible to all threads. Thread-local pipeline state. Next, we have the PipelineState class as a thread-local enumerator that serves to track the thread’s current index and phase, with the number N of stages passed in as a template parameter. The index takes on integer values modulo N, and the phase is either 0 or 1. Moreover, the ++ operator for the PipelineState class is overloaded so that the index is incremented modulo N, and the phase is flipped when the index increments to 0. Synchronization. We now explain how the barrier objects and thread-local pipeline states are used to synchronize producers and consumers. To avoid confusion, let us distinguish the producer action from the producer thread(s) issuing that action, as these may potentially be decoupled (think of TMA). First, the producer action will flip the phase of full_barrier[i] to signal that it has filled the ith stage of the buffer, so that the consumer threads can now read from it. Similarly, the consumer threads will flip the phase of empty_barrier[i] to signal that they have finished consuming the ith stage of the buffer, so that the producer can now write to it. Note that we are agnostic as to exactly how the producer action or the consumer threads flip the phase bit in SMEM, as long as it is done via the arrival count mechanism. 
For example, all the consumer threads could collectively act to increment the arrival count, or one consumer thread per warp could be elected to do the same. Finally, each thread, whether consumer or producer, keeps track of a phase to match against the phases of the barrier objects, and in fact threads taking on both consumer and producer roles will need to track both phases. These “internal” phases of the threads need to be flipped as well as the kernel proceeds through iterations of its mainloop. Four pipeline methods. Now let pipeline be an instance of a Pipeline class initialized with pointers to full_barrier and empty_barrier, and let pipe_state be an instance of a PipelineState class. Then pipeline can invoke the following four key methods: In the description of the blocking instructions producer_acquire and consumer_wait, by flipping against the phase of pipe_state we mean that, for example, if the current phase of the barrier is 0, then the method blocks if the phase of pipe_state is 0 and doesn’t block if it is 1. Note that as written, the pair of methods (producer_acquire, consumer_release) and (producer_commit, consumer_wait) are completely symmetric in functionality. However, if the Pipeline class in question is PipelineTmaAsync, then full_barrier is wrapped as an instance of the cutlass::arch::ClusterTransactionBarrier class and the signaling mechanism for full_barrier is handled by the TMA load method itself via incrementing the transaction count. In this case, the producer_commit method is actually a no-op; we return to this point below. However, in pseudocode we will still insert producer_commit if the TMA copy is not written out, as we do now. Putting it all together, the following pseudocode shows the four pipeline methods in action: using PipelineState = typename cutlass::PipelineState<N>; // We initialize smem_pipe_write to start with an opposite phase // (i.e., 1 instead of 0), since the buffers start out as empty. PipelineState smem_pipe_write = cutlass::make_producer_start_state<Pipeline>(); PipelineState smem_pipe_read; for (int i = 0; i < total_steps; ++i) { pipeline.producer_acquire(smem_pipe_write); // Acquire data (e.g. TMA, cp.async, etc.) pipeline.producer_commit(smem_pipe_write); ++smem_pipe_write; pipeline.consumer_wait(smem_pipe_read); // Compute workload (e.g. WGMMA) pipeline.consumer_release(smem_pipe_read); ++smem_pipe_read; } We find the above code snippet helpful for illustrating the producer/consumer acquire and release pattern. We invite readers to go through a few steps of the loop while keeping track of all of the involved states, and to connect this pseudocode with the verbose description of synchronization given prior. However, this snippet features a serialized execution flow in which producer and consumer operations never run concurrently, and hence it is not useful in practice. In an effective pipelined workload, the producer and the consumer must overlap. We next discuss the multistage kernel design that gives one way to accomplish this. Multistage kernel design Let’s use the TMA-specialized version of the Pipeline class, PipelineTmaAsync, to create a 2-stage pipeline used in a Hopper GEMM kernel that overlaps TMA with WGMMA. This kernel is launched with 128 threads (i.e., 1 warpgroup). We assume the reader is familiar with the syntax of TMA and WGMMA in CUTLASS, which we discussed in detail in two previous blogposts. As such, we omit the preparation of the tensors that go into the cute::copy and cute::gemm calls. 
using MainloopPipeline = typename cutlass::PipelineTmaAsync<2>; using PipelineState = typename cutlass::PipelineState<2>; typename MainloopPipeline::Params params; // number of bytes transferred by TMA load per stage (A and B) params.transaction_bytes = TmaTransactionBytes; params.role = MainloopPipeline::ThreadCategory::ProducerConsumer; params.is_leader = threadIdx.x == 0; params.num_consumers = 128; // Disregard clusters for this example auto cluster_shape = Shape<_1,_1,_1>{}; // pipeline_storage is instance of cutlass::PipelineTmaAsync<2>::SharedStorage // Has full_barrier and empty_barrier as members // Located in the SharedStorage struct that manages objects in smem MainloopPipeline pipeline(shared_storage.pipeline_storage, params, cluster_shape); __syncthreads(); PipelineState smem_pipe_write = cutlass::make_producer_start_state<MainloopPipeline>(); PipelineState smem_pipe_read; // Prepare tensors for GEMM // ... // Issue the first TMA load with leader thread if(threadIdx.x == 0) { pipeline.producer_acquire(smem_pipe_write); BarrierType *tmaBar = pipeline.producer_get_barrier(smem_pipe_write); // smem_pipe_write.index() == 0 copy(tma_load_a.with(*tmaBar, 0), tAgA(_,0), tAsA(_,0)); copy(tma_load_b.with(*tmaBar, 0), tBgB(_,0), tBsB(_,0)); ++smem_pipe_write; } for (int i = 0; i < k_tile_count - 1; ++i) { // Only leader thread issues TMA load if(threadIdx.x == 0) { pipeline.producer_acquire(smem_pipe_write); BarrierType *tmaBar = pipeline.producer_get_barrier(smem_pipe_write); auto write_stage = smem_pipe_write.index(); copy(tma_load_a.with(*tmaBar, 0), tAgA(_,i+1), tAsA(_,write_stage)); copy(tma_load_b.with(*tmaBar, 0), tBgB(_,i+1), tBsB(_,write_stage)); ++smem_pipe_write; } // Compute on the completed load from prior iteration pipeline.consumer_wait(smem_pipe_read); auto read_stage = smem_pipe_read.index(); // WGMMA warpgroup_arrive(); gemm(tiled_mma, tCrA(_,_,_,read_stage), tCrB(_,_,_,read_stage), tCrC); warpgroup_commit_batch(); warpgroup_wait<0>(); pipeline.consumer_release(smem_pipe_read); ++smem_pipe_read; } // Handle the last compute iteration pipeline.consumer_wait(smem_pipe_read); auto read_stage = smem_pipe_read.index(); warpgroup_arrive(); gemm(tiled_mma, tCrA(_,_,_,read_stage), tCrB(_,_,_,read_stage), tCrC); warpgroup_commit_batch(); warpgroup_wait<0>(); pipeline.consumer_release(smem_pipe_read); // Epilogue for writing out accumulator axpby(alpha, tCrC, beta, tCgC); Here, in each iteration of the main loop, the (i+1)th TMA load is issued asynchronously and the ith WGMMA computation executes, noting that smem_pipe_write and smem_pipe_read are offset from each other by one. In this pseudocode, note that the cute::set_barrier_transaction_bytes method we used in the TMA blogpost (or its equivalent, cutlass::arch::arrive_and_expect_tx) doesn’t make an appearance. Instead, its function is taken over by producer_acquire in the PipelineTmaAsync class. Indeed, that method does the following internally, where stage and phase are the index and phase of its PipelineState argument: if (barrier_token != BarrierStatus::WaitDone) { empty_barrier_ptr_[stage].wait(phase); } if (params_.is_leader) { full_barrier_ptr_[stage].arrive_and_expect_tx(params_.transaction_bytes); } Moreover, we use the producer_get_barrier method with argument smem_pipe_write in order to retrieve a pointer to full_barrier[smem_pipe_write.index()], as needed by the TMA TiledCopy objects tma_load_a and tma_load_b in the cute::copy call. 
With the cute::copy call thus linked to the mbarrier object full_barrier of the pipeline, we can then use the transaction count-based completion mechanism of TMA to signal the consumer that the buffer is ready to be used, obviating the need to invoke producer_commit from the pipeline object itself. This is why CUTLASS makes producer_commit a no-op for PipelineTmaAsync. This way of structuring the pipelining allows for overlapping data transfer and computation, delivering on the potential of asynchronous operations to hide latency. Though we used TMA in this example, a similar technique is available in the Ampere architecture with cp.async. We discuss this in further detail in the Appendix. However, in the Hopper architecture, it is sometimes preferable to use a warp-specialized design instead of multistage, which we now explain. Warp-specialization In the multistage kernel, each warp takes on both producer and consumer roles. Switching between the two roles is handled using the PipelineState abstraction, and the asynchrony of TMA load allows the two types of operations to overlap. An alternative strategy, warp specialization, assigns different roles to different warps, so that we have producer warps entirely dedicated to memory copy and consumer warps entirely dedicated to computations. As mentioned above, the warp schedulers can then hide latency by switching between the two types of warps. Note that unlike the multistage kernel, warp specialization does not inherently rely on asynchronous execution, but still benefits greatly from it in practice. Specifically for our GEMM, the producer warps load data from global memory to shared memory using TMA, while the consumer warps compute tilewise GEMM using WGMMA. It is worth noting that in our simplified setting the execution flow in both types of warps is internally serial, i.e., the TMA and WGMMA instructions themselves are not being overlapped intra-warpgroup. However, there are more sophisticated kernel schedules that exploit the asynchrony of TMA and WGMMA to also achieve intra-warpgroup overlapping with other instructions, such as in FlashAttention-3. Warp-specialization is an especially attractive proposition for the Hopper architecture for three reasons: To expand on the last bullet point, each SM has a limited set of registers, and in architectures before Hopper, each warp was assigned a fixed, equal number of registers at kernel launch. This is fine for the multistage pipeline where every warp does identical work, but generally wasteful for a warp-specialization pattern: producer warps (which only load data) typically need fewer registers than consumer warps (which do math), especially when using TMA. For workloads that are register intensive, being able to utilize the wasted registers could mean allowing more warps per SM or avoiding register spilling. Let us now present a snippet of warp-specialization code. As before, the Pipeline class abstracts the complexity of setting up warp-specialized kernels. // Create the pipeline and the iterator for the stage using MainloopPipeline = typename cutlass::PipelineAsync<2>; using PipelineState = typename cutlass::PipelineState<2>; // Producer warps if (isProducerWarp(threadIdx.x)) { // Only one thread should be calling TMA if(isTMAThread(threadIdx.x)) { PipelineState smem_pipe_write = cutlass::make_producer_start_state<MainloopPipeline>(); for (...) 
{ pipeline.producer_acquire(smem_pipe_write); copy(...); // TMA ++smem_pipe_write; } } } // Consumer warps else { PipelineState smem_pipe_read; for (...) { pipeline.consumer_wait(smem_pipe_read); // WGMMA pipeline.consumer_release(smem_pipe_read); ++smem_pipe_read; } // Epilogue } The format is similar to the basic pipeline we discussed earlier, but this time there is an outer conditional that splits the workload into the producer warps and the consumer warps. The epilogue belongs in the consumer warps since it involves writing out the accumulator held in the consumer threads’ registers. To see which warp and warp-group a thread is in, we can do the following. int warp_group_idx = __shfl_sync(0xffffffff, threadIdx.x / 128, 0); int warp_idx_in_warpgroup = __shfl_sync(0xffffffff, (threadIdx.x / 32) % 4, 0); int warp_group_thread_idx = threadIdx.x % 128; The above snippet also uses the __shfl_sync operation, which is a warp-wide broadcast of a value (more information here). This is there to ensure that all threads in the warp get the same value. Now let’s focus on how this applies to GEMM. In Part 1 of this series, we discussed the WGMMA instructions that are organized at the warpgroup level. As such, we also organize the producers and consumers at the warpgroup level. We use the TMA pipeline so that we can use TMA on the producer side. For 2 stages and 2 warpgroups, we first change the initialization of the pipeline for the WS kernel as follows: using MainloopPipeline = typename cutlass::PipelineTmaAsync<2>; using PipelineState = typename cutlass::PipelineState<2>; typename MainloopPipeline::Params params; params.transaction_bytes = TmaTransactionBytes; const int producerWarpGroupId = 0; if (warp_group_idx == producerWarpGroupId) params.role = MainloopPipeline::ThreadCategory::Producer; else params.role = MainloopPipeline::ThreadCategory::Consumer; params.is_leader = warp_group_thread_idx == 0; params.num_consumers = 128; auto cluster_shape = make_shape(Int<1>{},Int<1>{},Int<1>{}); // Create the pipeline MainloopPipeline pipeline(shared_storage.pipeline_storage, params, cluster_shape); We highlight line 12 to emphasize that, although params.num_consumers still equals 128, this now counts only the 128 threads of the consumer warpgroup, and not all 256 threads. Now on to the mainloop. The general structure is the same as the initial code sample, but there are a few differences for the producer side: // Example values for Hopper GEMM with 1 consumer warpgroup using LowerRegisterCount = Int<40>; using HigherRegisterCount = Int<256>; if (warp_group_idx == producerWarpGroupId) { cutlass::arch::warpgroup_reg_dealloc<LowerRegisterCount{}>(); int lane_predicate = cute::elect_one_sync(); if (warp_idx_in_warpgroup == 0 && lane_predicate) { PipelineState smem_pipe_write = cutlass::make_producer_start_state<MainloopPipeline>(); for (...) { pipeline.producer_acquire(smem_pipe_write); copy(...); // TMA ++smem_pipe_write; } } } else { // consumer warpgroup cutlass::arch::warpgroup_reg_alloc<HigherRegisterCount{}>(); PipelineState smem_pipe_read; for (...) { pipeline.consumer_wait(smem_pipe_read); gemm(...); // WGMMA pipeline.consumer_release(smem_pipe_read); ++smem_pipe_read; } // Epilogue to write out accumulator axpby(...); } In lines 6 and 18, we manually (de)allocate excess registers using a CUTLASS call, which in turn calls the PTX primitive setmaxnreg that adjusts the registers allocated to the threads in the warpgroup. 
As explained in the documentation, warpgroup_reg_dealloc<M>() releases extra registers to reduce the per-thread maximum register count to M, whereas warpgroup_reg_alloc<N>() requests additional registers in order to raise the per-thread maximum register count to N. The exact numbers to use for these register counts depend on the algorithm and the constraints imposed by the hardware. In the Hopper architecture, one thread can own up to 255 registers, and setmaxnreg can be set to a value between 24 and 256 (inclusive) at multiples of 8. In general, for a Hopper GEMM WS kernel it is advisable to arrange for one CTA to occupy an entire SM. Therefore, we should try to choose register counts so that (a) a minimal number of registers is assigned to the producer warpgroup issuing TMA, and (b) the entire register file size of 64K per SM is used. For example, a 24/240/240 split would be generally effective to use with 1 producer warpgroup and 2 consumer warpgroups (this adds up to 504 < 512, and 512*128 = 64*1024), and likewise a 32/160/160/160 split would be used with 1 producer and 3 consumer warpgroups. Note also that the program will crash if one tries to allocate a total register count that exceeds the register file size. Furthermore, we must make sure that only one thread in a warpgroup ever calls TMA. In our code sample, we make sure that only the first warp is involved in this, and that one thread, chosen using elect_one_sync, is responsible for the TMA call. This code is for 2 warpgroups, but the same can be used with minimal changes for larger numbers of warpgroups and stages as well. Choosing the number of warpgroups and stages to use should be done using careful profiling of the kernel. As a general rule of thumb for both, more stages and more warpgroups means more opportunity for parallelism and overlapping, but also uses more resources. In particular, using more stages requires more SMEM for the buffers, and using more warpgroups increases register pressure. Performance We used the CUTLASS Hopper GEMM tutorial code as the basis for both our multistage and warp-specialized GEMM kernels with half-precision (FP16) datatype. We additionally modified the code to accommodate FP32 accumulation and write out the output using TMA store. We then tuned both versions for MxNxK = 8192x8192x8192, with different tile sizes chosen for FP16 accumulation and FP32 accumulation. The tile sizes and stage count we selected are as follows (with bMxbNxbK dividing into MxNxK): We initialized matrices with random floating point numbers casted to FP16 and recorded the following TFLOP/s (10 iterations, average of 5 measurements): Note that the theoretical peak performance for dense half-precision MMA on the H100 PCIe GPU is 750 TFLOP/s, so we achieve ~65% of the theoretical peak in the standard setting of FP32 accumulation. Both the multistage and WS kernels are available on Colfax’s github. As a warning, note also that the CUTLASS Hopper GEMM tutorial code uses matrices initialized with ±1 chosen randomly, so it will report unrealistically good performance; see this article. For example, with matrices initialized with ±1, the performance of our multistage kernel with FP16 accumulation inflates from ~530 to ~630 TFLOP/s. Now for comparison’s sake, the fastest CUTLASS FP16 Hopper GEMM kernel we measured using the CUTLASS profiler with 10 profiling iterations yields 630 TFLOP/s (~84% utilization). 
(Note: an earlier version of this article reported a lower number of ~74% utilization since it used an overly high number of profiling iterations, leading to thermal throttling with the 350W TDP of the H100 PCIe GPU.) This number was obtained by the following kernel: cutlass3x_sm90_tensorop_s64x256x16gemm_f16_f16_f32_void_f16_128x256x64_2x1x1_0_tnn_align8_warpspecialized_cooperative_epi_tma Note that this CUTLASS kernel features the “Warp-Specialized Persistent Cooperative” design as described here. We expect that the gap between our current pipelined GEMM kernel and the fastest GEMM kernels will be largely bridged by implementing threadblock rasterization and a persistent kernel that overlaps prologue and epilogue between CTAs. Load balancing with Stream-K would also be a factor with more atypical problem geometries. In this square example, the Stream-K CUTLASS kernel performs almost as good (625 TFLOP/s). We now comment on the relevance of warpgroup-wide register reallocation for the WS kernel. To see register usage, we can compile our kernels using the following flag -Xptxas=--verbose. (Note: this flag does not work with --generate-code. Use --gencode instead.) With register reallocation in use, you will see a register usage count fixed as a function of the number of warpgroups used. For example, with 3 warpgroups in total: 0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads ptxas info : Used 168 registers Or with 4 warpgroups in total: 0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads ptxas info : Used 128 registers Note that 168*3 = 504 and 128*4 = 512, which are the numbers that the sum of producer and consumer register counts must be less than or equal to (relatedly: this is why a 32/240/240 split doesn’t work with 3 warpgroups). On the other hand, it’s possible that register usage was so low to begin with that register reallocation doesn’t have any practical impact. For example, with FP16 accumulation, when removing the register reallocation, we see: 0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads ptxas info : Used 90 registers Also, remeasuring times shows no impact from the change. But with FP32 accumulation, we see: 2784 bytes stack frame, 4764 bytes spill stores, 4760 bytes spill loads ptxas info : Used 168 registers And when remeasuring times, we now get about 21 TFLOP/s, a catastrophic loss of performance! However, we note that adjusting tuning parameters to (bM = 128, bN = 256, bK = 128, 2 stages, 2 MMA warpgroups, cluster (2,1,1)) yields almost as good performance (460 TFLOP/s) with no spilling and no register reallocation. Finally, in fused WS kernel designs such as FlashAttention-3 that feature multiple accumulators held in registers, the use of register reallocation becomes mandatory to avoid excessive spilling. Conclusion In this article, we have presented a comprehensive picture of the pipelining technique. We introduced its goal of hiding latency by overlapping memory copy and math operations, and why this is integral to good performance. Then we presented two pipelining designs: We went into detail on how to use the CUTLASS Pipeline classes to manage the synchronization logic necessary for implementing both pipelining strategies in a Hopper GEMM kernel. Finally, we did a comparison between the two types of pipelines for the GEMM example. 
Although the two performed about equally well in our simplified setting, in practice the best performing Hopper GEMM kernels use warp-specialization (for example, as demonstrated by the CUTLASS profiler). In Part 3 of this tutorial, we will discuss strategies to schedule the overall kernels, including threadblock rasterization, persistent kernels, and finally a recent innovation called Stream-K GEMM. Appendix: Pipelining for an Ampere GEMM In the main part of this article we discussed pipelining using TMA for memory transfer and WGMMA for compute. Both of these features were introduced with the Hopper architecture (sm90), so they will not work for older architectures. Implementing a similar paradigm in an older architecture requires some extra steps. As such, for completeness, we also discuss how to implement pipelining for GEMM in the Ampere architecture (sm80). Specifically, we study the implementation in the CUTLASS example for sm80. Compared to the code for sm90 we presented in the article, writing for Ampere introduces two complications: Figure 3, from the CUTLASS documentation, shows the overall structure of the kernel. (This image predates Ampere and so misrepresents Ampere in one small aspect: using cp.async, “load global” and “store shared” aren’t separate stages, but a single machine instruction.) Each iteration of the main loop initiates asynchronous loads of later tiles from GMEM into SMEM using the Ampere cp_async instructions, which are overlapped with work on the current tile. This outer pipeline is similar to the multistage pipeline we constructed for Hopper. An inner, unrolled loop loads successive fragments of the tile from SMEM to RMEM and does math on them. Although these operations are synchronous, we can still reduce their latency by a technique from CPU computing called (confusingly, in this context) software pipelining. Let’s begin by examining the outer pipeline, which starts with a pre-fetch stage before the main loop: TiledCopy copyA = make_tiled_copy(Copy_Atom<SM80_CP_ASYNC_CACHEALWAYS<TA>, TA>{},                                     Layout<Shape<_32,_8>,Stride<_8,_1>>{}, // Thr layout 32x8 k-major                                     Layout<Shape< _1,_1>>{});              // Val layout  1x1 TiledCopy copyB = make_tiled_copy(Copy_Atom<SM80_CP_ASYNC_CACHEALWAYS<TB>, TB>{},                                     Layout<Shape<_32,_8>,Stride<_8,_1>>{}, // Thr layout 32x8 k-major                                     Layout<Shape< _1,_1>>{});              // Val layout  1x1 // Number of tiles left to copy int k_tile_count = size<3>(tAgA); // Current tile index in gmem to read from int k_tile_next = 0; // Initial load. Start async loads for all pipes but the last. for (int k_pipe = 0; k_pipe < K_PIPE_MAX-1; ++k_pipe) { copy(copy_a, tAgA(_,_,_,k_tile_next), tAsA(_,_,_,k_pipe)); copy(copy_b, tBgB(_,_,_,k_tile_next), tBsB(_,_,_,k_pipe)); cp_async_fence(); --k_tile_count; if (k_tile_count > 0) { ++k_tile_next; } } // wait for first tile to be available before proceeding cp_async_wait<K_PIPE_MAX-2>(); __syncthreads(); The copies are issued asynchronously using CUTLASS’s cp_async API, which wraps cp.async PTX instructions. To explain the methods used here: Here’s the main loop of the kernel, omitting the SMEM->RMEM load and the computation to focus on the GMEM->SMEM pipeline: while (k_tile_count > -(K_PIPE_MAX-1)) { // handling a single block in a tiled gemm for (int k_block = 0; k_block < K_BLOCK_MAX; ++k_block) {   // Start the async copies for the next tile. 
if (k_block == 0) {   copy(copy_a, tAgA(_,_,_,k_tile_next), tAsA(_,_,_,smem_pipe_write));   copy(copy_b, tBgB(_,_,_,k_tile_next), tBsB(_,_,_,smem_pipe_write));     cp_async_fence(); --k_tile_count; if (k_tile_count > 0) { ++k_tile_next; } } // Load block from SMEM to RMEM (omitted) if (k_block == K_BLOCK_MAX-1) {   // wait for the previous copies to complete   cp_async_wait<K_PIPE_MAX-2>();   __syncthreads(); }   // Compute on block (omitted) } The outline is essentially the same as in the pre-fetch. At the beginning of each stage of the outer loop, another asynchronous copy is initiated. At the end of the loop stage, the CTA waits for the copy of the next necessary tile. Towards the very end of the computation, the pipeline runs out of GMEM tiles to copy. The code represents this by k_tile_count <= 0, and fires off unused dummy copies. Note that the example does not use the CUTLASS Pipeline class, since we don’t need to use an mbarrier object to manage synchronization. Instead, the example manually sets up synchronization to switch between data buffers. Then, the inner loop is over the size of the buffer, to keep track of which buffer to use. Despite the different details, the overall structure is identical to the simple pipeline example from the article. We finally turn to the inner loop containing the SMEM->RMEM load and MMA. Now, SMEM->RMEM transfer is significantly faster than GMEM->SMEM, but the access latency is still high enough that it is beneficial to overlap the load time with the math. The concept here is identical to the GMEM->SMEM case: we have additional buffers (registers) to which we issue load instructions, while the computation runs on other registers. However, instead of the explicit asynchronous calls, we will rely on software pipelining. Software pipelining is an optimization technique for maximizing hardware utilization by removing dependencies from consecutive high-latency instructions. Specifically for us, if the SMEM->RMEM load and the computation are independent both in terms of hardware and data, then they can run concurrently. Loading from SMEM to RMEM is handled by LSU (Load/Store Unit), while computation is handled by a computational unit (e.g., tensor cores). While not publicly documented, it’s generally believed that these hardware components can run concurrently, so hardware dependency is not an issue. Data dependency, however, can be an issue. Consider the following: for (i=0; i<N-1; i++) {   load2rmem(i);   compute(i); } The problem here is that compute(i) can not start until load2rmem(i) is completed because it needs the data loaded by the load operation. This data dependence makes these two operations sequential. So just like we did with the GMEM->SMEM pipeline, we load the next buffer. load2rmem(0); for (i=0;i<N-1; i++) {   load2rmem(i+1);   compute(i); } compute(N-1); Now there are no dependencies, data or hardware, between the load and compute, and so they can be executed concurrently. In the sm80 CUTLASS example, this is handled by the following lines. CUTE_UNROLL for (int k_block = 0; k_block < K_BLOCK_MAX; ++k_block) {   // Load A, B shmem->regs for k_block+1   auto k_block_next = (k_block + Int<1>{}) % K_BLOCK_MAX;   copy(tCsA_p(_,_,k_block_next), tCrA(_,_,k_block_next));   copy(tCsB_p(_,_,k_block_next), tCrB(_,_,k_block_next));   // Thread-level register gemm for k_block   gemm(mma, tCrA(_,_,k_block), tCrB(_,_,k_block), tCrC); } Here, both tCrA and tCrB are RMEM references created by CUTLASS’s make_fragment calls. 
The copy commands are able to run concurrently with the GEMM because they are accessing different k_block values. Comments 4 responses to “CUTLASS Tutorial: Efficient GEMM kernel designs with Pipelining” You mentioned that “…Of course, we can generalize to having more than two alternating buffers. Doing so creates more opportunity for overlap, allowing for more efficient use of the available hardware, at the cost of using more SMEM…”. Can you explain why this is possible? Maybe because increasing number of stages will increase number of potential candidate instructions that do not have dependency, I think? Hi! Sorry about the very late response to this. You have the correct idea; more stages means more options for the scheduler to choose from. In practice it is a tuning parameter that may or may not help, and it turns out that it doesn’t help to go beyond 2 for our example. So for our example we were better off maximizing SMEM space by keeping the stages to 2. But if you use the CUTLASS profiler, the best performing configurations for fp16 use 7 stages. And NVidia’s Nsight profiler shows a marked increase in throughput for both compute and memory. Hello, I love this article :) Can you please explain why you used the whole warp group as a producer when you could have used only one warp? Wouldn’t that be more efficient in terms of number of allocated registers? Thanks. Hi Pavlo, We use a producer warpgroup to take advantage of the register reallocation feature of Hopper, which operates at warpgroup-level and not warp-level granularity. As shown in the code snippet, you can then run the producer loop in a single leader thread among the 128, effectively masking out the three producer warps not containing that thread. In terms of register efficiency, the idea with register reallocation is more to grant the consumer warps more registers per thread for e.g. larger tile sizes with WGMMA.
CUTLASS Tutorial: Fast Matrix-Multiplication with WGMMA on NVIDIA® Hopper™ GPUs No series of CUDA® tutorials is complete without a section on GEMM (GEneral Matrix Multiplication). Arguably the most important routine on modern GPUs, GEMM constitutes the majority of compute done in neural networks, large language models, and many graphics applications. Despite its ubiquity, GEMM is notoriously hard to implement efficiently. This 3-part tutorial series aims to equip readers with a thorough understanding of how to write efficient GEMM kernels on NVIDIA Hopper GPUs using the CUTLASS library. The big picture. The 3 parts in our series loosely follow the entire development process of a GEMM kernel, but “inward-out”. First, we have the tilewise GEMM primitive that calls the Tensor Cores to ultimately do the computation. Second, we have the GEMM kernel design as seen “per CTA” — consisting of a prologue, mainloop, and epilogue — where the main challenge is to not bottleneck the fast Tensor Cores on memory loads. Lastly, we have the scheduling of CTAs at the outermost grid level, where load-balancing considerations rise to the forefront. We hope that after going through this series, readers will become experts on the GEMM algorithm, and can utilize some of the beautiful ideas that go into this algorithm to design and implement other kernels in their own work. Asynchronous Warpgroup MMA (WGMMA) Hopper introduces the asynchronous warpgroup-level matrix multiply and accumulate operation (WGMMA). A warpgroup consists of four contiguous warps, i.e., 128 contiguous threads, where the warp-rank of the first warp is a multiple of four. The wgmma.mma_async instruction is executed collectively by all 128 threads in a warpgroup. This operation typically follows one of these forms, where matrix C serves as the accumulator: C = A*B + C or C = A*B. A notable requirement of WGMMA is that operand B must always be stored in shared memory (SMEM). In contrast, operand A can be located in either SMEM or register memory (RMEM), and the accumulator C is always held in RMEM. This blog post is organized as follows. First, we discuss the essentials for invoking the wgmma.mma_async instruction in CUTLASS. This involves constructing the relevant TiledMMA object, as well as creating and partitioning the SMEM tensors in order to be compatible with WGMMA. Second, we discuss the synchronization mechanisms necessary to ensure the correctness of WGMMA. Finally, we discuss in greater detail the layouts used in WGMMA, including the concept of core matrices and matrix descriptors for operands sourced from SMEM. Throughout, for the sake of concision we will abbreviate wgmma.mma_async as wgmma. Our main code reference will be the CUTLASS wgmma tutorial contributed by Pradeep Ramani, which was added in the 3.5.1 release. WGMMA inside a CUTLASS kernel Our main goal in this tutorial is to explain the wgmma primitives for calling the Hopper Tensor Cores to do tile-based GEMM, and how to invoke it as part of a cute::gemm call. To set the stage, consider a standard GEMM kernel that takes input matrices A and B with dimensions MxNxK and computes C=A*B. To parallelize the computation, the kernel fixes static tile sizes bM, bN, and bK and launches a grid of ⌈M/bM⌉x⌈N/bN⌉ many CTAs, with each CTA computing a bMxbN tile rC of the output matrix. This will be held in the CTAs’ RMEM before being written back to the global C matrix. Per CTA, we then have the kernel’s mainloop.
Over ⌈K/bK⌉ many iterations, we loop over the inner dimension and successively load in bMxbK and bNxbK tiles of A and B from global into shared memory as sA and sB; note that in CUTLASS, we fix the shape of sB to be the transpose of what it is mathematically. (In fact, mirroring common practice, we load tiles of A and B into circular SMEM buffers, where the number of stages is given by a compile-time integer such as 2 or 3. The last mode of the shape tuples for sA and sB is then given by this stage count.) The cute::gemm call then computes the product of (stagewise slices of) sA and sB and successively accumulates the value into rC. After the mainloop completes, the epilogue then writes out rC to global memory. Now, we wish to explain the following cute::gemm call and the arguments that go into it, as they appear in the following code snippet that we selectively extract from the wgmma tutorial (hiding the parts of the program not relevant for us, like the pipelined TMA loads): template <class TiledMMA, ... > __global__ device_gemm(TiledMMA tiled_mma, ...) { // PROLOGUE // ... // Define A/B partitioning and C accumulators ThrMMA thr_mma = tiled_mma.get_thread_slice(threadIdx.x); Tensor tCsA = thr_mma.partition_A(sA); // (MMA,MMA_M,MMA_K,PIPE) Tensor tCsB = thr_mma.partition_B(sB); // (MMA,MMA_N,MMA_K,PIPE) Tensor tCgC = thr_mma.partition_C(gC); // (MMA,MMA_M,MMA_N) // Allocate accumulators and clear them Tensor tCrC = thr_mma.make_fragment_C(tCgC); // (MMA,MMA_M,MMA_N) clear(tCrC); // Allocate "fragments" Tensor tCrA = thr_mma.make_fragment_A(tCsA); // (MMA,MMA_M,MMA_K,PIPE) Tensor tCrB = thr_mma.make_fragment_B(tCsB); // (MMA,MMA_N,MMA_K,PIPE) // PIPELINED MAIN LOOP while (k_tile_count > -K_PIPE_MAX) { // ... // MMAs to cover 1 K_TILE cute::warpgroup_arrive(); // (V,M,K) x (V,N,K) => (V,M,N) cute::gemm(tiled_mma, tCrA(_,_,_,read_pipe), tCrB(_,_,_,read_pipe), tCrC); cute::warpgroup_commit_batch(); // Wait for all MMAs in a K_TILE to complete cute::warpgroup_wait<0>(); // ... } // EPILOGUE // ... } In the CUTLASS paradigm for MMA, the cute::gemm method is designed to expose architecture-specific MMA instructions via a uniform interface. (Indeed, if you examine the SM80 tutorial GEMM kernel, you’ll see that the cute::gemm call there is syntactically identical to that given above.) However, the definitions of the arguments involved in the cute::gemm call involve many WGMMA-specific aspects: Finally, of course there are the warpgroup synchronization primitives surrounding the cute::gemm call. We will explain all of these concepts in turn. TiledMMA object for WGMMA In what follows, suppose the datatype is FP16, and A and B are MN-major, so in BLAS notation we are computing a NT gemm. We construct the TiledMMA object on host using the cute::make_tiled_mma method as follows: TiledMMA tiled_mma = cute::make_tiled_mma( SM90_64x64x16_F16F16F16_SS<GMMA::Major::MN,GMMA::Major::MN>{}); Though cute::make_tiled_mma also has some optional arguments, let’s focus on the one at hand — the MMA Atom. This is a struct that wraps an underlying PTX call, which in this case is: wgmma.mma_async.sync.aligned.m64n64k16.f16.f16.f16 The CUTLASS notation is such that one can immediately read off the relationship between the wrapped PTX instruction and the MMA atom. Firstly, SM90 is a different name for the Hopper architecture. SM90 MMA atoms are then labeled as SM90_MxNxK_XYZ_SS or SM90_MxNxK_XYZ_RS, with two template parameters that can be either GMMA::Major::MN or GMMA::Major::K. 
Their meanings are as follows: That’s it for the syntax you need to know for the MMA Atom! Now, we’ve emphasized that WGMMA is a warpgroup-wide instruction. In code, you can retrieve the number of threads participating in the MMA operation defined by the TiledMMA object using its size. For example, the following host code dim3 dimBlock(cute::size(tiled_mma)); stipulates that each CTA in the kernel launches with 1 warpgroup of 128 threads. Suppose instead that we wanted 2 warpgroups to execute the WGMMA, with separate warpgroups computing halves of the output tile independently (and each issuing their individual wgmma instructions). To do this, we can pass a non-trivial layout (the AtomLayoutMNK) to the make_tiled_mma method as its second argument. For example, the following code TiledMMA tiled_mma = make_tiled_mma( SM90_64x64x16_F16F16F16_SS{}, Layout<Shape<_2,_1,_1>>{}); defines a WGMMA operation where warpgroup 1 and 2 compute the upper and lower halves of the output tile as divided along the M mode (now assuming that bM is a multiple of 128). Moreover, size(tiled_mma) would then equal 256. In general, the two optional layout arguments to make_tiled_mma — AtomLayoutMNK and PermutationMNK — work the same for any MMA Atom. For understanding the uses of PermutationMNK, we recommend Cris Cecka’s excellent explanation. SMEM layout constraints for WGMMA Next, we explain the constraints on the tile sizes and layouts for operand matrices in SMEM given the choice of MMA atom. First, as for any MMA instruction, the MxNxK of the MMA atom needs to divide into that of the operand and accumulator tiles. In our case, this means that bM should be a multiple of 64, bN a multiple of 64, and bK a multiple of 16. Second, there is an additional constraint specifically imposed by WGMMA on the SMEM layouts for sA and sB (both shape and stride), and this constraint varies as a function of the chosen swizzling mode. In particular, the layout for (the stagewise slice of) sA is not simply (bM,bK):(1,bM) or (bM,bK):(bK,1) in general, and likewise for sB. To understand these requirements in depth, one needs the concept of core matrices, which we will introduce below. However, as a practical matter, we can always construct layouts guaranteed to be compatible with wgmma using certain pre-defined layout atoms provided by CUTLASS, followed by the cute::tile_to_shape method. In our example, we prepare tile sizes and sA, sB on host as follows (with T=cutlass::half_t which is CUTLASS’s name for FP16): auto bM = Int<128>{}; auto bN = Int<128>{}; auto bK = Int< 64>{}; auto bP = Int< 3>{}; // Pipeline auto sA = cute::tile_to_shape( GMMA::Layout_MN_SW128_Atom<T>{}, cute::make_shape(bM, bK, bP) ); auto sB = cute::tile_to_shape( GMMA::Layout_MN_SW128_Atom<T>{}, cute::make_shape(bN, bK, bP) ); Here, MN indicates that the layout atom is suitable for MN-major operand, and SW128 is the 128 byte swizzle mode. Printing out sA or sB displays Sw<3,4,3> o smem_ptr[16b](unset) o ((_64,_2),(_8,_8),_3):((_1,_512),(_64,_1024),_8192) Where does this layout come from? cute::tile_to_shape takes a layout (the eponymous tile) and replicates it to tile over a larger shape (akin to numpy.tile). Putting aside the swizzle function Sw<3,4,3>, we have that the layout atom is given by (64,8):(1,64) and is tiled over the shape (128, 64, 3) in column-major fashion, so for the MxK shape, the smaller outer stride of 512 lies in the M mode, while the larger outer stride of 1024 lies in the K mode. 
(The largest stride of 8192 lies in the stagecount P mode, which makes sense, since different stagewise slices of sA or sB shouldn’t be commingled in memory.) Note that 64 times sizeof(half_t) equals 128 bytes, which is the swizzle mode’s name. This is by design: because of how core matrices work, we always arrange for the length of the layout atom in the contiguous direction to equal the number of swizzle bytes — either 16 for no-swizzle, or one of 32, 64, or 128. In contrast, if we considered: auto sA = cute::tile_to_shape( GMMA::Layout_K_SW128_Atom<T>{}, cute::make_shape(bM,bK,bP) ); auto sB = cute::tile_to_shape( GMMA::Layout_K_SW128_Atom<T>{}, cute::make_shape(bN,bK,bP) ); then printing sA would give us Sw<3,4,3> o smem_ptr[16b](unset) o (_128,_64,_3):(_64,_1,_8192) since we instead tile (8,64):(64,1) over (128,64,3). (Note that the layout ((_8,_16),(_64,_1),_3):((_64,_512),(_1,_0),_8192) coalesces down to (_128,_64,_3):(_64,_1,_8192)). In general, we can choose among 8 possibilities for layout atoms, which correspond to either MN or K-major and one of four swizzle modes: The layout atoms are defined here in the CUTLASS codebase as: GMMA::Layout_MN_INTER_Atom<T> GMMA::Layout_MN_SW32_Atom<T> GMMA::Layout_MN_SW64_Atom<T> GMMA::Layout_MN_SW128_Atom<T> GMMA::Layout_K_INTER_Atom<T> GMMA::Layout_K_SW32_Atom<T> GMMA::Layout_K_SW64_Atom<T> GMMA::Layout_K_SW128_Atom<T> These layout atoms must then be passed into tile_to_shape with the SMEM shape for sA and sB given by make_shape(bM,bK,bP) or make_shape(bN,bK,bP), with the modes of the shape given in that order, such that the tile sizes of the layout atoms divide into those of the larger SMEM shape. This is ultimately a constraint on the SMEM shape caused by the choice of swizzling mode, and is separate from the other constraint imposed by the MMA atom shape. WGMMA Fragments and Descriptors We’ve created the TiledMMA object and prepared accordingly the SMEM layouts on host. Now, on device we can use the TiledMMA object tiled_mma to construct the appropriate partitioned tensors to be passed into the cute::gemm call. First, we create a ThrMMA object called thr_mma by invoking the get_thread_slice method on tiled_mma with the thread index, which runs inclusively from 0 to 127 in our case. Then, with reference to the kernel code snippet above, printing the tensors tCsA and tCsB for any thread index shows the following: tCsA: Sw<3,4,3>_smem_ptr[16b](0x7f8800000400) o ((_64,(_8,_2)),_2,_4,_3):((_1,(_64,_1024)),_512,_2048,_8192) tCsB: Sw<3,4,3>_smem_ptr[16b](0x7f880000c400) o ((_64,(_8,_2)),_2,_4,_3):((_1,(_64,_1024)),_512,_2048,_8192) As per the comment, the shape of tCsA should be thought of as (MMA,MMA_M,MMA_K,PIPE): The strides and swizzle mode carry over from sA. The WGMMA-specific thing to notice here is that tCsA isn’t actually a thread-level slice of SMEM, but rather the entire SMEM tensor with a reorganized layout. Next, printing the “fragments” tCrA and tCrB for any thread index shows: tCrA: GMMA::DescriptorIterator o (_1,_2,_4,_3):(_0,_64,_256,_1024) tCrB: GMMA::DescriptorIterator o (_1,_2,_4,_3):(_0,_64,_256,_1024) Internally, CUTLASS constructs a “matrix descriptor“, which is 64-bit value held in registers that describes the SMEM in a way suitable for use by the wgmma instruction. For the programmer, the most important thing to bear in mind is that values of SMEM are not copied into RMEM; rather, accessing the values of tCrA and tCrB instead accesses these 64-bit descriptors. 
Moreover, these tensors being “iterators” means that only the single 64-bit descriptor used for a given wgmma instruction is held in registers at a time (e.g., as opposed to all 24 of them). In contrast to the operands, the accumulator tensors are defined in a more standard fashion. Printing out tCgC and tCrC for thread 0 shows: tCgC: gmem_ptr[16b](0x7f877a780000) o ((_2,_2,_8),_2,_2):((512,_8,4096),_64,32768) tCrC: ptr[16b](0x7feee1fffbe0) o ((_2,_2,_8),_2,_2):((_1,_2,_4),_32,_64) tCgC is the slice of the output GMEM tensor that we want to copy the accumulator’s values to in the epilogue, and tCrC is the register-backed tensor created to hold these values as they are computed in the mainloop. The (MMA,MMA_M,MMA_N) shapes of these tensors can be interpreted as follows: in the MMA atom’s MxN=64x64 output tile, each of the 128 threads holds 32=2*2*8 values, and MMA_M=MMA_N=2 are the same as for tCsA and tCsB. Each thread holds its 32 values of the atom in a way that necessitates factoring 32 as (2,2,8) for the shape in order to be able to define the corresponding strides for the layout of tCgC. The specific partitioning pattern can be read off from this picture taken from the PTX documentation: This illustrates the replicated Z-pattern in which a thread’s 32 values are held. For example, thread 0 holds the values at (0,0), (0,1), (8,0), (8,1) and repeated every 8 columns to the right. The gemm call, revisited Let’s return to line 25 of the kernel code snippet above: // (V,M,K) x (V,N,K) => (V,M,N) cute::gemm(tiled_mma, tCrA(_,_,_,read_pipe), tCrB(_,_,_,read_pipe), tCrC); The various overloads of the cute::gemm method serve to first loop over the outer modes MMA_M/N and MMA_K. Once those coordinates are chosen, we’re just computing with the MMA atom tile shape. Put another way, we first reduce to the overload of cute::gemm for the dispatch shape (V)x(V)=>(V). The code then invokes the fma operation of the MMA atom (precisely, within the mma_unpack method). This contains the inline PTX assembly: CUTE_HOST_DEVICE static void fma(uint64_t const& desc_a, uint64_t const& desc_b, uint32_t& d00, uint32_t& d01, uint32_t& d02, uint32_t& d03, uint32_t& d04, uint32_t& d05, uint32_t& d06, uint32_t& d07, uint32_t& d08, uint32_t& d09, uint32_t& d10, uint32_t& d11, uint32_t& d12, uint32_t& d13, uint32_t& d14, uint32_t& d15, GMMA::ScaleOut const scale_D = GMMA::ScaleOut::One) { #if defined(CUTE_ARCH_MMA_SM90A_ENABLED) asm volatile( "{\n" ".reg .pred p;\n" "setp.ne.b32 p, %18, 0;\n" "wgmma.mma_async.sync.aligned.m64n64k16.f16.f16.f16 " "{%0, %1, %2, %3, %4, %5, %6, %7, " " %8, %9, %10, %11, %12, %13, %14, %15}," " %16," " %17," " p, %19, %20, %21, %22;\n" "}\n" : "+r"(d00), "+r"(d01), "+r"(d02), "+r"(d03), "+r"(d04), "+r"(d05), "+r"(d06), "+r"(d07), "+r"(d08), "+r"(d09), "+r"(d10), "+r"(d11), "+r"(d12), "+r"(d13), "+r"(d14), "+r"(d15) : "l"(desc_a), "l"(desc_b), "r"(int32_t(scale_D)), "n"(int32_t(scaleA)), "n"(int32_t(scaleB)), "n"(int32_t(tnspA)), "n"(int32_t(tnspB))); #else CUTE_INVALID_CONTROL_PATH( "Attempting to use SM90_64x64x16_F16F16F16_SS " "without CUTE_ARCH_MMA_SM90A_ENABLED"); #endif } The corresponding PTX documentation for this syntax is here. Consistent with the descriptions of the tensors tCrA, tCrB, and tCrC above, observe that we have the uint64 variables desc_a and desc_b for the operands along with 16 uint32 variables for the accumulator. scale_D is either 0 or 1, and controls whether or not the accumulator is zero-initialized. 
In addition, the variables scaleA, scaleB, tnspA, tnspB are determined at compile-time outside the fma method via template parameters. scaleA and scaleB are either 1 or -1 for negating the operand, while tnspA and tnspB indicate whether to transpose the operand, and are 0 or 1 for GMMA::Major::K or GMMA::Major::MN, respectively. Synchronization for WGMMA It remains to explain the synchronization primitives surrounding the cute::gemm call: cute::warpgroup_arrive(); cute::gemm(tiled_mma, tCrA(_,_,_,read_pipe), tCrB(_,_,_,read_pipe), tCrC); cute::warpgroup_commit_batch(); cute::warpgroup_wait<0>(); Why are these additional commands necessary at all? They have to do with wgmma‘s nature as an asynchronous instruction. In the context of the Hopper architecture, asynchronous indicates that wgmma can run concurrently with other operations, hence necessitating a synchronization mechanism for dependent steps. This mechanism is elaborated upon in the PTX memory consistency model. Improper synchronization in code can result in (a) subtle race conditions, leading to challenging bugs, (b) the compiler serializing the wgmma instructions, which can cause significant performance degradation, or (c) undefined behavior. The highlighted cute methods wrap the following PTX instructions: (Note that we’ve been using wgmma as shorthand for wgmma.mma_async throughout, but in this subsection only we disambiguate this.) Let’s connect the usage of these commands to the following description of WGMMA-based GEMM taken verbatim from the PTX documentation: We explain these points in order. First, a wgmma.fence instruction ensures that wgmma.mma_async only accesses certain RMEM addresses after all prior accesses to such addresses have finished. Without the wgmma.fence, the behavior is undefined. An exception to this rule is that Hopper allows multiple wgmma.mma_async instructions to be in flight simultaneously. As long as these wgmma.mma_async instructions have the same accumulator shape, they can share the same accumulator tensor, i.e., write to the same register memory addresses. In that case, a fence is not required. For example, we don’t need to insert a wgmma.fence within the loop over MMA_K done as part of the cute::gemm call. Just like TMA operations, wgmma.mma_async is performed in the async proxy. Hence, if operations performed in the generic proxy affect the SMEM read by wgmma.mma_async, we need to issue fence.proxy.async. For example, this would be the case if we copied A and B into SMEM via ordinary ld.global / st.shared operations. Since we use TMA load, we don’t need fence.proxy.async in our example, and indeed it doesn’t appear in the WGMMA tutorial code or in the mainloop of CUTLASS Hopper GEMM kernels. (To verify this, note that fence.proxy.async is wrapped by cutlass::arch::fence_view_async_shared()). The wgmma.commit_group instruction creates a new wgmma-group per warpgroup and batches all prior wgmma.mma_async instructions initiated by the executing warpgroup but not committed to any wgmma-group into the new wgmma-group. In our example, cute::warpgroup_commit_batch() batches MMA_M*MMA_N*MMA_K many wgmma.mma_async instructions into one wgmma-group. Finally, the wgmma.wait_group instruction with argument N will make the executing thread wait until only N or fewer of the most recent wgmma-groups are pending and all the prior wgmma-groups committed by the executing threads are complete. 
In our example, we let N=0, so the warpgroup simply waits for the completion of the entire wgmma-group before continuing to execute any subsequent instructions. In situations where the warpgroup has the opportunity to perform independent computation, flexibility with the parameter N comes in handy. For example, this comes into play with the GEMM-softmax overlapping strategy employed in the design of FlashAttention-3. WGMMA core matrices This last section discusses further the layout requirements for tiles of matrices A and B loaded into SMEM, supposing that wgmma sources both of its operands from SMEM. To simplify the discussion, first suppose that A is row-major and B is column-major (i.e., both are K-major). Recall also that the wgmma instruction’s tile shape MxNxK is constrained so that M is 64, K times the size of the datatype is 32 bytes, and N is a multiple of 8 running from 8 to 256. To avoid confusion with A/B or sA/sB, let’s notate the WGMMA atom tiles as wA and wB.The matrices wA and wB are divided into a number of smaller matrices called core matrices. Each core matrix has a strided direction and a contiguous direction, such that its length is 8 in the strided direction and 16 bytes in the contiguous direction. Matrix wA is made up of 8x2 core matrices and Matrix wB is made up of 2x(N/8) core matrices. We illustrate a tiling of wA and wB by core matrices as follows (with images taken from the PTX documentation): Layout of wA in SMEM Layout of wB in SMEM As mentioned above, wgmma in SS mode requires matrix descriptors for both wA (desc-a) and wB (desc-b) as inputs. This descriptor encodes five parameters: LBO and SBO are indicated in the figures above. The make_gmma_desc method in CUTLASS constructs the descriptor (as an instance of GmmaDescriptor) based on the SMEM tensor’s layout provided as input. Provided that the input tensor’s layout is created using one of the eight canonical GMMA layout atoms and tile_to_shape, as previously detailed in “SMEM layout constraints for WGMMA”, make_gmma_desc will accurately calculate the LBO and SBO, determine the swizzling mode, and construct the descriptor. For example, the GmmaDescriptor describes the following admissible WGMMA layouts in the K-major case (where T*sizeof(dtype)=16): No swizzle : Swizzle<0,4,3> o smem_ptr o ((8,m),(T,2)):((1T,SBO),(1,LBO)) 32-byte swizzle : Swizzle<1,4,3> o smem_ptr o ((8,m),(T,2)):((2T,SBO),(1, T )) 64-byte swizzle : Swizzle<2,4,3> o smem_ptr o ((8,m),(T,2)):((4T,SBO),(1, T )) 128-byte swizzle : Swizzle<3,4,3> o smem_ptr o ((8,m),(T,2)):((8T,SBO),(1, T )) For the compact layouts produced by the GMMA layout atom => tile_to_shape pattern (note the GMMA layout K atoms have a larger K-mode than the WGMMA atom shape in the case of 64 and 128-byte swizzle!), we have these corresponding values for LBO and SBO: No swizzle : LBO = 16x8 = 128 bytes. SBO = 32x8 = 256 bytes. 32-byte swizzle : SBO = 32x8 = 256 bytes. 64-byte swizzle : SBO = 64x8 = 512 bytes. 128-byte swizzle : SBO = 128x8 = 1024 bytes. Most notably, for 64 and 128-byte swizzle, the strides are such that the given admissible WGMMA layouts are not compact. Rather, one has sets of 2 or 4 WGMMA atom operand tiles stacked side-by-side in the K-direction, resulting in strides of 4T and 8T for the core matrix M-mode. Put another way, when swizzling one interleaves in memory the 2, 4, or 8 core matrices logically adjacent in the K-mode, and these core matrices will belong to different WGMMA atoms for 64 and 128-byte swizzle. 
For the sake of completeness, we also give the admissible WGMMA layouts in the MN-major case: No swizzle : Swizzle<0,4,3> o smem_ptr o ((T,1,m),(8,k)):((1,T,SBO),(1T,LBO)) 32-byte swizzle : Swizzle<1,4,3> o smem_ptr o ((T,2,m),(8,k)):((1,T,LBO),(2T,SBO)) 64-byte swizzle : Swizzle<2,4,3> o smem_ptr o ((T,4,m),(8,k)):((1,T,LBO),(4T,SBO)) 128-byte swizzle : Swizzle<3,4,3> o smem_ptr o ((T,8,m),(8,k)):((1,T,LBO),(8T,SBO)) Conclusion In this [Part 1] of the GEMM series, we have covered the core concepts involved in using WGMMA (warpgroup matrix-multiply and accumulate) as a primitive in Hopper-based GEMM. WGMMA requires a warpgroup — 128 threads — to collectively execute the matrix multiplication, and can only operate on certain fragments of matrices. We went into the special shapes and layouts involved in this, with an emphasis on how to construct operand layouts guaranteed to be accepted by WGMMA using the canonical GMMA Layout => tile_to_shape pattern. For its usage to be well-defined, WGMMA also requires certain synchronization mechanisms. To this end, we explained the uses of wgmma.fence, fence.proxy.async, wgmma.commit_group and wgmma.wait_group in relation to wgmma.mma_async. Lastly, we explained in some detail the inner workings of WGMMA core matrices and how CUTLASS constructs matrix descriptors for those operands sourced from SMEM. Taken as a whole, this blog post should enable the programmer to write CUTLASS kernels on Hopper that use WGMMA. In [Part 2], we will extend this discussion to incorporate TMA, and how to use TMA and WGMMA in tandem in a Hopper GEMM kernel so as to overlap copy and compute. Comments 10 responses to “CUTLASS Tutorial: Fast Matrix-Multiplication with WGMMA on NVIDIA® Hopper™ GPUs” Hi, Thanks for really advanced and excellent series of blog posts. The technical part and details of this post are great. However the main outcome such as motivation to use TMA and WGMMA in terms of performance are missed. Especially comparison between A100 and H100 hw capabilities and maximum utilization of them. Hi, Thank you for your interest in this series. If you want to see real performance gains as motivation, you need to refer to much more serious and technical work. For instance, Jay Shah, a co-author of this series, is the first author of Flash Attention 3. FA3 is an H100-specific kernel, making heavy use of TMA, WGMMA, and warp-specialized pipelines, which are all the concepts we have been discussing. Compared to that, FA2 is an A100-specific kernel, making heavy use of cp_async, WMMA, and multi-stage pipelined kernels. Their performance difference (2x – 3x?), is the evidence for H100 vs. A100. Now, we cannot make *each* blogpost becomes an FA3. FA3 took 6 very smart and experienced people 5 months of continuous work to complete, give or take. You should think of this series as a “how to” guide to study each techniques in FA3. If you learn all of them, and you apply them correctly, you can make the FA3 for your favorite kernel. Nit: there is a typo in the conclusion section. Should be fence.proxy.async, not fence.async.proxy. Thank you! Fixed. Hi, Wondering if wgmma or a warpgroup w/ 4 warps has to be dispatched and executed in the same EU’s tensor core? Before the concept of “warpgroup” is introduced, think the warp scheduler could schedule one warp to each EU then move to the next EU.
So there could be two possible scenarios: A) warpgroup w/ 4 warps executed in 1 EU B) warpgroup w/ 4 warps in 4 EUs, 1 warp executed in 1 EU In both cases, 4 warps can be executed in lock-step style to achieve wgmma. Not sure which way is hopper using? Sorry, I do not know the answer to this one. Some of the hardware details, especially around execution pipelines, seem to be proprietary and there aren’t documentations on them (to the best of our knowledge). That said, there are 4 Tensor Cores per SM. So if the answer is A then there needs to be minimum 4 warpgroups or 16 warps per SM (out of 64 possible) constantly assigned to WGMMA in order to saturate the Tensor Cores. And of course in a realistic scenario with TMA loads etc. you need a lot higher occupancy to actually saturate the Tensor Cores. And from (admittedly anecdotal) experience we didn’t need occupancy this high in order to saturate the Tensor Cores. So my guess is that the answer is B, where the warps are split among Tensor Cores. But again, please take this answer with a grain of salt. The images for “layout of wA in SMEM” and “layout of wB in SMEM” are broken. You can see them in this tweet: https://x.com/hyhieu226/status/1821572717877022876/photo/1 Thanks for the heads up — we just fixed them. Hi, can you explain how you get this conclusion “However, for non 16-bit operand datatypes, the layout must always be K-major.”? Since smem layout define multiple swizzle mode, it seems that for all operand datatype, the layout can be M/N major or K major? Swizzle mode is independent of the leading-dimension/majorness of the smem tile. See Table 37 in the PTX docs and discussion around that: https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#asynchronous-warpgroup-level-swizzle-lead-dim. The point about 16-bit dtype vs non 16-bit dtype is asserted in CUTLASS and corresponds to the absence of arguments “imm-trans-a” and “imm-trans-b” for wgmma.mma_async in PTX (https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#asynchronous-multiply-and-accumulate-instruction-wgmma-mma-async) and explicitly said in this way: “The transpose operation is only supported for the wgmma.mma_async variants with .f16/ .bf16 types on matrices accessed from shared memory using matrix descriptors.”
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision Jay Shah*1, Ganesh Bikshandi*1, Ying Zhang2, Vijay Thakkar3,4, Pradeep Ramani3, Tri Dao5,6 1Colfax Research, 2Meta, 3NVIDIA, 4Georgia Tech, 5Princeton University, 6Together AI Cross-posted from {https://www.together.ai/blog/flashattention-3, https://pytorch.org/blog/flashattention-3/, https://tridao.me/blog/2024/flash3/} Attention, as a core layer of the ubiquitous Transformer architecture, is a bottleneck for large language models and long-context applications. FlashAttention (and FlashAttention-2) pioneered an approach to speed up attention on GPUs by minimizing memory reads/writes, and is now used by most libraries to accelerate Transformer training and inference. This has contributed to a massive increase in LLM context length in the last two years, from 2-4K (GPT-3, OPT) to 128K (GPT-4), or even 1M (Llama 3). However, despite its success, FlashAttention has yet to take advantage of new capabilities in modern hardware, with FlashAttention-2 achieving only 35% utilization of theoretical max FLOPs on the H100 GPU. In this blogpost, we describe three main techniques to speed up attention on Hopper GPUs: exploiting asynchrony of the Tensor Cores and TMA to (1) overlap overall computation and data movement via warp-specialization and (2) interleave block-wise matmul and softmax operations, and (3) incoherent processing that leverages hardware support for FP8 low-precision. We’re excited to release FlashAttention-3 that incorporates these techniques. It’s 1.5-2.0x faster than FlashAttention-2 with FP16, up to 740 TFLOPS, i.e., 75% utilization of H100 theoretical max FLOPS. With FP8, FlashAttention-3 reaches close to 1.2 PFLOPS, with 2.6x smaller error than baseline FP8 attention. The improvements from FlashAttention-3 will result in: FlashAttention-3 is available at: https://github.com/Dao-AILab/flash-attention. Paper FlashAttention Recap FlashAttention is an algorithm that reorders the attention computation and leverages tiling and recomputation to significantly speed it up and reduce memory usage from quadratic to linear in sequence length. We use tiling to load blocks of inputs from HBM (GPU memory) to SRAM (fast cache), perform attention with respect to that block, and update the output in HBM. By not writing the large intermediate attention matrices to HBM, we reduce the amount of memory reads/writes, which brings 2-4x wallclock time speedup. Here we show a diagram of FlashAttention forward pass: with tiling and softmax rescaling, we operate by blocks and avoid having to read/write from HBM, while obtaining the correct output with no approximation. New hardware features on Hopper GPUs – WGMMA, TMA, FP8 While FlashAttention-2 can achieve up to 70% theoretical max FLOPS on Ampere (A100) GPUs, it does not yet take advantage of new features on Hopper GPUs to maximize performance. We describe some of the new Hopper-specific features here, and why they are important. 1. WGMMA (Warpgroup Matrix Multiply-Accumulate). This new feature makes use of the new Tensor Cores on Hopper, with much higher throughput1 than the older mma.sync instruction in Ampere (image from the H100 white paper). 2. TMA (Tensor Memory Accelerator). This is a special hardware unit that accelerates the transfer of data between global memory and shared memory, taking care of all index calculation and out-of-bound predication.
This frees up registers, which is a valuable resource to increase tile size and efficiency. 3. Low-precision with FP8. This doubles the Tensor Core throughput (e.g. 989 TFLOPS with FP16 and 1978 TFLOPS with FP8), but trades off accuracy by using fewer bits to represent floating point numbers. FlashAttention-3 makes use of all of these new features of Hopper, using powerful abstractions from NVIDIA’s CUTLASS library. By rewriting FlashAttention to use these new features, we can already significantly speed it up (e.g., from 350 TFLOPS in FlashAttention-2 FP16 forward pass to around 540-570 TFLOPS). However, the asynchronous nature of the new instructions on Hopper (WGMMA and TMA) opens up additional algorithmic opportunities to overlap operations and thereby extract even greater performance. For this blogpost, we’ll explain two such techniques specific to attention. The generic technique of warp specialization, with separate producer and consumer warps doing TMA and WGMMA, is well-covered elsewhere in the context of GEMM and works the same here. Asynchrony: Overlapping GEMM and Softmax Why overlap? Attention has GEMMs (those matmuls between Q and K and between attention probability P and V) and softmax as its two main operations. Why do we need to overlap them? Isn’t most of the FLOPS in the GEMMs anyway? As long as the GEMMs are fast (e.g., computed using WGMMA instructions), shouldn’t the GPU be going brrrr? The problem is that non-matmul operations are much slower than matmul operations on modern accelerators. Special functions such as exponential (for the softmax) have even lower throughput than floating point multiply-add; they are evaluated by the multi-function unit, a unit separate from floating point multiply-add or matrix multiply-add. As an example, the H100 GPU SXM5 has 989 TFLOPS of FP16 matrix multiply, but only 3.9 TFLOPS (256x less throughput) for special functions2! For head dimension 128, there are 512x more matmul FLOPS than exponential, which means that exponential can take 50% of the time compared to matmul. The situation is even worse for FP8, where the matmul FLOPS are twice as fast yet exponential FLOPS stay the same speed. Ideally we want matmul and softmax to operate in parallel. While the Tensor Cores are busy with matmul, the multi-function units should be calculating exponential! Inter-warpgroup overlapping with pingpong scheduling The first and easiest way to overlap GEMM and softmax is to do nothing at all! The warp schedulers already try to schedule warps so that if some warps are blocked (e.g., waiting for GEMM results), other warps can run. That is, the warp schedulers do some of this overlapping for us, for free. However, we can improve on this by doing some of the scheduling manually. As an example, if we have 2 warpgroups (labeled 1 and 2 – each warpgroup is a group of 4 warps), we can use synchronization barriers (bar.sync) so that warpgroup 1 first does its GEMMs (e.g., GEMM1 of one iteration and GEMM0 of the next iteration), and then warpgroup 2 does its GEMMs while warpgroup 1 does its softmax, and so on. This “pingpong” schedule is illustrated in the figure below, where the same color denotes the same iteration. This would allow us to perform the softmax in the shadow of the GEMMs of the other warpgroup. Of course, this figure is just a caricature; in practice the scheduling is not really this clean. Nevertheless, pingpong scheduling can improve FP16 attention forward pass from around 570 TFLOPS to 620 TFLOPS (head dim 128, seqlen 8K). 
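To make the scheduling idea concrete, here is a deliberately stripped-down sketch of how two warpgroups could be forced into this alternating GEMM order using named barriers. This is not FlashAttention-3's actual code (which uses CUTLASS's barrier wrappers and a real attention mainloop); the barrier ids, the helper functions, and the stubbed-out GEMM/softmax steps are illustrative assumptions.

// Caricature of the pingpong schedule, not FlashAttention-3's implementation.
// Two warpgroups (256 threads) take turns issuing their GEMMs; while one warpgroup
// owns the "GEMM turn", the other runs its softmax.
__device__ void named_barrier_sync(int id, int count) {
  asm volatile("bar.sync %0, %1;\n" :: "r"(id), "r"(count) : "memory");
}
__device__ void named_barrier_arrive(int id, int count) {
  asm volatile("bar.arrive %0, %1;\n" :: "r"(id), "r"(count) : "memory");
}

__global__ void pingpong_sketch(int num_iters) {
  constexpr int kAllThreads = 256;          // 2 warpgroups of 128 threads each
  const int warpgroup = threadIdx.x / 128;  // 0 or 1
  const int turn_barrier[2] = {8, 9};       // named barrier guarding each warpgroup's GEMM turn (ids assumed)

  if (warpgroup == 1) {
    named_barrier_arrive(turn_barrier[0], kAllThreads);  // hand the first GEMM turn to warpgroup 0
  }
  for (int k = 0; k < num_iters; ++k) {
    // Wait until the other warpgroup has finished issuing its GEMMs for this round.
    named_barrier_sync(turn_barrier[warpgroup], kAllThreads);
    // ... issue this warpgroup's GEMMs for iteration k (e.g. Q@K^T and P@V) here ...
    // Pass the Tensor Cores to the other warpgroup, then do our softmax in its shadow.
    named_barrier_arrive(turn_barrier[1 - warpgroup], kAllThreads);
    // ... run softmax (exponentials, rescaling) for iteration k here ...
  }
}

The initial bar.arrive by warpgroup 1 grants warpgroup 0 the first turn; afterwards each warpgroup can only start its GEMMs once the other has issued its own, so each warpgroup's softmax runs while the other's GEMMs occupy the Tensor Cores.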
Intra-warpgroup overlapping of GEMM and Softmax Even within one warpgroup, we can have some part of softmax running while the GEMMs of that warpgroup are running. This is illustrated in this figure, where the same color denotes the same iteration. This pipelining increases throughput from around 620 TFLOPS to around 640-660 TFLOPS for FP16 attention forward, at the cost of higher register pressure. We need more registers to hold both accumulators of the GEMMs, and the input/output of softmax. Overall, we find this technique to offer a favorable tradeoff. Low-precision: reduce quantization error with incoherent processing LLM activations can have outliers with much larger magnitude than the rest of the features. These outliers make quantization difficult, producing much larger quantization errors. We leverage incoherent processing, a technique used in the quantization literature (e.g. from QuIP) that multiplies the query and key with a random orthogonal matrix to “spread out” the outliers and reduce quantization error. In particular, we use the Hadamard transform (with random signs), which can be done per attention head in O(d log d) instead of O(d^2) time, where d is the head dimension. Since the Hadamard transform is memory-bandwidth bound, it can be fused with previous operations such as rotary embedding (also memory-bandwidth bound) “for free”. In our experiment where Q, K, V are generated from a standard normal distribution but 0.1% of the entries have large magnitudes (to simulate outliers), we found that incoherent processing can reduce the quantization error by 2.6x. We show numerical error comparison in the table below. Please see the paper for details. Attention benchmark We show some results with FlashAttention-3, and compare it to FlashAttention-2, as well as the implementation in Triton and cuDNN (both of which already use new hardware features of Hopper GPUs). For FP16, we see about 1.6x-1.8x speedup over FlashAttention-2: For FP8, we can reach close to 1.2 PFLOPS! Discussion This blogpost highlights some of the optimizations for FlashAttention available on Hopper GPUs. Other optimizations (e.g., variable length sequences, persistent kernel, and in-kernel transpose for FP8) are covered in the paper. We have seen that designing algorithms that take advantage of the hardware they run on can bring significant efficiency gains and unlock new model capabilities such as long context. We look forward to future work on optimization for LLM inference, as well as generalizing our techniques to other hardware architectures. We also look forward to FlashAttention-3 being integrated in a future release of PyTorch.
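As a small appendix to the incoherent-processing discussion above, the following host-side sketch shows where the O(d log d) cost comes from: a random-sign fast Walsh–Hadamard transform applied to one head vector. It is a toy illustration only, with assumed function names; FlashAttention-3 fuses this per-head transform into GPU kernels (e.g. together with rotary embedding) rather than running it as separate host code.

// Toy illustration: compute H * (s ⊙ x) / sqrt(d) for a head vector x of length d
// (d a power of two), where s is a fixed vector of random ±1 signs. The butterfly
// structure costs O(d log d) additions instead of O(d^2) for a general orthogonal matrix.
#include <cmath>
#include <vector>

void randomized_hadamard(std::vector<float>& x, const std::vector<int8_t>& sign) {
  const size_t d = x.size();               // assumed to be a power of two
  for (size_t i = 0; i < d; ++i) {
    x[i] *= static_cast<float>(sign[i]);   // sign[i] is +1 or -1
  }
  // In-place fast Walsh–Hadamard transform: log2(d) butterfly passes.
  for (size_t len = 1; len < d; len *= 2) {
    for (size_t i = 0; i < d; i += 2 * len) {
      for (size_t j = i; j < i + len; ++j) {
        const float a = x[j];
        const float b = x[j + len];
        x[j]       = a + b;
        x[j + len] = a - b;
      }
    }
  }
  const float scale = 1.0f / std::sqrt(static_cast<float>(d));
  for (size_t i = 0; i < d; ++i) {
    x[i] *= scale;                          // normalize so the overall transform is orthogonal
  }
}

Because the same orthogonal transform (same sign vector and Hadamard matrix) is applied to both the query and key heads, their dot products, and hence the attention output, are unchanged up to numerical error.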
CUTLASS Tutorial: Mastering the NVIDIA® Tensor Memory Accelerator (TMA) TMA (Tensor Memory Accelerator) is a new feature introduced in the NVIDIA Hopper™ architecture for doing asynchronous memory copy between a GPU’s global memory (GMEM) and the shared memory (SMEM) of its threadblocks (i.e., CTAs). Compared to prior approaches, TMA offers a number of advantages, such as (1) improving GPU utilization through facilitating warp-specialized kernel schedules via asynchrony, and (2) handling the computation of auxiliary copy data such as addresses and strides in a single-threaded manner via the TMA copy descriptor, which is both more register-efficient and necessarily handles predication (e.g., out-of-bounds checks). These advantages are well articulated in NVIDIA’s technical blog and Hopper tuning guide, which we highly recommend to readers for understanding the rationales behind the design of TMA. In contrast to those sources, this blog post is focused on achieving an operational understanding of how to write kernels that use TMA. Throughout, we rely on the CuTe library, in which TMA is exposed through APIs wrapping lower-level GPU instructions. These instructions include PTX instructions cp.async.bulk.tensor and cp.reduce.async.bulk.tensor, as well as the cuTensorMap operand, which we will also discuss in this post. We organize this blog post into three main sections: the first about TMA load, the second about TMA store, and lastly the third covering more advanced operations such as TMA store reduce and TMA load multicast. In essence, TMA load copies (“loads”) data from the GPU’s GMEM into one of its CTA’s SMEM, while TMA store copies (“stores”) data from a CTA’s SMEM to the GPU’s GMEM. Since TMA load, TMA store, and the more advanced variants share many concepts, we will introduce the bulk of the necessary concepts in the TMA load section and only focus on the remaining differences in the subsequent sections. Also, given that TMA is an asynchronous operation (executed in the async proxy), we will need to use certain memory consistency enforcement tools, such as async memory barrier (i.e., mbarrier) and async memory fence (i.e., fence.proxy.async), to ensure correct behavior of the kernel. Synchronization is a vast topic of discussion by itself, so we will only cover these concepts to the degree needed for their practical use. Finally, for readers looking for a resource that covers many of the same points with no reference to CUTLASS or CuTe concepts, we recommend the treatment of TMA in the CUDA® programming guide. TMA Load TMA load copies data from GMEM into SMEM. In this section, we demonstrate how to write a kernel that uses TMA load for this goal. A kernel that uses TMA load is quite different from a kernel that uses other memory copy methods, so we will first show how to write such a kernel for a simple example task. Then, we will explain the involved concepts. Example task To demonstrate the usage of TMA load, we consider a simple task of tiling a 2D row-major matrix. We are given a matrix A of shape [m,n] and two positive integers CTA_M and CTA_N. Note that CTA_M and CTA_N are known at compilation time, while m and n are given to us at runtime via the matrix A. For simplicity, let’s also assume that m % CTA_M == n % CTA_N == 0, though we will see later that this requirement can be relaxed.
We launch a grid of CTAs with size {m/CTA_M, n/CTA_N, 1}, where the SMEM of the (i,j)-th CTA holds the (i,j)-th tile with shape [CTA_M, CTA_N] from A. We can depict this assignment in numpy pseudocode as: A = np.random.uniform(size=(M, N)) for i in range(M // CTA_M): for j in range(N // CTA_N): cta_i_j = A.reshape(M // CTA_M, CTA_M, N // CTA_N, CTA_N)[i, :, j, :] The two-step process. To perform this task, we use TMA load. In CuTe, a TMA load operation is implemented in two steps. The first step is the construction of the TMA copy descriptor in the host code, while the second step is the execution of the actual TMA load using this descriptor inside the kernel code. Note that this two-step process is different from what we normally do with CuTe’s TiledCopy — where all the copy steps are written in the kernel code — as shown in this tutorial. Host code On the host, we create three objects: the GMEM tensor which we copy from, the layout of the SMEM tensor on each of the CTAs that we copy into, and a tma_load object that takes these two as arguments. Note that since we create the SMEM layout on the host, all CTAs will share the same SMEM layout for the purposes of the TMA load. Once we have these objects, the TMA object and the GMEM tensor can be passed to the kernel on device (the SMEM layout is recreated inside the kernel), and the TMA load operation is invoked there. The entire code block on the host is: template <typename T, int CTA_M, int CTA_N> void host_fn(T* data, int M, int N) { using namespace cute; // create the GMEM tensor auto gmem_layout = make_layout(make_shape(M, N), LayoutRight{}); auto gmem_tensor = make_tensor(make_gmem_ptr(data), gmem_layout); // create the SMEM layout auto smem_layout = make_layout(make_shape(CTA_M, CTA_N), LayoutRight{}); // create the TMA object auto tma_load = make_tma_copy(SM90_TMA_LOAD{}, gmem_tensor, smem_layout); // invoke the kernel: one CTA per tile (e.g. 128 threads per CTA; only one thread issues the TMA) tma_load_kernel<T, CTA_M, CTA_N> <<<dim3{M / CTA_M, N / CTA_N, 1}, 128>>> (tma_load, gmem_tensor); } The lines that create gmem_layout, gmem_tensor, and smem_layout simply use basic CuTe concepts, so we refer readers to these CuTe tutorials for a memory refresh. Here, we focus on explaining the tma_load object. This object is an instance of cute::TiledCopy, which holds the information and implements the methods to perform a CTA-wide copy operation. In the code snippet, the tma_load object is created via this explicit default of the cute::make_tma_copy function. This function’s full implementation has some nuances, which we will dive into when we discuss MULTICAST later in this blog post, but the explicit default suffices for most use cases, such as our example task. We recommend using the explicit default to avoid unnecessary complications (and bugs). Let’s look into the signature that we used for make_tma_copy: it takes the copy operation SM90_TMA_LOAD{}, the GMEM tensor to copy from, and the SMEM layout that defines the shape of the destination tile in each CTA. Kernel code The relevant kernel code snippet looks like this. These lines pack many important TMA concepts, which we will explain below.
template <typename T, int CTA_M, int CTA_N, class TmaLoad, class GmemTensor>
__global__ void tma_load_kernel(__grid_constant__ const TmaLoad tma_load, GmemTensor gmem_tensor) {
  using namespace cute;
  constexpr int tma_transaction_bytes = CTA_M * CTA_N * sizeof(T);

  __shared__ T smem_data[CTA_M * CTA_N];
  __shared__ uint64_t tma_load_mbar;

  auto smem_layout = make_layout(make_shape(CTA_M, CTA_N), LayoutRight{});
  auto smem_tensor = make_tensor(make_smem_ptr(smem_data), smem_layout);

  if (threadIdx.x == 0) {
    auto gmem_tensor_coord = tma_load.get_tma_tensor(shape(gmem_tensor));
    auto gmem_tensor_coord_cta = local_tile(
        gmem_tensor_coord,
        Tile<Int<CTA_M>, Int<CTA_N>>{},
        make_coord(blockIdx.x, blockIdx.y));

    initialize_barrier(tma_load_mbar, /* arrival count */ 1);
    set_barrier_transaction_bytes(tma_load_mbar, tma_transaction_bytes);

    auto tma_load_per_cta = tma_load.get_slice(0);
    copy(tma_load.with(tma_load_mbar),
         tma_load_per_cta.partition_S(gmem_tensor_coord_cta),
         tma_load_per_cta.partition_D(smem_tensor));
  }
  __syncthreads();
  wait_barrier(tma_load_mbar, /* phase */ 0);

  // after this line, the TMA load is finished
}

First, the tma_load argument for the kernel must be annotated with __grid_constant__ const. If we have two tensors that we want to copy from GMEM into SMEM, each of them must have its own TiledCopy instance, and each instance must be __grid_constant__ const. This is a requirement for passing a cuTensorMap from host to device, as documented here, for instance.

The next important point is that for a TMA copy, only one thread will be responsible for issuing the TMA operation. In the code snippet, all the TMA-related variables and instructions are contained in the if block that is executed only by thread 0. On the other hand, the wait_barrier call near the end of the kernel is an instruction for all threads in the CTA to wait for the TMA operation to finish.

Coordinates and Arithmetic tuples

For now, let's look into the TMA load logic. This starts with the creation of a gmem_tensor_coord object that holds the coordinates of the GMEM tensor to be copied. If we try the following:

if (cute::thread(0)) { cute::print(gmem_tensor_coord); }

then we see output like so (for M=N=1024):

ArithTuple(_0,_0) o (1024,1024):(_1@1,_1@0)

The lines invoking local_tile are self-explanatory for readers familiar with the way tiled copy works in CuTe, where a GMEM tensor is tiled into smaller partitions, and each CTA slices into the tiled tensor according to its block coordinate to obtain its view of GMEM. Note however that the partitioning applies to the aforementioned ArithTuple representing the coordinates of gmem_tensor, instead of to gmem_tensor itself. In particular, the ArithTuple is partitioned into tiles of shape [CTA_M, CTA_N], and then each CTA takes its tile. If we print gmem_tensor_coord_cta using print_tensor as follows:

if (cute::block(7)) { cute::print_tensor(gmem_tensor_coord_cta); }

then for CTA_M == CTA_N == 16, we see:

ArithTuple(0,112) o (_16,_16):(_1@1,_1@0):
  (0,112) (1,112) (2,112) (3,112) (4,112) (5,112) (6,112) (7,112) (8,112) (9,112) (10,112) (11,112) (12,112) (13,112) (14,112) (15,112)
  (0,113) (1,113) (2,113) (3,113) (4,113) (5,113) (6,113) (7,113) (8,113) (9,113) (10,113) (11,113) (12,113) (13,113) (14,113) (15,113)
  // more lines
  (0,127) (1,127) (2,127) (3,127) (4,127) (5,127) (6,127) (7,127) (8,127) (9,127) (10,127) (11,127) (12,127) (13,127) (14,127) (15,127)

These numbers are the coordinates in gmem_tensor whose values will be copied into the smem_tensor of CTA 7.
We encourage readers to try running this code snippet while replacing cute::block(7) with other indices to understand which CTAs copy from which coordinates in gmem_tensor. Next, the copy operation itself has the usual signature of a TiledCopy operation, where the source tensor is replaced by the partitioned coordinates.

Memory barrier

We have left out the barrier-related calls (initialize_barrier, set_barrier_transaction_bytes, and wait_barrier), all of which involve the uint64_t variable tma_load_mbar that lives in SMEM. This is the asynchronous transaction barrier that we use to synchronize the TMA load with the rest of the kernel that consumes the resulting data loaded into SMEM. A high-level description of this type of barrier is given in the NVIDIA technical blog on the Hopper architecture. In terms of our kernel, the important points are as follows:

- initialize_barrier initializes the mbarrier in SMEM with an arrival count of 1, matching the single thread (thread 0) that issues the TMA load.
- set_barrier_transaction_bytes tells the barrier how many bytes to expect; for our tile this is CTA_M * CTA_N * sizeof(T). The barrier only completes once the TMA unit has delivered that many bytes into SMEM.
- Passing the barrier into the copy via tma_load.with(tma_load_mbar) instructs the asynchronous TMA load to signal its completion on this barrier.
- After __syncthreads(), all threads call wait_barrier on phase 0. Once wait_barrier returns, the TMA load is complete and the data in SMEM is visible to every thread in the CTA.

REMAINDER TILES WITH TMA AND STRIDE REQUIREMENTS

In our above example, we supposed that m % CTA_M == 0 and n % CTA_N == 0. However, for the purposes of doing a TMA load, we can dispense with this assumption entirely. Instead of needing to handle the out-of-bounds logic ourselves when loading in remainder tiles from GMEM to SMEM, the TMA copy unit automatically predicates the memory copy so that it does not read out-of-bounds. This is consistent with the use of special "implicit" CuTe tensors with ArithTuple as described above in the TMA load — if we used ordinary CuTe tensors instead, then they could be sliced to produce new CuTe tensors with possibly out-of-bounds pointers to GMEM, invariably leading to bugs.

However, there is one important requirement on the strides of the GMEM tensor itself to bear in mind for TMA: the 16-byte boundary requirement. As one might expect, TMA doesn't support copying arbitrarily strided regions of GMEM. Rather, we need to assume that the tile being copied has (i) a contiguous direction (stride 1), and (ii) all other strides equal to multiples of 16 bytes. This is asserted in the CUTLASS codebase. For example, for our row-major GMEM tensor of floats, with shape (m, n) and stride (n, 1), this imposes the requirement that n % 4 == 0. If this isn't satisfied, then one can pad the input tensors to be of the right extent before invoking the kernel.

TMA Store

Equipped with the basics of TMA load, studying TMA store is a lot easier thanks to the many similarities between the two operations. Similar to TMA load, implementing TMA store is a two-step process: defining the TMA copy descriptor on the host, and then issuing the TMA store operation inside the kernel.

Example task and code

For illustration purposes, let's consider the reverse of the TMA load example, where we copy from the SMEM of multiple CTAs to the corresponding tiles in a partitioned GMEM tensor. A difference here is that we will fill the SMEM tiles in the CTAs with a simple pattern of numbers before copying them to GMEM (otherwise, we would be copying undefined values).
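As a host-side reference for what this kernel should produce, here is a small C++ sketch of ours (not from the original post), assuming T = float and the fill pattern used in the kernel below, namely that each row r of a CTA's tile ends up holding the value r:

// Host-side reference (our own sketch): after the TMA store kernel runs, the (i, j)-th
// [CTA_M, CTA_N] tile of the row-major M x N matrix should hold the value r in its r-th row,
// since thread r of each CTA fills row r with r.
#include <vector>

template <int CTA_M, int CTA_N>
std::vector<float> expected_result(int M, int N) {
  std::vector<float> out(static_cast<size_t>(M) * N);
  for (int row = 0; row < M; ++row)
    for (int col = 0; col < N; ++col)
      out[static_cast<size_t>(row) * N + col] = static_cast<float>(row % CTA_M);
  return out;
}

Comparing the device buffer against such a reference after the kernel runs is one simple way to validate the store.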
A functional code snippet is as follows:

template <typename T, int CTA_M=32, int CTA_N=32>
void host_fn(T* data, int M, int N) {
  using namespace cute;

  // create the GMEM tensor
  auto gmem_layout = make_layout(make_shape(M, N), LayoutRight{});
  auto gmem_tensor = make_tensor(make_gmem_ptr(data), gmem_layout);

  // create the SMEM layout
  auto smem_layout = make_layout(make_shape(CTA_M, CTA_N), LayoutRight{});

  // create the TMA object
  auto tma_store = make_tma_copy(SM90_TMA_STORE{}, gmem_tensor, smem_layout);

  // invoke the kernel
  tma_store_kernel<T, CTA_M, CTA_N>
    <<<dim3{M / CTA_M, N / CTA_N, 1}, CTA_M>>>
    (tma_store, gmem_tensor);
}

template <typename T, int CTA_M, int CTA_N, class TmaStore, class GmemTensor>
__global__ void tma_store_kernel(__grid_constant__ const TmaStore tma_store, GmemTensor gmem_tensor) {
  using namespace cute;

  __shared__ T smem_data[CTA_M * CTA_N];

  auto smem_layout = make_layout(make_shape(CTA_M, CTA_N), LayoutRight{});
  auto smem_tensor = make_tensor(make_smem_ptr(smem_data), smem_layout);

  // fill the rows of smem_data
  for (int j = 0; j < CTA_N; ++j) {
    smem_tensor(threadIdx.x, j) = threadIdx.x;
  }
  __syncthreads();
  tma_store_fence();

  if (threadIdx.x == 0) {
    auto gmem_tensor_coord = tma_store.get_tma_tensor(shape(gmem_tensor));
    auto gmem_tensor_coord_cta = local_tile(
        gmem_tensor_coord,
        Tile<Int<CTA_M>, Int<CTA_N>>{},
        make_coord(blockIdx.x, blockIdx.y));

    auto tma_store_per_cta = tma_store.get_slice(0);
    copy(tma_store,
         tma_store_per_cta.partition_S(smem_tensor),
         tma_store_per_cta.partition_D(gmem_tensor_coord_cta));
    // tma_store_arrive();
  }
  // tma_store_wait<0>();
}

The host code looks almost identical to that of TMA load, except for the call to tma_store_kernel. Note that we have arranged for each CTA to have CTA_M threads. Our example then has each CTA hold a [CTA_M, CTA_N] tile in SMEM such that, in the fill loop, thread i fills row i with the value i.

In the kernel code, the if block executed by thread 0 is similar to the one in tma_load_kernel. In particular, only thread 0 issues the TMA store operation. All of the tensor tiling logic is conceptually the same. However, the copying direction is reversed: for TMA store, the tma_store_per_cta.partition_S method is applied to smem_tensor, while the tma_store_per_cta.partition_D method is applied to the coordinates of the GMEM tensor. Note that the coordinates are also represented as an ArithTuple, similar to TMA load.

Memory fence

The most important difference between the code for TMA load and store is that we no longer see any mbarrier object being used with TMA store. This is because TMA store uses another mechanism to enforce memory consistency: a memory fence. The intention of a memory fence is to establish a guaranteed ordering between memory accesses requested by the executing thread before and after the fence. In our example, we need to ensure that all the writes to SMEM done in the fill loop are visible to the TMA store executed by thread 0. To this end, right after __syncthreads() we call the CuTe method tma_store_fence(), which wraps the PTX instruction fence.proxy.async.shared::cta. This instruction contains two important qualifiers that describe the effect of the fence: the scope and the proxykind. The scope indicates the set of threads that participate in the ordering enforced by the fence. In our case, the qualifier cta defines the scope as given by all threads in the CTA (which is the smallest possible scope for the purposes of the memory consistency model).
The proxykind indicates the type of proxy that will participate in the ordering enforced by the fence, in addition to the generic proxy. In our case, we choose the proxykind to be async.shared since the TMA store is executed in the async proxy (with respect to each CTA). If we replaced the async fence by a different memory fence primitive such as __threadfence_block() that doesn't involve the async proxy, we would destroy the guarantee needed for correct behavior of the kernel, leading to race conditions in practice.

TMA STORE ARRIVE AND WAIT

At the end of the if block and just after it, we have the commented-out calls tma_store_arrive(), which commits the TMA store operation (technically, as a cp.async.bulk-group), and tma_store_wait<Count>(), which waits until at most Count many of the committed TMA store operations are pending (e.g., if all should be completed, then set Count to be 0). These operations are useful when one has other in-kernel work waiting on the completion of the TMA store — for example, this would be needed to reuse the freed SMEM made available after writing out. However, because our kernel simply exits after the TMA store is done, we don't need the TMA store arrive and wait pattern here, so we comment out those lines.

A Deeper Look at TMA Operations

Thus far, we have learned how to invoke the TMA load and TMA store operations. To invoke either operation, we need to create an object akin to TiledCopy via the cute::make_tma_copy method in the host code, and then pass this object into a kernel function, where we use it in cute::copy to actually invoke the operation. In this section, we take a deeper dive into what really happens when we call these TiledCopy objects in the kernel function. From this deep dive, we discuss two extensions: TMA store reduce and TMA load multicast.

PTX Instructions of TMA Load and Store

PTX (Parallel Thread Execution) is a low-level intermediate language for NVIDIA GPUs. For our discussion, the relevant part of PTX comprises a set of instructions that can be inserted into CUDA code via blocks wrapped by the asm volatile keywords. In particular, when we call cute::copy(tma_load, ...) or cute::copy(tma_store, ...) as described in previous sections, certain PTX instructions are called to perform these operations. By studying the PTX, we can better understand TMA load and TMA store.

Let us start with TMA load. Recall that when we create the tma_load object in the host code, we must provide the GMEM tensor (which contains the source data to copy from) and the SMEM layout (which describes how the data will look inside each CTA). Using this tensor and layout, CuTe determines the underlying PTX instruction to be executed when cute::copy(tma_load, ...) is invoked in the kernel. The PTX instruction is chosen depending on the rank of the GMEM tensor (note that rank here means the number of dimensions of the tensor, as opposed to matrix rank/nullity in linear algebra). In our example, the GMEM tensor has rank two, so the following PTX instruction will be executed:

asm volatile (
    "cp.async.bulk.tensor.2d.shared::cluster.global.mbarrier::complete_tx::bytes"
    " [%0], [%1, {%3, %4}], [%2];"
    :
    : "r"(smem_int_ptr), "l"(gmem_int_desc), "r"(smem_int_mbar),
      "r"(crd0), "r"(crd1)
    : "memory");

Looking at this PTX instruction, we see many familiar concepts. For instance, gmem_int_desc refers to the coordinates kept in the TMA descriptor, while mbarrier::complete_tx::bytes and smem_int_mbar refer to the memory barrier.
Note also that tensor.2d refers to the fact that we are copying a rank-2 tensor, i.e., a 2D matrix. It turns out that not only TMA load but all TMA operations are wrappers around certain cp.async.bulk instructions. The NVIDIA PTX documentation dedicates an entire section to discussing cp.async.bulk instructions, specifically their syntax and operands. We encourage readers to read that section and the references therein for a more thorough study of TMA operations, which cover a much larger scope than this blog post is intended to. Here, we will discuss two extensions of TMA that are exposed via these cp.async.bulk instructions.

TMA Store Reduce

Recall that TMA store copies data from the SMEM of multiple CTAs into the corresponding tiles in a GMEM tensor. We can interpret TMA store as an assignment operation illustrated by the following Python pseudocode:

for cta_idx in range(number_of_ctas):
    gmem_dst[cta_idx] = smem_src[cta_idx]

What if we want to do the following instead?

for cta_idx in range(number_of_ctas):
    gmem_dst[cta_idx] += smem_src[cta_idx]
    # or this:
    gmem_dst[cta_idx] = max(gmem_dst[cta_idx], smem_src[cta_idx])
    # or this:
    gmem_dst[cta_idx] = min(gmem_dst[cta_idx], smem_src[cta_idx])

All of these operations — namely reduce sum, reduce max, and reduce min — are fairly common in tensor programs. In particular, reduce sum is an inevitable subroutine in Split-K GEMM, while reduce max and reduce min are often used in attention. As simple as these operations look, implementing them in CUDA kernels is not very straightforward. We invite readers to briefly think through how many rounds of data movement between GMEM and SMEM must be carried out to achieve these goals before reading the next paragraph.

The vanilla implementation of a reduce operation that "accumulates" values from a CTA's SMEM into a tile in a GMEM tensor consists of one GMEM read, one processing block, and one GMEM write. First, the original value from GMEM is loaded into the CTA's SMEM or registers, then the reduce operation happens, and finally the result is written back out. This process is slow. Making a slight modification to the constructor of the TMA store TiledCopy object allows us to condense this three-step procedure into just one PTX instruction, namely cp.reduce.async.bulk instead of cp.async.bulk. Precisely, we can make the following one-line change in the host code:

// original: create a TMA store object
auto tma_store = make_tma_copy(SM90_TMA_STORE{}, gmem_tensor, smem_layout);
// to create a TMA reduce sum object
auto tma_reduce_sum = make_tma_copy(SM90_TMA_REDUCE_ADD{}, gmem_tensor, smem_layout);

and then use tma_reduce_sum instead, which now calls cp.reduce.async.bulk instead of cp.async.bulk under the hood. As an aside, the PTX instruction cp.reduce.async.bulk has been available since the release of CUDA 12.0, but was not exposed through CUTLASS and CuTe until the CUTLASS 3.5 release. We hope other reduction operations will be exposed in future releases, but if they are not, it's fairly simple to adapt the CuTe code for TMA reduce add to perform the max and min reductions, as well as the other bitwise reductions that cp.reduce.async.bulk offers: and, or, xor, inc, and dec.

TMA Load Multicast

In the previous section, we saw that studying PTX instructions allows us to discover TMA reduce operations, which can be used instead of TMA store for certain applications. In this section, we will study the multicast extension of TMA load.
To aid our understanding, we first take a look at the full syntax of cp.async.bulk.tensor:

// global -> shared::cluster:
cp.async.bulk.tensor.dim.dst.src{.load_mode}.completion_mechanism{.multicast}{.level::cache_hint}
    [dstMem], [tensorMap, tensorCoords], [mbar] {, im2colOffsets} {, ctaMask} {, cache-policy}

.dst =                  { .shared::cluster }
.src =                  { .global }
.dim =                  { .1d, .2d, .3d, .4d, .5d }
.completion_mechanism = { .mbarrier::complete_tx::bytes }
.load_mode =            { .tile, .im2col }
.level::cache_hint =    { .L2::cache_hint }
.multicast =            { .multicast::cluster }

Again, without the need to completely understand the syntax of PTX instructions, we see many familiar concepts such as .dim, .global for src, and .mbarrier for completion_mechanism. This section focuses on the multicast operand.

Multicast refers to a situation where we have a tile in a GMEM tensor that we want to copy to multiple SMEM locations in multiple CTAs. This is typically the case in GEMM kernels (i.e., matrix multiplication), where an input matrix column tile is needed for multiple row tiles or vice versa. In such cases, while TMA load is still perfectly functional — we simply provide the same TMA descriptor to the multiple CTAs that need it — the .multicast operand allows us to guarantee L2-cache hits.

Let's consider an extension of the above TMA load example to one with multicast. To begin with, we need to define the cluster dimensions of our kernel to be non-trivial, since a requirement for a subset of CTAs to collectively participate in a TMA load multicast operation is that they belong to the same (threadblock) cluster. In order to keep things simple, we will just change the grid dimensions like so:

// old grid dimensions and implicit trivial cluster dimensions
dim3 grid_dims = dim3{M / CTA_M, N / CTA_N, 1};
dim3 cluster_dims = dim3{1, 1, 1};

// new grid dimensions and cluster dimensions
dim3 grid_dims = dim3{M / CTA_M, N / CTA_N, 2};
dim3 cluster_dims = dim3{1, 1, 2};

Note that when using clusters, the cluster dimensions must evenly divide the grid dimensions, or the kernel will not launch. In our new kernel, we will then arrange for the same tile of GMEM to be loaded into each CTA's SMEM for every pair of CTAs in the same cluster, which occurs if and only if the two CTAs have the same blockIdx.x and blockIdx.y. First, in the host code we make the following change to the definition of the TMA load TiledCopy object:

// original: create a TMA load object
auto tma_load = make_tma_copy(SM90_TMA_LOAD{}, gmem_tensor, smem_layout);
// new: create a TMA load multicast object for the given cluster size
auto tma_load = make_tma_copy(SM90_TMA_LOAD_MULTICAST{}, gmem_tensor, smem_layout, cute::_2{});

We write _2{} for the last parameter (the cluster size) to pass it as a compile-time constant, using the CuTe integer types provided for this purpose. In practice and more idiomatically, we would have defined the ClusterShape type beforehand (in our case, as Shape<_1,_1,_2>) and then written size<2>(ClusterShape{}) for that parameter.
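Concretely, the more idiomatic host-side setup might look like the following sketch (a sketch of ours rather than the original code, reusing the gmem_tensor and smem_layout defined earlier in the post):

// Idiomatic host-side setup for the multicast TMA load (a sketch under the stated assumptions).
using ClusterShape = cute::Shape<cute::_1, cute::_1, cute::_2>;   // 1 x 1 x 2 threadblock cluster

auto tma_load = cute::make_tma_copy(cute::SM90_TMA_LOAD_MULTICAST{},
                                    gmem_tensor,
                                    smem_layout,
                                    cute::size<2>(ClusterShape{}));  // multicast over the 2 CTAs

Defining ClusterShape once also lets the kernel query the cluster size with the same size<2>(ClusterShape{}) expression, as the kernel template below does.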
We then change the kernel code as follows:

template <typename T, int CTA_M, int CTA_N, class ClusterShape, class TmaLoad, class GmemTensor>
__global__ void tma_load_kernel(__grid_constant__ const TmaLoad tma_load, GmemTensor gmem_tensor) {
  using namespace cute;

  uint32_t block_rank_in_cluster = cute::block_rank_in_cluster();
  constexpr uint32_t cluster_size = size<2>(ClusterShape{});
  constexpr uint16_t tma_mcast_mask = (uint16_t(1) << cluster_size) - 1;
  constexpr int tma_transaction_bytes = CTA_M * CTA_N * sizeof(T);

  __shared__ T smem_data[CTA_M * CTA_N];
  __shared__ uint64_t tma_load_mbar;

  auto smem_layout = make_layout(make_shape(CTA_M, CTA_N), LayoutRight{});
  auto smem_tensor = make_tensor(make_smem_ptr(smem_data), smem_layout);

  auto gmem_tensor_coord = tma_load.get_tma_tensor(shape(gmem_tensor));
  auto gmem_tensor_coord_cta = local_tile(
      gmem_tensor_coord,
      Tile<Int<CTA_M>, Int<CTA_N>>{},
      make_coord(blockIdx.x, blockIdx.y));

  if (threadIdx.x == 0) {
    initialize_barrier(tma_load_mbar, /* arrival count */ 1);
  }
  __syncthreads();
  cute::cluster_sync();
  cutlass::arch::fence_barrier_init();

  if (threadIdx.x == 0) {
    set_barrier_transaction_bytes(tma_load_mbar, tma_transaction_bytes);

    auto tma_load_per_cta = tma_load.get_slice(block_rank_in_cluster);
    copy(tma_load.with(tma_load_mbar, tma_mcast_mask),
         tma_load_per_cta.partition_S(gmem_tensor_coord_cta),
         tma_load_per_cta.partition_D(smem_tensor));
  }
  __syncthreads();
  wait_barrier(tma_load_mbar, /* phase */ 0);

  // after this line, the TMA load is finished
  cute::cluster_sync();
}

The relevant changes are as follows. First, we now need to track the internal index of the CTA within its cluster, which we fetch via the CuTe method block_rank_in_cluster(). This returns the value of the special register %cluster_ctarank, which will take on the values 0 and 1 in our example. For brevity, let us refer to this as the ctaid. We then have the following three modifications to the code to unpack: (1) the cluster-wide synchronization, (2) the multicast mask passed to the copy, and (3) the use of the ctaid when slicing the TiledCopy.

For (1), we use the CuTe method cluster_sync(), which does both a cluster barrier arrive and a wait operation in sequence. We insert this in two places: right after the mbarrier initialization, we use cluster_sync() together with a fence to ensure cluster-wide visibility of the mbarrier initialization, and at the very end of the kernel we use another cluster_sync() to ensure that one of the two CTAs in the cluster doesn't exit prematurely while the other is still waiting for the multicast load to complete. In general, there would be compute done on the data loaded into SMEM, and the last cluster_sync() would appear at the very end of the kernel code.

For (2), we pass a uint16_t bitmask to the copy operation to specify which CTAs will participate in the TMA multicast load. The bits set to 1 in the mask indicate which CTAs are active, with a maximum of 16 CTAs in a cluster (the maximum nonportable size) and the position of each bit corresponding to the ctaid. Thus, in our example, by setting tma_mcast_mask to 0b11 we specify that both CTAs in the cluster will participate.

Finally, for (3), the ctaid is used to specify the offset used when slicing into GMEM for the TMA multicast load operation launched from the given CTA. To explain this point clearly, consider the following example of loading a 16 x 16 tile of integers, initialized to be 0-255 in ascending row-major order, from GMEM into the SMEM of two CTAs in a cluster. Suppose we mistakenly gave 0 as the parameter to tma_load.get_slice for both CTAs.
Then we obtain the following in both CTAs' SMEM after completion of the load:

  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
 16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31
 32  33  34  35  36  37  38  39  40  41  42  43  44  45  46  47
 48  49  50  51  52  53  54  55  56  57  58  59  60  61  62  63
 64  65  66  67  68  69  70  71  72  73  74  75  76  77  78  79
 80  81  82  83  84  85  86  87  88  89  90  91  92  93  94  95
 96  97  98  99 100 101 102 103 104 105 106 107 108 109 110 111
112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0

In contrast, if we have 1 be the given parameter for both CTAs, then we get this in both CTAs' SMEM:

  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143
144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159
160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175
176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191
192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207
208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223
224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239
240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255

Finally, giving either 0 from ctaid 1 and 1 from ctaid 0, or 0 from ctaid 0 and 1 from ctaid 1, would correctly load the entire tile into both CTAs' SMEM.

These printouts illustrate that issuing the multicast operation from one CTA in the cluster loads half of the GMEM tile into each of the two CTAs' SMEM, with the slice of the TiledCopy determining the respective half. This is consistent with the description of multicast for cp.async.bulk.tensor in the PTX documentation: "The source data is multicast to the same CTA-relative offset as dstMem in the shared memory of each destination CTA."

In terms of the TiledCopy object, which generically has a layout TiledLayout_TV mapping thread-value tuples to logical coordinates of the tile, CuTe treats the ctaid as the thread index for the purposes of slicing. For example, printing out the TiledCopy in our 16 x 16 example yields the following:

TiledCopy
  Tiler_MN:       (_16,_16)
  TiledLayout_TV: (_2,((_16,_16))):(_8,((_16,_1)))
Copy_Atom
  ThrID:        _1:_0
  ValLayoutSrc: (_1,_256):(_0,_1)
  ValLayoutDst: (_1,_256):(_0,_1)
  ValLayoutRef: (_1,_256):(_0,_1)
  ValueType:    32b

which has two "threads" corresponding to the two CTAs in the cluster, with the offset position given by the logical coordinate (8,0) in the (16,16) tile for ctaid 1.

Conclusion

In this blog post, we walked through a few simplified examples of using TMA load, store, store reduce, and load multicast to perform memory copy between GMEM and SMEM in a CUDA kernel, using the methods provided by the CUTLASS library. We started by providing an overview of TMA and went into how a user can invoke these operations in a GPU kernel. Then, we dived deeper into the low-level PTX instructions in order to elicit a greater understanding of TMA. We hope this blog post is helpful for readers who want to understand TMA, to refresh their knowledge on the topic, or to debug their existing projects which use TMA.
We left out a few important topics such as supported swizzling modes for TMA and the ability for TMA to copy GMEM to SMEM in an interleaved format, permuting strides outside the contiguous dimension. These are important when using TMA in conjunction with the Warpgroup Matrix-Multiply-Accumulate (WGMMA) instructions, also new to the Hopper architecture, in order to load tensor data in a memory format compatible with WGMMA. We will explain these points when we discuss Hopper-based GEMM in a future post. Lastly, fully-worked out examples of the kernels discussed in this blog post can be found on our Colfax Research GitHub repository.
Tutorial: Matrix Transpose in CUTLASS

The goal of this tutorial is to elicit the concepts and techniques involving memory copy when programming on NVIDIA® GPUs using CUTLASS and its core backend library CuTe. Specifically, we will study the task of matrix transpose as an illustrative example for these concepts. We choose this task because it involves no operation other than copying data from one set of addresses to another, allowing us to study in isolation those aspects of memory copy optimization, such as coalesced accesses, that can be separated from workloads also involving computation.

Our treatment takes inspiration from Mark Harris's Efficient Matrix Transpose tutorial, which we recommend for an in-depth discussion of the matrix transpose problem that does not directly involve the abstractions from CuTe that we use here. Conversely, our tutorial can also serve as an introduction to these abstractions for the reader already familiar with Harris' tutorial. In any case, we will review the key ideas from that tutorial here before explaining how to implement the corresponding optimized solution using CuTe.

Review of Coalesced accesses:

In many compute workloads, particularly those found in ML/AI applications, we work with multidimensional arrays known as tensors. Since computer memory is inherently one-dimensional, these tensors must be linearized, or organized into this one-dimensional space. As a consequence, adjacent elements in some dimensions of a tensor might not be adjacent in memory. We say that a dimension is contiguous when elements adjacent in that dimension are also adjacent in memory. A block of consecutive elements in a contiguous dimension is also called contiguous.

Accesses — i.e., reads or writes — to a contiguous block of memory are called coalesced, whereas accesses to non-contiguous blocks of memory are called strided. Coalesced accesses typically offer faster performance than strided ones because they align more effectively with the GPU's memory architecture, allowing for more efficient data caching and retrieval. For this reason, optimizing for coalesced memory access is highly desirable when programming for GPUs.

Certain workloads, however, necessitate strided accesses and cannot be implemented otherwise. The matrix transpose — or more generally, tensor permute operations — are prime examples where strided accesses are unavoidable. In such cases, it is crucial to minimize the performance impact of these less efficient access patterns. One standard technique is to only perform the strided accesses at the lower and faster levels of the GPU memory hierarchy, which we now recall.

For the purposes of our discussion, there are three programmable levels to the GPU memory hierarchy. From the highest to lowest level, we have global, shared, and register memory. The global memory (GMEM), i.e. High-Bandwidth Memory (HBM), is the largest of the three, and also the slowest one to read from or write into. For instance, an NVIDIA H100 Tensor Core GPU has 80 GB of GMEM. Strided access here will have the worst effect on performance. Next is the shared memory (SMEM), which is much smaller but considerably faster than GMEM. An NVIDIA H100 Tensor Core GPU, for example, has up to 228KB of SMEM per streaming multiprocessor (SM). For readers more familiar with memory architectures, we note that SMEM is physically carved out of the L1 cache.
SMEM is shared amongst all threads within the same cooperative thread array (CTA), and each CTA operates within its own segment of SMEM. Strided access here is still suboptimal, but is greatly preferred over strided access in GMEM. Finally, we have the register memory (RMEM), which is dedicated to individual threads.

In this tutorial, memory accesses only comprise copying numbers (e.g., 32-bit floats) around, either from one level to another, or between different locations in the same level. The naive transpose method discussed in Harris' tutorial starts out with strided accesses in GMEM -transpose-> GMEM. He then improves upon this by first copying data from GMEM to SMEM so that we have GMEM -> SMEM -transpose-> SMEM -> GMEM. This way, the strided load happens in SMEM whereas both GMEM accesses are coalesced.

CuTe way:

We now discuss how to implement these two approaches using the CuTe library. We start with the naive method, mainly to demonstrate what not to do. Data in the CuTe framework is abstracted as cute::Tensor objects. A CuTe tensor consists of a pointer (in the C sense) to the first element of the tensor, as well as a cute::Layout object, which describes the offset of each element in the tensor with respect to the first element through defining shape and stride integer tuples. For example, for a row-major matrix of dimensions M by N, we would define the layout to have shape (M, N) and stride (N, 1).

For a cute::Layout, we note that one of the available options when defining the layout of a new tensor is to specify whether it is row-major or column-major in terms of the strides (GenRowMajor or GenColMajor). In a column-major matrix, adjacent elements within a column are contiguous while adjacent elements across columns are strided in memory. By default, CuTe uses column-major layouts. More generally, we could specify strides for each dimension of the shape of the layout. One easy way to implement the transpose is to define the input as row-major, create a column-major view of the output (which, read in row-major order, is the transposed matrix), and let CuTe figure out the copy.

using namespace cute;

int M = 2048, N = 2048;

float *d_S, *d_D;
// Allocate and initialize d_S and d_D on device (omitted).

// Create the row major layouts.
auto tensor_shape = make_shape(M, N);
auto tensor_shape_trans = make_shape(N, M);
auto gmemLayoutS = make_layout(tensor_shape, GenRowMajor{});
auto gmemLayoutD = make_layout(tensor_shape_trans, GenRowMajor{});

// Create the row major tensors.
Tensor tensor_S = make_tensor(make_gmem_ptr(d_S), gmemLayoutS);
Tensor tensor_D = make_tensor(make_gmem_ptr(d_D), gmemLayoutD);

// Create a column major layout. Note that we use (M,N) for shape.
auto gmemLayoutDT = make_layout(tensor_shape, GenColMajor{});

// Create a column major view of the dst tensor.
Tensor tensor_DT = make_tensor(make_gmem_ptr(d_D), gmemLayoutDT);

An important note here is that although we have three tensors, we only have two actual copies of the data. This is because tensor_D and tensor_DT both use the data in d_D — they are two different views of the same data. We will be using the column-major view for our transpose kernel, and the row-major view when we are verifying the transpose result.

Next, we need to determine how to divide the input Tensor into smaller chunks that we can distribute over the CTAs. We can do this using the cute::tiled_divide method.
using namespace cute;
using b = Int<32>;

auto block_shape = make_shape(b{}, b{});       // (b, b)

Tensor tiled_tensor_S  = tiled_divide(tensor_S, block_shape);  // ([b,b], m/b, n/b)
Tensor tiled_tensor_DT = tiled_divide(tensor_DT, block_shape); // ([b,b], m/b, n/b)

Here, we specify the tile size to be 32 by 32. The tile size is an important tuning parameter and should be tuned for each specific workload. In fact, 32 by 32 is not the optimal value for the transpose kernel, and we will tune it before benchmarking. tiled_divide creates a tensor with the same data but a different layout, i.e., a different view of the data. In our case, for tensor_S we start out with a 2D matrix of size (M, N). cute::tiled_divide with a tile size of b generates a view of a 3D matrix of size ([b,b], M/b, N/b); b by b matrices in an M/b by N/b grid. This view makes it much easier to access the correct tile once inside the kernel.

Tensor tile_S  = tiled_tensor_S(make_coord(_, _), blockIdx.x, blockIdx.y);
Tensor tile_DT = tiled_tensor_DT(make_coord(_, _), blockIdx.x, blockIdx.y);

Here, placing make_coord(_, _) as the first argument takes the entire first dimension, while specifying the integer values of the second and third dimensions as the block indices takes the corresponding slice of the tensor. (For those familiar with numpy: the underscore (_) in CuTe is equivalent to the colon (:) notation there.) In other words, tile_S represents the entire b by b matrix located at grid point (blockIdx.x, blockIdx.y). Note that we don't swap blockIdx.x and blockIdx.y when slicing into tiled_tensor_DT since we already took the column-major view with shape (M, N) (by contrast, if we instead took a tiled divide of tensor_D, we would need to swap the block indices and then use different thread layouts for source and destination in local_partition below). We can then get the part assigned to a specific thread with:

auto thr_layout = make_layout(make_shape(Int<8>{}, Int<32>{}), GenRowMajor{});

Tensor thr_tile_S  = local_partition(tile_S, thr_layout, threadIdx.x);
Tensor thr_tile_DT = local_partition(tile_DT, thr_layout, threadIdx.x);

Here, we launched the kernel with 256 threads per CTA and chose a thread layout so that loads from GMEM are coalesced whereas stores to GMEM are uncoalesced (as we emphasized above, no matter which thread layout is chosen there will be uncoalesced accesses). Finally, we can use cute::copy to copy the data from thr_tile_S to thr_tile_DT.

Tensor rmem = make_tensor_like(thr_tile_S);
copy(thr_tile_S, rmem);
copy(rmem, thr_tile_DT);

Now we can benchmark this against a pure copy kernel. The code for the copy kernel is based on CUTLASS's tiled_copy example, so we will leave unpacking it as an exercise for the reader. Additionally, we have found empirically that the tile size 32 by 1024 gave the best performance for our workload. Just as we saw in Harris' post, the speed of this naive method is not very good. This is because this copy is a strided copy from GMEM -> GMEM. In order to confirm this, let's profile this transpose using NVIDIA Nsight™ Compute. This profiling tool can detect problems in code that lead to lowered performance. Nsight Compute has a wide range of tools to help with optimization, but a full exploration of Nsight is beyond the scope of this article; for this article we will simply be looking at the summary page of the GUI.
Profiling the naive transpose, the summary page shows that the problem of uncoalesced accesses indeed constitutes the primary reported issue. Next, we study the improved algorithm: copy the data from GMEM into SMEM first, then do the transpose, then copy back from SMEM to GMEM. To move the strided access to SMEM, we need a Tensor that uses SMEM. We will use CuTe to allocate an array_aligned object in a CTA's SMEM.

using namespace cute;
using CuteArray = array_aligned<Element, cosize_v<SmemLayout>>;

extern __shared__ char shared_memory[];
CuteArray &smem = *reinterpret_cast<CuteArray*>(shared_memory);

Here, smemLayout is the Layout for the SMEM used in a single tile. We can now create a tensor whose data pointer is shared_memory:

Tensor sS = make_tensor(make_smem_ptr(smem.data()), smemLayout);

One important note here is that we must ensure the SMEM tensor is small enough to fit on a single SM. In other words, the size of smemLayout multiplied by the number of bytes per Element must be less than the total SMEM capacity on a single SM. Beyond that, we have occupancy considerations to contend with depending on the SMEM used per CTA.

Now we can repeat the column-major view trick we did with the data in GMEM, except that this time we apply it to SMEM. We create two different views of SMEM — one row-major and the other column-major.

using namespace cute;
using b = Int<32>;

auto block_shape = make_shape(b{}, b{});       // (b, b)

// Create two Layouts, one col-major and one row-major
auto smemLayout  = make_layout(block_shape, GenRowMajor{});
auto smemLayoutT = make_layout(block_shape, GenColMajor{});

// Create two views of smem
Tensor sS = make_tensor(make_smem_ptr(smem.data()), smemLayout);
Tensor sD = make_tensor(make_smem_ptr(smem.data()), smemLayoutT);

Finally, we can use cute::copy to copy from GMEM to SMEM and then back from SMEM to GMEM. Note here that S and D are the tiled_divide views of tensor_S and tensor_D, and tS and tD are thread layouts chosen to ensure coalesced accesses to GMEM (in fact, they are both equal to thr_layout from above!).

// Slice to get the CTA's view of GMEM.
Tensor gS = S(make_coord(_, _), blockIdx.x, blockIdx.y); // (bM, bN)
Tensor gD = D(make_coord(_, _), blockIdx.y, blockIdx.x); // (bN, bM)

// Create the thread partitions for each Tensor.
Tensor tSgS = local_partition(gS, tS, threadIdx.x);
Tensor tSsS = local_partition(sS, tS, threadIdx.x);
Tensor tDgD = local_partition(gD, tD, threadIdx.x);
Tensor tDsD = local_partition(sD, tD, threadIdx.x);

// Copy GMEM to SMEM.
cute::copy(tSgS, tSsS);

// Synchronization step. On SM80 and above, cute::copy
// does LDGSTS which necessitates async fence and wait.
cp_async_fence();
cp_async_wait<0>();
__syncthreads();

// Copy transposed SMEM to GMEM.
cute::copy(tDsD, tDgD);

Now when we benchmark, we get a much better result. Nonetheless, we are still a way off from the copy result. Profiling the code again, we can spot the next issue at hand: memory bank conflicts.

Memory Bank conflicts:

The strided SMEM version gets much better performance than the naive version, but it still does not match the copy performance. A large part of this discrepancy is due to memory bank conflicts. On most NVIDIA GPUs, the shared memory is organized into 32 memory banks. Only one thread in a warp is able to access a memory bank at a time; this is true for both read and write accesses. Hence, if multiple threads try to access the same memory bank, the accesses are serialized. This is called a bank conflict.
For a more in-depth discussion about bank conflicts, we recommend Lei Mao's excellent blog post. In more detail, elements are assigned in 32-bit words to memory banks in a round-robin format. The first 32 bits are assigned to bank 0, the next to bank 1, and so on, until the 33rd set of 32 bits is assigned to bank 0 again. So in a 32 by 32 (row-major) tile of float, each column maps to the same memory bank. This is the worst case scenario; with 32 threads in a warp, this causes a 32-way bank conflict.

Mark Harris' tutorial solves this issue by padding the rows by 1 number. This offsets the elements, causing every element in a column to fall in a different bank. We could replicate this workaround in CuTe by using non-default strides. The CuTe Layout contains information about stride, which defines the offset between elements in each dimension. We can add padding by setting the stride for the columns to be 33 instead of 32. In code, this can be done simply by:

auto block_shape = make_shape(Int<32>{}, Int<33>{}); // (b, b+1)

// Create two Layouts, one col-major and one row-major
auto smemLayout  = make_layout(block_shape, GenRowMajor{});
auto smemLayoutT = make_layout(block_shape, GenColMajor{});

However, this wastes memory for the extra 32 numbers in SMEM. In this article, we will implement an alternate solution — swizzle.

Swizzle and Layout Composition:

In order to discuss swizzle, we first need to expand on the CuTe Layout. A Layout is not simply a container that stores information about a Tensor's structure; it is a function that maps one coordinate to another. For example, take a column-major tensor A with M rows and N columns. Given the coordinates (4,5), i.e., row 4 and column 5, the Layout for A will map the tuple (4,5) to the integer 5M+4. This is the index for the element at coordinates (4,5) in the 1D pointer to the data. This abstracts away the often confusing coordinate math that comes with working with higher dimensional Tensors. Normally, the coordinate calculation is done using just the stride of the Tensor, which defines the offset in 1D memory space between adjacent elements in each dimension. For example, using the same Tensor A, the stride is (1,M). Elements in a column are adjacent to each other, i.e., with an offset of 1, while elements in a row are offset by M.

CuTe provides tools for more complex coordinate mapping functions. One such tool is Swizzle. The details of swizzling are beyond the scope of this tutorial, and we refer curious readers to NVIDIA's PTX documentation. By defining an appropriate swizzling function, CuTe programmers can access data the same way they would in the non-swizzling case, without worrying about bank conflicts. CuTe abstracts away the swizzling details by baking in swizzle as a property of a tensor's layout using the composition operation. Composition, as its name suggests, creates a functional composition of the layout arguments. Specifically, when a programmer accesses data in a swizzled tensor in SMEM — say by calling tensor(i) in CuTe, where the logical index i is what they think the accessing location is — they actually access the data at swizzle_function(tensor(i)).

Returning to the transpose, the swizzle function we need is Swizzle<5,0,5>. The number 5 here refers to the number of bits in the mask. Per the CuTe documentation, this function modifies the lower 5 bits of an index by taking their xor with the upper 5 bits (the mask). With this pattern applied to a 32 by 32 set of addresses, no two elements in a column map to the same memory bank, thereby avoiding all bank conflicts.
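To make this claim concrete, here is a small standalone check of ours (not from the original post) that models the effect of Swizzle<5,0,5> on the linear word index of a 32 x 32 row-major tile of 32-bit words and verifies that every column now touches all 32 banks:

// Standalone sanity check (our own sketch): model Swizzle<5,0,5> as "xor the lower 5 bits of the
// linear word index with the 5 bits above them", and verify that within a 32 x 32 row-major tile
// of 32-bit words, every column hits all 32 SMEM banks exactly once.
#include <cassert>

int swizzled_bank(int row, int col) {
  int idx = row * 32 + col;                  // linear word index within the tile (0..1023)
  int swizzled = idx ^ ((idx >> 5) & 0x1f);  // xor lower 5 bits with the upper 5 bits
  return swizzled % 32;                      // 32 banks of 4 bytes, assigned round-robin
}

int main() {
  for (int col = 0; col < 32; ++col) {
    unsigned seen = 0;
    for (int row = 0; row < 32; ++row) {
      seen |= 1u << swizzled_bank(row, col);
    }
    assert(seen == 0xffffffffu);             // each column maps onto all 32 distinct banks
  }
  return 0;
}

Without the xor, the bank would simply equal the column index, so all 32 elements of a column would collide in a single bank — exactly the 32-way conflict described above.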
We add this swizzle pattern to our Layouts.

auto tileLayoutS = make_layout(block_shape, GenRowMajor{});
auto smemLayoutS_swizzle = composition(Swizzle<5, 0, 5>{}, tileLayoutS);

We also note that other data storing patterns in SMEM will require different swizzling functions. We encourage the reader to experiment with the generic swizzle functions exposed via CuTe and pick what works best for them.

Transposing via Layout Composition:

Above, we have discussed how to transpose a tile in SMEM by defining a column-major Layout of the tile. Here, we show an alternate method using Layout composition. Specifically, we make a Layout composed out of the swizzled LayoutS and LayoutD.

auto tileLayoutD = make_layout(block_shape_trans, GenRowMajor{});
auto smemLayoutD_swizzle = composition(smemLayoutS_swizzle, tileLayoutD);

The trick here is that both of these layouts are defined to be row-major, but CuTe uses column-major by default, including for layout algebra. We now claim that composition(tileLayoutS, tileLayoutD) equals

auto tileLayoutDT = make_layout(block_shape_trans, GenColMajor{});

To explain, let the block dimensions be bM and bN, so tileLayoutS and tileLayoutD have Shape:Stride given by (bM,bN):(bN,1) and (bN,bM):(bM,1), respectively. Then we have:

tileLayoutS(tileLayoutD(x,y)) = tileLayoutS(bM*x+y).

Now, to compute what the integer bM*x+y maps to under tileLayoutS, it is convenient to represent it as a coordinate pair in the domain shape (bM,bN). But since CuTe algebra for mapping 1D indices to coordinates in a shape is done column-major (or left-to-right), we see that bM*x+y corresponds to the coordinate (y,x). Thus, we get:

tileLayoutS(bM*x+y) = tileLayoutS((y,x)) = bN*y+x.

This shows that the composed Layout function equals that of the Layout (bN,bM):(1,bN), which validates the claim. Finally, we note that in the presence of post-composition with a swizzle function, pre-composition leaves the same swizzle in place, thus avoiding some code duplication.

Our swizzled solution gets us close to the performance of the copy kernel, just as Mark Harris' article did. With performance approaching the bandwidth limit, we are nearing the hardware limitation. Profiling the swizzle version shows that we have resolved the memory bank conflict issue. The last reported issue on long scoreboard stalls may be disregarded, as we are profiling a completely memory-bound kernel.

TMA:

Note that data transfer between GMEM and SMEM constitutes by far the majority of the time spent in our transpose kernel. The Tensor Memory Accelerator (TMA) is a feature introduced in the NVIDIA Hopper™ architecture that can be used in place of regular load and store instructions between GMEM and SMEM, thereby potentially improving the performance of our transpose kernel. We studied the usage of TMA for this tutorial and found a mixed set of results, which we describe in this section.

To review, TMA is a dedicated asynchronous memory copy unit for copying multi-dimensional data from GMEM to SMEM and vice versa. In the TMA model for async copy, instead of having the threads/warps in a CTA cooperatively copy a portion of the source tensor to the target tensor, one elects a single thread in the CTA to issue the load or store TMA instruction. While the instruction executes in the async proxy, threads are free to do other independent work. Barrier objects and synchronization primitives (fence, arrive and wait) are used to synchronize the data movement with computation that relies on the data.
When used in conjunction with a software pipelining scheme, TMA allows memory copy instructions to overlap with computation, which helps to hide latency. However, since the transpose kernel only does memory copy, we don't have an opportunity to demonstrate this benefit of TMA in this tutorial.

To clarify the performance of TMA for memory copy in isolation, we first studied the performance of TMA load and store copy kernels versus other alternatives, such as CuTe's TiledCopy tutorial, which does 128-bit vectorized loads and stores passing through RMEM only. We found that in this situation, TMA performed on par with this simpler alternative (after tile size tuning for both), with both near the memory bandwidth spec of the device. This outcome was in line with our expectations — indeed, we have no reason to expect TMA to outperform in the situation of pure memory copy according to a regular pattern.

In contrast, a naive attempt at using TMA for both load and store in a transpose kernel, with the same tile sizes chosen as above, performed worse than our best-performing version. This was due to the presence of bank conflicts! The immediate problem was that TMA only supports a restricted set of swizzle functions (intended for use in conjunction with WGMMA); for instance, see this section of the CuTe codebase. In particular, it doesn't support the Swizzle<5,0,5> function that we used above, which makes it less straightforward to completely eliminate bank conflicts. Note however that we have no reason to believe this is an essential problem, but we chose not to pursue this line of investigation further in light of our benchmarking with the copy kernel. Moreover, when trying out a version with TMA store only, with 128-bit vectorized loads into registers and then writes to SMEM, we found that it performed only slightly under par, even though the profiler still reported shared store bank conflicts (while avoiding bank conflicts for the TMA store from SMEM to GMEM). Because of these mixed results, we do not describe in detail the mechanics of how to use TMA, deferring this to a future blog post in which we will aim to study TMA in a context more suited to its strengths.

Conclusion:

In this tutorial, we introduced readers to a number of fundamental GPU memory concepts and how to program for them using the CuTe library, by way of implementing an efficient matrix transpose kernel. Starting with coalesced reads and writes, we touched upon the concepts of CuTe Layouts and Tensors, bank conflicts, swizzle functions, and TMA. Except for TMA, we have seen how a good understanding of these concepts is necessary to implement an efficient transpose kernel. In a subsequent post, we plan to study TMA in a setting where it is important for optimization.

To conclude this tutorial, we present the runtimes of the various kernels that we discussed. We included the JustCopy kernel as the headroom of what can be achieved, as well as both a naive PyTorch implementation (given by invoking contiguous() on torch.transpose) and one using torch.compile, in order to demonstrate the magnitude of the efficiency gains available through writing these low-level kernels. The source code for all of these kernels as well as the benchmarking script is available in a Colfax Research GitHub repository.

Edit (05/07/24): Added JIT compiled version of PyTorch transpose for reference (h/t @CHHillee).
7 responses to "Tutorial: Matrix Transpose in CUTLASS"

Comment: The shape comment is incorrect in this line:

Tensor tiled_tensor_DT = tiled_divide(tensor_DT, block_shape); // ([b,b], n/b, m/b)

It should be ([b,b], m/b, n/b).

Reply: Fixed, thanks!

Comment: Thanks for sharing! Regarding the code snippet below:

Tensor rmem = make_tensor_like(thr_tile_S);
copy(thr_tile_S, rmem);
copy(rmem, thr_tile_DT);

Why do we need to explicitly define a register fragment instead of simply writing:

copy(thr_tile_S, thr_tile_DT);

Reply: Both code snippets will achieve the same copy functionality. However, for the copy through an explicitly defined register fragment, the compiler nvcc will generate SASS with all LDG instructions before the STGs, while the second will have LDG and STG interleaved, which can increase the occurrence of long scoreboard stalls (from GMEM latency) and thereby degrade performance. I pushed an update to the cfx-article-src repo that includes both copy kernels for comparison.

Comment: Thanks for the reply! If I get it correctly, it's quite similar to the pipelining (or double buffering) optimization used when writing GEMM kernels (but with the help of the compiler instead of manual coding), which reduces the instruction latency to achieve maximum memory throughput. Please correct me if I'm wrong.

Reply: Yes, that's a good analogy, though I would say the coding is 'manual' since writing it in the two different ways we described generates different SASS. In the analogy, this would be akin to choosing the number of buffers.

Comment: Thanks, this really helps me understand it!
Delivering 1 PFLOP/s of Performance with FP8 FlashAttention-2

We recently released an update to our FlashAttention-2 forward pass implementation on the NVIDIA Hopper™ architecture that incorporates a number of new optimizations and improvements, including a software pipelining scheme and FP8 support. In this article, we will explain a challenge with achieving layout conformance of register fragments for WGMMA instructions that we encountered in the process of fusing back-to-back mixed-precision GEMMs with FP8 operands and FP32 accumulator. After explaining why this problem surfaces for FP8 but not FP16 precision operands, we'll describe a performant method to achieve layout conformance through using a concise combination of two NVIDIA® CUDA® intrinsics – namely, the byte permute and shuffle sync instructions. Our solution takes some inspiration from a blog post of Manish Gupta that studied a similar problem in the context of mixed-input GEMMs. Furthermore, we'll discuss another complication that arises with the majorness of the V matrix for FP8 WGMMA.

Finally, we will present some FLOPs/s benchmarks collected from runs with synthetic data on the NVIDIA® H100 Tensor Core GPU with the SXM5 board form factor. In particular, for head dimension 256 and large sequence length, we obtain over 1 petaflop/s of performance. This is consistent with other claims on FP8 fused attention performance in the same regime, e.g. by HippoAttention.

Recollections on FlashAttention-2

FlashAttention-2 is a memory-aware algorithm for multi-head attention (MHA) that was introduced by Tri Dao last year, building on earlier work of Dao and his collaborators. It serves as a blueprint for implementing MHA as a fused CUDA kernel (FMHA), in which intermediate steps of the attention computation aren't read back to HBM (i.e., global memory), but rather kept in shared memory or in registers. The key idea of FlashAttention-2, as well as the original FlashAttention, is to leverage tiling of the input tensors Q, K, V in conjunction with the online-softmax algorithm, which allows one to circumvent keeping the entire S matrix live for the softmax calculation.

Apart from the algorithm itself, there are a number of interesting challenges in terms of optimization to get a performant implementation on modern GPUs, in particular those built on the NVIDIA Hopper architecture like the H100 GPU. For example, to make optimal use of the Tensor Cores on an H100 GPU for carrying out the matrix multiplications in FMHA, it's important to target the new Hopper-specific WGMMA instructions as well as the Tensor Memory Accelerator (TMA) for loads from global to shared memory. We described how to accomplish this using tools from NVIDIA's open-source CUTLASS library in our paper from last December. There, we chose to work with half-precision FP16 input tensors Q, K, V for the head-to-head performance comparison with Dao's base implementation written for the NVIDIA Ampere architecture. However, in practice we want to transition to even lower precision types, in keeping with the philosophy of model quantization. Therefore, as a first step we need to understand how to accommodate FP8 input tensors in the design of the FMHA kernel.

In what follows, when we discuss invoking WGMMA (that is, wgmma.mma_async in PTX) to multiply matrices such as Q·KT or P·V, we will implicitly be referring to tiles of these matrices. For example, we could choose 64×64 tiles.
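As a refresher on the online-softmax idea mentioned above, the following is a minimal scalar sketch of ours (not code from the implementation) showing how one row of softmax(s)·v can be accumulated blockwise, with a running max and rescaling, so that the full row of scores never needs to be held at once:

// Scalar sketch (our own illustration, not from the FMHA kernel) of online softmax with
// value accumulation: process the scores s in blocks of size B, keeping a running max m,
// running denominator l, and running numerator acc, and rescale the partial results
// whenever the running max increases.
#include <algorithm>
#include <cmath>
#include <vector>

float online_softmax_dot(const std::vector<float>& s, const std::vector<float>& v, size_t B) {
  float m = -INFINITY, l = 0.0f, acc = 0.0f;
  for (size_t start = 0; start < s.size(); start += B) {
    size_t end = std::min(s.size(), start + B);
    float m_block = -INFINITY;
    for (size_t i = start; i < end; ++i) m_block = std::max(m_block, s[i]);
    float m_new = std::max(m, m_block);
    float rescale = std::exp(m - m_new);   // correct previously accumulated terms
    l *= rescale;
    acc *= rescale;
    for (size_t i = start; i < end; ++i) {
      float p = std::exp(s[i] - m_new);
      l += p;
      acc += p * v[i];
    }
    m = m_new;
  }
  return acc / l;  // equals dot(softmax(s), v), computed without storing all of exp(s)
}

In the FMHA setting, the blocks correspond to tiles along the key/value sequence length and acc becomes a tile of the output accumulator, but the rescaling logic is the same idea.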
The challenge with FP8: layout conformance of WGMMA register fragments

WGMMA is a warpgroup-level operation, so it involves 128 threads (4 warps) to carry out the MMA. When invoking a WGMMA instruction, accumulation occurs in those threads' registers. It is then paramount to understand the thread ownership pattern, i.e., the register fragment layout, of the entries of the result matrix, as prescribed by WGMMA. For example, we have the following layout for a slice of the FP32 accumulator tile, which is extracted from Figure 119 in the PTX documentation:

[Figure 1: register fragment layout of a slice of the FP32 accumulator tile (extract of Figure 119 in the PTX documentation).]

In Figure 1, we took the subset of the 64×N output tile consisting of the two rows 0 and 8 and columns 0–15. Figure 1 then indicates that these entries are evenly divided among threads 0–3. Moreover, the entries are arrayed contiguously in each thread's registers according to the d index. This is the register fragment layout that would appear as part of the wgmma.mma_async calls for computing either Q·K^T or P·V in FMHA, for instance, regardless of the precision type of the operands, given an FP32 accumulator.

When invoking WGMMA to do a matrix multiplication A·B, you have the option of arranging for the operand A to be either in shared memory or in registers (by contrast, operand B must always be in shared memory). For the second GEMM P·V in FMHA, we want to use the option of keeping the first operand in registers for performance reasons, avoiding unnecessary writes and reads with shared memory. In order to compute a correct result, we then need to match the FP32 accumulator register fragment layout to either the FP16 or FP8 operand A register fragment layout. To be more precise, we should first downcast the entries held in registers in place to the desired precision format, and then potentially do some register movement, both within and across threads.

This sounds complicated, and it turns out to involve some non-trivial maneuvers in the case of FP8, but let us first point out that one doesn't need to do any register movement at all in the case of FP16. This is because the layouts in question are then identical: just compare Figure 119 and Figure 118 in the PTX documentation. In fact, this makes it easy to take our FP16 FMHA kernel and allow for the input tensors Q and K to be FP8 while still fixing the input tensor V to be FP16. We call this the FP8 hybrid FMHA kernel, which is akin to the FP8 version of FMHA in the Triton tutorial.

Let's now consider the case at hand, with all three tensors Q, K, V in FP8 precision. Consider the following extract from Figure 122 in the PTX documentation:

[Figure 2: register fragment layout required for the A operand of an FP8 WGMMA (extract of Figure 122 in the PTX documentation).]

This is the thread ownership pattern, or layout conformance requirement, that must be satisfied prior to invoking an FP8 WGMMA. For example, according to Figure 2, we need to arrange that thread 0 has these 8 entries in its registers in order: T0d0, T0d1, T1d0, T1d1, T0d2, T0d3, T1d2, T1d3. In particular, observe that we will need to exchange data among threads to achieve layout conformance.

The Solution: Byte Permute and Shuffle Sync

Our solution in code is exposed on our repo via ReorgCFp8toAFp8, which we invoke in the consumer path of the pipelined implementation immediately before the second gemm call. Invoking that method on the CUTLASS Tensor tSrSPrec (the FP8 downcasted register fragment Tensor) accomplishes the following data movement, going from top to bottom:

[Figure 3: the register data movement performed by ReorgCFp8toAFp8.]

In Figure 3, each box represents two entries from Figure 1 of the same thread and color combined. Keeping in mind that the entries have been downcasted to FP8, each box is then 16 bits wide.
We then accomplish the register data movement in code by invoking a combination of two CUDA intrinsics: the byte permute intrinsic and the shuffle sync intrinsic.

An added wrinkle: transposing the V matrix offline

We've been thinking about Q, K, V as matrices for the attention formula, but for the attention layer the Q, K, V are actually 4-dimensional tensors, with dimensions given by the batch size B, sequence length S, number of heads H, and head dimension D. For our FP16 FMHA kernel, we conformed to a certain convention regarding how these tensors are arrayed in memory. Namely, we supposed that they are packed in the (B, S, 3, H, D) format. Then when loading tiles from global to shared memory via TMA, this convention forces the 2-dimensional Q, K, V tiles to be contiguous in the head dimension and strided in the sequence length dimension.

Given a gemm call to multiply A·B^T for an (M×K)-matrix A and an (N×K)-matrix B, we say that the A resp. B operand is mn-major if it is contiguous in the M resp. N dimension (or outer dimension), and k-major if it is instead contiguous in the K dimension (or inner dimension). Then given the (B, S, 3, H, D) convention, for the first GEMM Q·K^T the second operand is k-major, whereas for the second GEMM P·V the second operand is mn-major. Fortuitously, for FP16 precision this distinction turns out to be immaterial, since WGMMA accepts either k-major or mn-major for its second operand. However, this is no longer the case for FP8 precision. Therefore, we need to either transpose the V tensor as a preprocessing step before calling our FP8 FMHA kernel (but not the FP8 hybrid version!), or otherwise handle the transpose elsewhere, such as fusing it into the epilogue of the projection that creates the V tensor.

FLOPS benchmarks

Figure 4 shows the FLOPs/s improvements we see from moving to lower precision types. Runs were conducted with synthetic data drawn from a normal distribution with mean 0 and variance 1. To interpret the figure correctly, recall from above that FP8 Hybrid refers to the FMHA kernel in which the Q and K tensors are FP8 and V is FP16, and note that the TFLOPs/s reported for FP8 don't include the cost of transposing V, since that is a pre-processing step that is expected to be accounted for elsewhere.

In our experiments, we have tuned the number of pipeline stages as well as the tile sizes used in WGMMA independently for each kernel: we have configurable parameters QBLKSIZE and KBLKSIZE for tiling along the sequence lengths of Q and of K, V respectively (note that we never divide along the head dimension). We see that the largest improvement going from FP8 Hybrid to FP8 occurs in the case of head dimension 256. This happens because keeping V as FP8 reduces pressure on the shared memory, enabling us to set both QBLKSIZE and KBLKSIZE to 128.

Apart from the set of parameters chosen for Figure 4, we can also find parameters that showcase over 1 petaflop/s of performance through increasing the sequence length.
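For readers unfamiliar with those intrinsics, the following is a minimal, hypothetical sketch of the kind of data movement involved; it is not the ReorgCFp8toAFp8 method from the repository, and the selector constants are illustrative rather than the exact permutation required by Figure 2. Each thread packs its four downcasted FP8 values into one 32-bit register, exchanges that word with its neighbouring lane via a warp shuffle (__shfl_xor_sync here), and then interleaves byte pairs with a byte permute (__byte_perm); lane parity is assumed to be threadIdx.x & 1.

    #include <cstdint>

    // Hypothetical sketch: interleave FP8 byte pairs between neighbouring lanes.
    // my_packed holds this thread's four FP8 values d0..d3, one byte each
    // (d0 in the least significant byte).
    __device__ void interleave_fp8_pairs(uint32_t my_packed,
                                         uint32_t &out_lo, uint32_t &out_hi) {
        // Fetch the partner lane's packed word (lane pairs 0<->1, 2<->3, ...).
        uint32_t partner = __shfl_xor_sync(0xffffffffu, my_packed, 1);
        bool even_lane = ((threadIdx.x & 1) == 0);
        // __byte_perm(x, y, s): each nibble of s selects one byte from the
        // 8-byte pool {x bytes 0-3, y bytes 4-7}; nibble 0 selects the result's LSB.
        if (even_lane) {
            out_lo = __byte_perm(my_packed, partner, 0x5410); // {my d0, my d1, nbr d0, nbr d1}
            out_hi = __byte_perm(my_packed, partner, 0x7632); // {my d2, my d3, nbr d2, nbr d3}
        } else {
            out_lo = __byte_perm(partner, my_packed, 0x5410); // {nbr d0, nbr d1, my d0, my d1}
            out_hi = __byte_perm(partner, my_packed, 0x7632); // {nbr d2, nbr d3, my d2, my d3}
        }
    }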
For example, for sequence length 8448 = 64*132 we have:

    dgxuser@PM-DGX-H100:~/kernels$ ./fmha_fp8_pipe_128x128xCTA256 \
        --batch-size=4 --seq-length=8448 --head-size=256 --iterations=1000
    Using device 0: NVIDIA H100 80GB HBM3 (SM90, 132 SMs)
    M = 8448 N = 8448 K = 256 QBLK = 128 KBLK = 128 L = 32 : 8 * 4
    CUTE_FMHA: [1028661.0]Gflop/s (2.2735)ms

Or if we let sequence length be 16896 = 2*8448, then we have:

    dgxuser@PM-DGX-H100:~/kernels$ ./fmha_fp8_pipe_128x128xCTA256 \
        --batch-size=2 --seq-length=16896 --head-size=256 --iterations=1000
    Using device 0: NVIDIA H100 80GB HBM3 (SM90, 132 SMs)
    M = 16896 N = 16896 K = 256 QBLK = 128 KBLK = 128 L = 16 : 8 * 2
    CUTE_FMHA: [1057474.8]Gflop/s (4.4230)ms

For these examples, multiples of 132 were chosen to account for wave quantization effects.

Finally, for ease of reproducibility we give our autotuned parameters for the different kernels:

[Table 1: autotuned parameters for the different kernels.]

In Table 1, NOPIPE refers to replacing the software pipelining by our original copy/gemm overlapping scheme described in §6 of our paper. Also, with NOPIPE we don't use the QINRMEM option, while with the default pipelined version (STAGES=3) we do use QINRMEM.

Addendum: accuracy loss with FP8

As always with moving to lower precision types, the concomitant effects on accuracy loss have to be understood and managed correctly. We leave this important topic to future work. For now, we can report the root-mean-square error taken between the output matrix and a reference calculation in Table 2 (with the same synthetic data and other parameters fixed as in Figure 4):

[Table 2: root-mean-square error between the kernel output and a reference calculation.]

In Table 2, the reference calculation is done on the Q, K, V tensors generated in the given precision type and then upcasted to FP32.

Acknowledgments

We would like to thank Tri Dao for helpful comments on an earlier draft and Ying Zhang from the PyTorch team for pointing out the wave quantization optimization to us.

Edit (03/06/2024): Included more thorough autotuning for auxiliary parameters and updated FLOPS numbers based on this.
Edit (03/10/2024): Added reference to similar work by HippoAttention on FP8 fused attention.
Developing CUDA Kernels for Accelerated Matrix Multiplication on NVIDIA Hopper Architecture using the CUTLASS Library

GANESH BIKSHANDI† and JAY SHAH†

We explain how to develop NVIDIA CUDA kernels for optimized general matrix multiplication (GEMM) on NVIDIA Hopper architecture using the template collection CUTLASS and its core library CuTe. Our main contribution is to provide an implementation of a GEMM kernel that uses the Tensor Memory Accelerator (TMA) and Warp Group Matrix-Multiply-Accumulate (WGMMA) operations introduced with NVIDIA Hopper architecture.

1 INTRODUCTION

The problem of devising computationally efficient algorithms for matrix multiplication is of fundamental importance due to the basic role of this algebraic operation in scientific computing and machine learning. Indeed, it is the ability to massively parallelize matrix multiplication on the GPU that has in large part enabled the rise of AI in the modern world (in the form of ever-more sophisticated neural networks). As a closed-source and broadly scoped solution, NVIDIA already delivers high-performance implementations of general matrix multiplication (GEMM) in the cuBLAS library and convolutions (CONV) in the cuDNN library. However, in view of the ongoing rapid evolution of GPU architectures and powerful new AI models, it is becoming increasingly important for end-users to be able to implement GEMM and CONV algorithms for their specific applications by writing their own CUDA code.

Custom code is mostly needed in the epilogue of a GEMM kernel, where another operation has to be fused before writing out the result matrix. For example, Flash Multi-Head Attention (FMHA) is a recent optimized implementation of attention layers in transformer deep learning models such as Large Language Models (LLMs) [1]. As part of that attention algorithm, one wants to fuse the matrix multiplications with the softmax nonlinear operation in order to exploit data locality and thereby achieve better performance, as opposed to a straightforward implementation without fusion. Implementing FMHA using cuBLAS is not possible, as cuBLAS does not expose the thread block or thread-level operations where the fusion would need to happen.¹

To address these needs, NVIDIA has provided a suite of CUDA C++ template abstractions for implementing GEMM and related computations through their open-source library CUTLASS, integrated with the CuTe core library and backend since version 3.0. CUTLASS eases the development of optimized kernels by bringing to bear modern software engineering practices otherwise lacking in plain CUDA, such as OOP, C++ templates (generics), abstract data types (e.g., Tensors, Layouts, and Shapes), and re-usable design patterns. Nonetheless, it can be challenging for the CUDA programmer to fully leverage the capabilities afforded to them by CUTLASS.

In this publication and the accompanying code, we explain and implement a CuTe-based GEMM kernel for Hopper architecture that in particular exploits the new Tensor Memory Accelerator (TMA) and Warp Group MMA (WGMMA) operations to improve performance. To make this publication more self-contained, we also include a larger analysis of the GEMM problem; other insightful sources on this include [3] and [5]. In future work, we plan to integrate our GEMM kernel into FMHA and other LLM kernels.

¹ NVIDIA has announced the cuBLASDx API as a version of cuBLAS that will provide custom fusion support [2], but this isn't widely available yet.
† Colfax Research.
A copy of this paper is available at https://research.colfax-intl.com/nvidia-hopper-gemm-cutlass/. Date: December 14, 2023. Email: [email protected].

Complete working code for the GEMM kernel explained in this paper is available at https://github.com/ColfaxResearch/cutlass-kernels.

2 INITIAL RUNDOWN ON GEMM PERFORMANCE

The H100 PCIe GPU provides a maximum compute performance of 51 Teraflops for FP32 execution and 378 Teraflops for TF32 execution [6].² The goal for a performant GEMM implementation is to achieve as high a percentage of this theoretical maximum as possible. With reference to the functionality exposed by CUTLASS, performance is sensitive to many parameters such as:

• Shape of matrices (tall, skinny, or square);
• Layout of matrices (row or column);
• Number of warps per thread block;
• Thread block shape;
• Thread cluster shape;
• Number of pipeline stages in the software pipelining optimization;
• Chosen precision (TF32 vs FP32 vs FP16 vs FP8);
• Usage of special MMA instructions like WGMMA or TMA.

Fig. 1. CUTLASS SGEMM (TF32) performance (GFLOPS) on the Hopper (H100 PCIe) GPU for M=N=4096 and K ∈ {64, 128, 256, 512, 4096, 32768}, given different tiling shapes, warp counts and stage counts; sorted in decreasing order of performance.

Figure 1 shows the performance variation with different values chosen for TF32 precision.³ The results were obtained using cutlass_profiler, a tool provided by CUTLASS that generates different GEMM kernels off of a base kernel by varying parameters and then measures the resulting running time and GFLOPS (among other metrics). From our empirical studies, we discovered that the best GEMM kernels try to optimize for maximum Tensor Core utilization and maximum possible GPU occupancy using software pipelining.

² The NVIDIA datasheet lists 756 TFlops for TF32 with sparsity.
³ Performance is evaluated with respect to the matrix-multiply operation C = A·B for A and B generic M×K and K×N matrices, respectively. In general, the variables M, N, and K always refer to these dimensions.

In Figure 2, we display the performance of four such versions of GEMM for single-precision TF32 on the H100 PCIe GPU. To the best of our knowledge, these kernels have been optimally tuned given the available specifications.

Fig. 2. SGEMM (TF32) performance for M=N=K=4096: cuBLAS 215.6 TFLOPS (54% efficiency), CuTe 170.0 TFLOPS (42%), CUTLASS TMA+WGMMA 228.5 TFLOPS (57%), CUTLASS TMA+WGMMA+WS 249.5 TFLOPS (62%).

The four versions listed here are:

(1) cuBLAS: A version from the cuBLAS library in TF32 execution mode (set using NVIDIA_TF32_OVERRIDE=1).
(2) CuTe: A hand-implemented version (presented in this paper) using CuTe, a collection of C++ CUDA template abstractions for defining and operating on hierarchically multidimensional layouts of threads and data. This implementation uses TMA for loads and WGMMA for matrix operations. Software pipelining (cf. §5) is not used in this version.
(3) CUTLASS, TMA+WGMMA: A version shipped with the CUTLASS library that uses TMA loads and WGMMA instructions along with the software pipelining optimization. This version served as our guide in implementing GEMM kernels using CuTe.
(4) CUTLASS, TMA+WGMMA+WS: An improved version of (3) that also uses Warp Specialization (WS) and Thread Clusters.
This version, while still not the best possible, utilizes other key differentiators of Hopper architecture.⁴ We have chosen to include this to showcase deeper optimizations available for Hopper.

Discussion on using cuBLAS versus CUTLASS has sometimes been framed as trading off the superior general performance of cuBLAS for the customizability of CUTLASS.⁵ However, Figure 2 shows that CUTLASS is now more than competitive with cuBLAS; even our custom version, which implements only a small subset of all possible optimizations, comes close in performance. We also note that the best CUTLASS kernel we studied achieves close to 280 TeraFlops in performance. This observed discrepancy in performance between cuBLAS and CUTLASS is consistent with NVIDIA's own benchmarking [4]. At any rate, the upshot is that good performance for a custom lightweight GEMM kernel is achievable thanks to CUTLASS, given some effort put in by the CUDA programmer, which bodes well for the performance of fused kernels down the line. Furthermore, in §6 we will see that our CuTe program outperforms both cuBLAS and CUTLASS kernels for Batched-GEMM with K = 64 and batch count L = 96.

⁴ The idea of warp specialization itself is of course not new to Hopper, but it is only implemented in CUTLASS starting with Hopper [11].
⁵ See for example https://github.com/NVIDIA/cutlass/issues/109 for public comment.

3 MATRIX MULTIPLICATION ON THE GPU

To orient the reader, we give a rapid review of some of the basic ideas involved in writing a GEMM kernel in CUDA.

3.1 The GPU Memory Hierarchy and CUDA Thread Hierarchy

To begin with, understanding the GPU memory hierarchy is crucial for optimizing GEMM kernels. The GPU has three distinct levels in its memory hierarchy, proceeding from larger and slower to smaller and faster memory:

(1) HBM (High Bandwidth Memory) or Global Memory (GMEM).
(2) Shared memory (on-chip) (SMEM).
(3) Register memory (RMEM).

On the other hand, the CUDA programming model has the thread hierarchy. This is comprised, from coarser to finer groupings, of grids, thread blocks (i.e., cooperative thread arrays or CTAs), and threads.⁶ GMEM is available across the entire grid; SMEM is per streaming multiprocessor (SM), to which a thread block is assigned; and individual threads have their own RMEM. In this way, CUDA exposes the GPU memory hierarchy to the programmer [8]. Also note that for the H100 GPU, 32 threads are grouped into one warp for parallel execution on an SM; the size of a warp is fixed by the particular GPU architecture and is thus not under the programmer's control.

⁶ On top of this, Hopper introduces thread block clusters as an intermediate level between grids and thread blocks. The thread blocks within a cluster, spread over different SMs, can access each other's shared memory (distributed shared memory).

The matrix multiplication algorithms of interest to us are written to be aware of this hierarchical structure. More precisely, they decompose the top-level matrix multiplication into multiple sub-matrix multiplications (or tiled matrix multiplications). Each decomposition step made in the algorithm corresponds to moving across one of the levels in the CUDA thread hierarchy and GPU memory hierarchy. A typical example of how one might assign tiling shapes to different levels of the hierarchy is given in Figure 3 (taken from [3]).⁷

Fig. 3. A typical tiling hierarchy used by a GEMM kernel optimized for NVIDIA GPUs.

Figure 3 should be read as follows:
⁷ In practice, there are many possible choices for how one might tile the matrices (cf. Figure 1).

• At the HBM or global memory level, the A matrix is divided across the rows (M dimension) and the B matrix is divided across the columns (N dimension). The result matrix C is divided along both the M and N dimensions. A row panel of A and a column panel of B are assigned to a thread block, which computes a tile of the output C.
• Each thread block further tiles the row panel of A and the column panel of B along the K dimension; this ensures that the tiles fit in shared memory. The corresponding tiles of A and B are then multiplied, accumulating the result in the tile of the C matrix assigned to the thread block.
• Further optimization then proceeds recursively by sub-tiling each of the tiles for warp- and thread-level computation. Sub-tiles are moved into register memory and the actual element-wise multiplication is always done at the last level of tiling.
• Note that the sub-tile chosen in the warp tile to be assigned to a thread does not have a contiguous shape; rather, we have 32 threads assigned to the sub-tiles in each quadrant of the warp tile, so that each thread's sub-tile is spread over all four quadrants. This is done to exploit memory coalescing for parallel thread execution.

In the remainder of this section, we go over some (pseudo)code that describes both the naive Matmul and tiled Matmul algorithms.

3.2 Naive Matrix Multiplication

Recall that a naive matrix multiplication can be written using a triply nested loop in C++ as follows:

    for (int i = 0; i < M; ++i)
      for (int j = 0; j < N; ++j)
        for (int k = 0; k < K; ++k)
          C[i][j] += A[i][k] * B[k][j];

However, writing in CUDA means we can parallelize one, two or all of the loops. CUDA naturally provides 2D parallelism in the form of thread blocks and grids when a CUDA kernel is launched, which makes it easy to parallelize the i and j loops. The following code snippet illustrates this principle.

    __global__ void NaiveGemmKernel(int M, int N, int K,
                                    float const *A, int lda,
                                    float const *B, int ldb,
                                    float *C, int ldc) {
      int i = threadIdx.x + blockIdx.x * blockDim.x;
      int j = threadIdx.y + blockIdx.y * blockDim.y;
      if (i < M && j < N) {
        for (int k = 0; k < K; ++k)
          C[i + j * ldc] += A[i + k * lda] * B[k + j * ldb];
      }
    }

    dim3 block(16, 16);
    dim3 grid((M + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    NaiveGemmKernel<<<grid, block>>>(M, N, K, A, lda, B, ldb, C, ldc);

The k loop is now the main loop of the matrix multiplication, while i and j are not expressed "explicitly" in the CUDA code, as they are inferred from the kernel launch parameters given for the thread block and grid dimensions. The k loop is not parallelized, but CUTLASS provides an option to parallelize along that dimension also, via the k-split algorithm. For our discussion, we will suppose that the k loop is not parallelized.

3.3 Outer Product Summation and Tiling

The naive matrix multiplication in §3.2 requires the matrices A and B to be repeatedly brought into the cache or shared memory. To obviate this issue, the k loop can be permuted to be the outermost loop, and this leads to the "outer product" version of matrix multiplication in which the result matrix C is computed as a sum of K many outer products (of columns of A and rows of B).
    for (int k = 0; k < K; ++k)      // K dimension is now the outermost loop
      for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j)
          C[i][j] += A[i][k] * B[k][j];

One problem with computing the result matrix C via outer product summation is that this requires C to be live all the time, which for large enough dimensions can quickly exceed all available on-chip memory. A way to circumvent this problem is "tiling" or "blocking". The computation can first be tiled into smaller units that fit into on-chip memory, after which another set of loops iterates over the tiles:

    for (int m = 0; m < M; m += MT)        // iterate over M dimension
      for (int n = 0; n < N; n += NT)      // iterate over N dimension
        for (int k = 0; k < K; ++k)
          for (int i = 0; i < MT; ++i)     // compute for one tile
            for (int j = 0; j < NT; ++j) {
              int row = m + i;
              int col = n + j;
              C[row][col] += A[row][k] * B[k][col];
            }

Note that we have not yet tiled along the K dimension; this comes next.

3.4 Hierarchical or Recursive Matrix Multiplication

We now refer back to Figure 3 and its multiple levels of tiling. Matrix multiplication optimized for the GPU is implemented in a hierarchical or recursive fashion. At every level of the recursion, the program copies a tile from one memory to another (e.g., global to shared memory). The actual multiply-accumulate happens only at the leaf level (the last level), which in Figure 3 is thread-level. At the leaf level, we have a choice to either use the standard multiply-accumulate of the SIMT core or specialized Tensor Core instructions. An implementation optimized for Hopper architecture will choose to recurse only to warpgroup level (instead of thread level) and use the special WGMMA instructions that use the Tensor Cores more optimally [10, §9.7.14]. Pseudocode illustrating this is displayed below:

    // MT, NT, KT = dimensions at threadblock level
    // MW, NW, KW = dimensions at warp level

    // Loop1A: threadblock-level concurrency
    Loop1A: for each m, n in M, N with step MT, NT
      Loop1B: for each k in K with step KT
        Move a chunk of A from GMEM to SMEM (As)
        Move a chunk of B from GMEM to SMEM (Bs)
        // Loop2A: warp-level concurrency
        Loop2A: for each mm, nn in MT, NT with step MW, NW
          Loop2B: for each kk in KT with step KW
            Move a chunk of As from SMEM to RMEM (Ar)
            Move a chunk of Bs from SMEM to RMEM (Br)
            // run mma and accumulate in registers
            // further recursion is hidden by the mma call
            mma(Ar, Br, accum)

Note that the equivalent of the parallelization step handled by launching the CUDA kernel corresponds to the outermost loop in this pseudocode. Since parallelization occurs over chunks of the A and B matrices that are no longer single rows and columns, converting this pseudocode into working CUDA code is not straightforward. We will see that the CUTLASS API provides a method for doing this via local_tile.

3.5 Fusing Operations with Matrix Multiplication

For use in real-life workloads, a GEMM kernel will often involve additional operations to be fused with the above Matmul, such as a linear scaling step. The place where that is done (at the end of the outermost loop) is referred to as the epilogue. Typically, the accumulator will be in registers in the epilogue.
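To make the epilogue concrete, here is a hedged, stand-alone sketch (plain CUDA, independent of the paper's listings; all names are illustrative) of fusing the linear scaling D = alpha*(A·B) + beta*C, discussed next, into the write-out of per-thread register accumulators:

    // Per-thread epilogue: accum[] holds this thread's accumulators and idx[]
    // the global offsets they map to (both hypothetical names).
    __device__ void epilogue_linear_scaling(float alpha, float beta,
                                            const float *accum, const int *idx,
                                            int num_elems,
                                            const float *C, float *D) {
        for (int e = 0; e < num_elems; ++e) {
            int g = idx[e];
            // One fused pass: no separate kernel re-reading C and D from GMEM.
            D[g] = alpha * accum[e] + beta * C[g];
        }
    }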
To understand the basic reason for kernel fusion, consider linear scaling: even though this could be done separately from the initial Matmul, fusing it into the epilogue helps reduce the total bandwidth requirement, since otherwise the linear scaling would necessitate a re-read of the original C and the resulting D matrix from global memory. Although the idea is simple, as a software engineering task there are many issues to be aware of when writing a fused kernel. For one, there are multiple elementwise operations that need to be fused with Matmul (e.g., ReLU) in a typical AI-intensive workload. Secondly, the difficulty of fusing various operations can vary greatly with the operation; for example, softmax is harder to handle than linear scaling.

4 DEVELOPING A GEMM KERNEL USING CUTLASS AND CUTE

Developing and maintaining a performant Matmul kernel is difficult mainly because the shape of the tiles at each level must be tuned for different GPU architectures, different precision types of the underlying data, and different matrix sizes. When new levels in the CUDA thread hierarchy are introduced (e.g., the thread cluster level introduced with Hopper architecture), the kernel code must also be changed with additional loops and other code to handle that. The unroll parameters need to be tuned by the compiler too. Apart from all this, there are specialized instructions like WGMMA in Hopper that can boost the performance of the leaf Matmul step. Finally, fusing operations like linear scaling or softmax with Matmul introduces more complexity into the equation.

To handle all of this, one has the CUTLASS library. CUTLASS is a C++ templates-based library that provides a very high-level interface for defining the shape of the tiles at each level of the memory hierarchy. It also provides an interface called Epilogue that allows the programmer to fuse operations like linear scaling after the completion of a leaf Matmul. Moreover, CUTLASS defines default leaf-level kernels itself and most often selects highly tuned kernels for the GPU architecture based on the tiling parameters supplied at compile time, the specified architecture and the precision.

4.1 CUTLASS API Basics

A basic listing of a CUTLASS-based Matmul is given in Listing 1. Apart from CuTe, CUTLASS has the following three important APIs for GEMM, each corresponding to a distinct level of the GPU memory hierarchy [12]:

(1) Device API;
(2) Kernel API;
(3) Collective API.

The Collective API embodies a thread block or a cluster of thread blocks (from Hopper architecture onwards). Collective APIs can be used to construct a GEMM as well as the epilogue to be fused with the GEMM. The default epilogue simply writes out the accumulator of the GEMM from register memory to global memory. CUTLASS defines several other typical operations such as linear scaling and clamping; other device-side function call operators may also be used to perform custom operations.

The Kernel API embodies the entire grid. It thus schedules the collectives and is responsible for tiling the input matrices into row and column panels, loading the references to them, and invoking the GEMM and the epilogues. Fusion of epilogues with GEMM happens at the Kernel API level.

The Device API is the highest-level API. It is invoked from the host (i.e., CPU) and does not have any detail about the specifics of the Matmul implementation. This API is used by host-side .cu code to invoke CUTLASS's GEMM kernels, much like the cuBLAS API.
The CUTLASS API relies on C++ templates and meta-programming. Many compile-time values can be set to default (or auto) values, with the optimal values then chosen by the CUTLASS implementation. Most of the compile-time parameters described in the listing are self-explanatory. The constructed GEMM kernel can be executed with input and output arguments just like any C++ functor object.

    using ElementA = float;                               // Element type for A matrix operand
    using ElementB = float;                               // Element type for B matrix operand
    using ElementC = float;                               // Element type for C and D matrix operands
    using ArchTag = cutlass::arch::Sm90;                  // Tag indicating the SM
    using OperatorClass = cutlass::arch::OpClassTensorOp; // Operator class tag
    using TileShape = Shape<_128,_128,_32>;               // Threadblock-level tile size
    using ClusterShape = Shape<_1,_2,_1>;                 // Shape of the threadblocks in a cluster

    // Collective API
    using CollectiveMainloop = typename cutlass::gemm::collective::CollectiveBuilder<
        ArchTag, OperatorClass,
        ElementA, RowMajor, 4,
        ElementB, ColumnMajor, 4,
        ElementAccumulator,
        TileShape, ClusterShape,
        cutlass::gemm::collective::StageCountAuto,
        cutlass::gemm::collective::KernelScheduleAuto
      >::CollectiveOp;

    using CollectiveEpilogue = typename cutlass::epilogue::collective::CollectiveBuilder<
        cutlass::arch::Sm90, cutlass::arch::OpClassTensorOp,
        TileShape, ClusterShape,
        cutlass::epilogue::collective::EpilogueTileAuto,
        ElementC, ElementC,
        ElementC, ColumnMajor, 4,
        ElementC, ColumnMajor, 4,
        cutlass::epilogue::collective::EpilogueScheduleAuto
      >::CollectiveOp;

    // Kernel API
    using GemmKernel = cutlass::gemm::kernel::GemmUniversal<
        Shape<int,int,int>,   // Indicates ProblemShape
        CollectiveMainloop,
        CollectiveEpilogue
      >;

    // Device API
    using Gemm = cutlass::gemm::device::GemmUniversalAdapter<GemmKernel>;

Listing 1. Constructing a GEMM kernel for SM90 (Hopper) architecture.

4.2 Tiled Matrix Multiplication Using CuTe

Even though CUTLASS provides APIs for optimized GEMM and for fusing operations with GEMM, developing fused kernels like FMHA requires the lower-level APIs of CUTLASS, as the fusion is not straightforward. Such a custom kernel has been shipped as part of CUTLASS [9]. However, that kernel does not use the new Hopper features and is mostly customized for SM80 architecture. It is somewhat cumbersome to redevelop this for SM90.

CuTe is another API within CUTLASS that provides even more flexibility to develop GEMM kernels. It specifically introduces the concepts of Shapes and Layouts, using which programmers can define the different levels of tiling explicitly. Additionally, it provides APIs to:

(a) Convert matrices into tensors and partition them;
(b) Access the tiles of a tensor that belong to a thread block (local_tile);
(c) Make a local partition of a tensor that belongs to a thread within a thread block (local_partition);
(d) Copy between GMEM, SMEM and RMEM (copy);
(e) Multiply tensors with special Matmul instructions like WGMMA (gemm);
(f) Synchronize between thread clusters;
(g) Make special swizzle layouts for shared memory.
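Before turning to Listing 2, a tiny stand-alone illustration of the Shape/Layout vocabulary may help (a hedged sketch, not taken from the paper's code): a Layout pairs a Shape with a Stride and maps a logical coordinate to a linear memory offset, which is the mechanism CuTe uses to describe every level of tiling.

    #include <cute/tensor.hpp>

    int column_major_offset_example() {
        using namespace cute;
        // An 8x4 column-major layout: shape (8,4) with strides (1,8).
        auto layout = make_layout(make_shape (Int<8>{}, Int<4>{}),
                                  make_stride(Int<1>{}, Int<8>{}));
        // A layout maps a coordinate to an offset: layout(2,3) == 2*1 + 3*8 == 26.
        return layout(2, 3);
    }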
Listing 2 shows a basic CuTe-based Matmul CUDA kernel, obtained from the CUTLASS repository [14].⁸

⁸ The code has been adjusted in places for clarity. We also display the kernel only, not the main program that launches the kernel.

    template <class MShape, class NShape, class KShape,
              class TA, class AStride, class ABlockLayout, class AThreadLayout,
              class TB, class BStride, class BBlockLayout, class BThreadLayout,
              class TC, class CStride, class CBlockLayout, class CThreadLayout,
              class Alpha, class Beta>
    __global__ static void gemm_device(MShape M, NShape N, KShape K,
                                       TA const* A, AStride dA, ABlockLayout blockA, AThreadLayout tA,
                                       TB const* B, BStride dB, BBlockLayout blockB, BThreadLayout tB,
                                       TC      * C, CStride dC, CBlockLayout blockC, CThreadLayout tC,
                                       Alpha alpha, Beta beta)
    {
      using namespace cute;
      using X = Underscore;

      // Shared memory buffers.
      __shared__ TA smemA[cosize_v<ABlockLayout>];
      __shared__ TB smemB[cosize_v<BBlockLayout>];
      auto sA = make_tensor(make_smem_ptr(smemA), blockA);
      auto sB = make_tensor(make_smem_ptr(smemB), blockB);

      // Represent the full tensors.
      auto mA = make_tensor(make_gmem_ptr(A), make_shape(M,K), dA);
      auto mB = make_tensor(make_gmem_ptr(B), make_shape(N,K), dB);
      auto mC = make_tensor(make_gmem_ptr(C), make_shape(M,N), dC);

      // Get the appropriate blocks for this thread block.
      auto MT = size<0>(sA);
      auto NT = size<0>(sB);
      auto KT = size<1>(sB);
      auto gA = local_tile(mA, make_shape(MT, KT), make_coord(blockIdx.x, _));
      auto gB = local_tile(mB, make_shape(NT, KT), make_coord(blockIdx.y, _));
      auto gC = local_tile(mC, make_shape(MT, NT), make_coord(blockIdx.x, blockIdx.y));

      // Define partitioned views of GMEM and SMEM for COPY.
      auto tAgA = local_partition(gA, tA, threadIdx.x);
      auto tAsA = local_partition(sA, tA, threadIdx.x);
      auto tBgB = local_partition(gB, tB, threadIdx.x);
      auto tBsB = local_partition(sB, tB, threadIdx.x);

      // Define partitioned views of SMEM for GEMM.
      // Partition sA (M,K) by the rows of tC.
      auto tCsA = local_partition(sA, tC, threadIdx.x, Step<_1, X>{});
      // Partition sB (N,K) by the cols of tC.
      auto tCsB = local_partition(sB, tC, threadIdx.x, Step< X,_1>{});
      // Partition gC (M,N) by the tile of tC.
      auto tCgC = local_partition(gC, tC, threadIdx.x, Step<_1,_1>{});

      // Allocate the accumulators (RMEM).
      auto tCrC = make_fragment_like(tCgC);
      // Clear the accumulators.
      clear(tCrC);

      // Matmul Begin
      // Data is copied from GMEM to SMEM using the COPY views.
      // gemm(.) operates on the GEMM views.
      auto k_max = size<2>(tAgA);
      for (int k = 0; k < k_max; ++k) {
        // Copy GMEM to SMEM.
        copy(tAgA(_,_,k), tAsA);
        copy(tBgB(_,_,k), tBsB);
        cp_async_fence();
        cp_async_wait<0>();
        __syncthreads();

        // Compute GEMM on SMEM.
        // Accumulate to registers.
        gemm(tCsA, tCsB, tCrC);
        __syncthreads();
      }
      // Matmul End

      // Epilogue fusion goes here.
      for (int i = 0; i < size(tCgC); ++i) {
        tCgC(i) = tCrC(i);
      }
    }

Listing 2. Basic GEMM using CuTe.

The key part of the Matmul computation is contained in the loop. The kernel computes the matrix multiplication of A and B^T, resulting in C. The matrices A and B are tiled as shown in Figure 4. The tiling of C was already discussed earlier in the naive Matmul implementation. The key differences between the naive Matmul and this Matmul are:

(1) Matrix elements are brought from GMEM to SMEM first using an async copy operation;
(2) The result matrix C is stored in RMEM and finally written back to GMEM in the epilogue;
(3) The computation is tiled along the K dimension also.
This enables step (1), as the A and B tiles are small enough to fit in shared memory, in contrast to a full row or column panel.

The two key APIs of CuTe to be emphasized in Listing 2 are:

(1) local_tile: extracts the tiles local to a thread block into a tensor.
(2) local_partition: extracts the elements local to a thread in a thread block into a tensor.

Once local tiles are obtained using the CuTe API, the corresponding tiles of A and B are multiplied using the gemm API and the result is accumulated in the C matrix (in registers). After the last tile along the K dimension is processed, the result C is then written to GMEM.

An important feature in CuTe is the view of a tensor (in the sense of the C++ concept). During the copy operation, where the data is read from global to shared memory, a view based on AThreadLayout (tA) and BThreadLayout (tB) is used for the input tensors. Such a view is created to improve coalescing for global memory loads, for example. However, during the gemm operation, a view based on CThreadLayout (tC) is used. Such a thread-to-data mapping improves the performance of the matrix-multiply computation but may not result in coalesced stores to global memory. The same shared memory can be read and written using the different views. Thus, the thread layouts of the copy and gemm operations are decoupled so that the best choice for each operation can be made by the user.

Fig. 4. Graphical representation of the tiled CuTe GEMM kernel with the SIMT core.

The optimized implementation in §4.3 will instead use TMA for loading tiles of A and B from GMEM to SMEM, and WGMMA for computing the tiles of C. Note that we have used no-transpose for A and transpose for B (commonly referred to as NT layout), as that is easy to implement using SIMT mul-add. In contrast, the versions shown in Figure 2 all use TN layout. Nevertheless, this version will serve as a basis for the version of Matmul that we present next.

4.3 Incorporating TMA and WGMMA Instructions from NVIDIA Hopper Architecture

Listing 2 in the preceding section uses a plain SIMT mul-add operation as the atom to compute the product of two tiles. On the other hand, modern NVIDIA GPUs such as the H100 provide Tensor Cores that accelerate mixed-precision computation several-fold. Additionally, Hopper architecture also introduces the Tensor Memory Accelerator (TMA), which can transfer large blocks of data efficiently between global memory and shared memory. To utilize TMA and Tensor Cores, two important changes are required:

• The copy API call should be changed to include the TMA copy atom;
• The gemm API call should be changed to include the MMA atom – for Hopper, we choose WGMMA.

The WGMMA atom schedules data of size 64×8 (other sizes are possible too), split across 128 threads, atomically as one operation on a Tensor Core. The mapping of rows and columns of the resultant MT×NT tile of C to threads is more complex than those listed in the preceding sections. CuTe provides utilities that define the mapping, so users need not understand the details of the complex thread layout. At the same time, while implementing complex fused kernels in the epilogue, programmers can use CuTe APIs to unravel the thread layout so as to obtain the correct row and column of individual elements of the tile. Typically, while using WGMMA, the loads of the A and B tiles are executed using TMA. TMA loads offer better performance than cp.async.
WGMMA operations require that the shared memory allocated for the tiles of the A and B matrices be in a certain "swizzled" format [10], which is supported by TMA. TMA loads are asynchronous and are invoked from one thread (typically thread 0), while the other threads wait on a cuda::barrier for the operation to be completed. Thus, TMA loads require producer-consumer style synchronization between the threads of a warp. Listing 3 shows the relevant part of the new GEMM kernel that uses TMA and WGMMA. The copy and gemm views are not shown in the new listing for brevity.⁹

⁹ CuTe provides APIs to get the correct views for the TMA copy and WGMMA gemm operations.

    ....
    for (int k = 0; k < size<1>(tAgA); ++k) {
      .....
      // Copy A and B from GMEM to SMEM using the COPY views.
      if (threadIdx.x == 0) {
        /// Initialize shared memory barrier
        ....
        copy(tma_copy_a, tAgA(_,k), tAsA);
        copy(tma_copy_b, tBgB(_,k), tBsB);
      }
      __syncthreads();
      /// Wait on the shared memory barrier.
      ....
      __syncthreads();

      warpgroup_fence_operand(tCrC);
      cute::gemm(wmma_atom, tCrA, tCrB, tCrC);
      warpgroup_commit_batch();
      warpgroup_wait<1>();
      __syncthreads();
    }
    .....

Listing 3. GEMM using TMA+WGMMA.

As shown in Figure 5, the TMA+WGMMA version provides almost a sevenfold improvement compared to the basic CuTe version, since it uses the Tensor Cores. However, there is clear room for improvement in terms of Tensor Core utilization.

Fig. 5. cuBLAS vs CuTe vs CuTe-Basic (TF32 and FP16 precision), with performance (TFLOPS) and Tensor Core utilization (%).

5 ADDITIONAL OPTIMIZATIONS

Recall from §2 that the best CUTLASS kernel for SGEMM delivers around 280 TFLOPS, while cuBLAS delivers around 215 TFLOPS. CUTLASS implements many more optimizations to achieve this superior level of performance. To list a few from the documentation [11], we have:

(1) Software Pipelining – Software pipelining is a technique to hide memory latency in which memory accesses and math instructions are executed concurrently, while always accounting for the dependencies between these steps. The CUTLASS implementation uses multiple buffers at both the thread block and warp level.

(2) Warp Specialization – With optimizations like software pipelining, different threads or groups of threads naturally have distinct roles. Some are producers that load data, while others are consumers that run the MMA instructions. The idea of warp specialization is to spatially partition the warps in a thread block into two groupings of producers and consumers.

(3) Persistent Kernels – Persistent kernels is a CUDA design pattern that aims to avoid kernel launch and configuration overhead by keeping the kernel persistent on the GPU across multiple calls. In CUTLASS, this involves having persistent thread blocks compute multiple output tiles over their lifetime.

(4) Two co-operative consumer warp groups – WGMMA allows the operand A tile to be in register memory instead of shared memory. However, that restricts the tile size of A due to limited register space. Splitting the tile along the M dimension into two and assigning the halves to two different consumer warp groups allows for larger tile sizes and eases register pressure.

(5) Warp-Specialized Persistent Ping-Pong kernel – The two consumer warp groups from (4) are each assigned to a different output tile.
This allows the epilogue of one consumer warp group to be overlapped with the math operations of the other consumer warp group, thus maximizing Tensor Core utilization. There is also synchronization on the side of the producer warp groups.

From our empirical studies, point (5) in particular is largely responsible for the gap between the fourth column in Figure 2 and the indicated 280 TFLOPS number for the best measured CUTLASS kernel.

6 BATCHED-GEMM

The AI workflow that we are targeting does not involve multiplying large square matrices. Instead, it involves large square matrices decomposed as products of matrices with small K (e.g., 64 or 128), and with batch count L > 1 (e.g., 64 or 96); cf. [1, §2.2]. Such a scheme is popularly known as Batched-GEMM. Our CuTe program can be extended to handle Batched-GEMM by simply setting the third dimension of the grid to be L. We then use blockIdx.z in the local_tile operation inside the CUDA kernel, as shown in Listing 4.

    ....
    auto gA = local_tile(mA, make_shape(MT, KT), make_coord(blockIdx.x, _, blockIdx.z));
    auto gB = local_tile(mB, make_shape(NT, KT), make_coord(blockIdx.y, _, blockIdx.z));
    auto gC = local_tile(mC, make_shape(MT, NT), make_coord(blockIdx.x, blockIdx.y, blockIdx.z));
    ....

Listing 4. Batched-GEMM kernel using CuTe.

Performance of such a Batched-GEMM using CuTe is shown in Figure 6. Surprisingly, the CuTe program outperforms both cuBLAS and CUTLASS, even though it does not use any of the additional optimizations that CUTLASS uses, as listed in §5. At the same time, all of the programs deliver sub-optimal performance for Batched-GEMM. From Figure 1, we can infer that, for small K (64 or 128), the performance of GEMM is sub-optimal in general. In particular, it appears that the optimizations listed in §5 need to be adapted to the case of smaller matrices and Batched-GEMM, or else different methods should be brought to bear.

Fig. 6. Batched-SGEMM with M=N=4096, K=64 and L=96: cuBLAS 47.9 TFLOPS, CuTe 53.9 TFLOPS, CUTLASS 51.1 TFLOPS.

7 CONCLUSION AND FUTURE WORK

To summarize, we discussed how to develop a CuTe-based GEMM kernel for NVIDIA Hopper architecture that uses the Tensor Memory Accelerator (TMA) and Warp Group MMA (WGMMA) operations. Our CuTe program achieved close to 80% of the performance of the standard cuBLAS GEMM kernel with this one single optimization. For Batched-GEMM, our CuTe program outperformed both cuBLAS and CUTLASS.

Currently, we are working to integrate our GEMM kernel with Flash Multi-Head Attention (FMHA) [1], a popular attention layer used in Large Language Models (LLMs). Some important challenges to be addressed are:

• FMHA necessitates changes to the basic GEMM kernel described in this paper. The most significant change is that all panels of the B matrix are assigned to the same thread block, nullifying any parallelism along that dimension in the grid. On the other hand, LLMs use batched GEMM, introducing parallelism along the third dimension L of the grid.
• The online-softmax [13] computation has to be fused with the result of the GEMM (tCrC) in registers, which involves atomic operations across threads for computing the maximum and sum of each row.
• FMHA uses a smaller K dimension (64 or 128) in comparison to M and N. From Figure 1, we know that GEMM performance for small K values tends to be very sub-optimal. We plan to adapt the optimizations listed in §5 to the setting of matrices with small K and Batched-GEMM.
REFERENCES

[1] Tri Dao. FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. July 17, 2023. https://arxiv.org/abs/2307.08691.
[2] Roman Dubtsov, Evarist Fomenko and Babak Hejazi. New cuBLAS 12.0 Features and Matrix Multiplication Performance on NVIDIA Hopper GPUs. February 1, 2023. https://developer.nvidia.com/blog/new-cublas-12-0-features-and-matrix-multiplication-performance-on-nvidia-hopper-gpus/
[3] Andrew Kerr, Duane Merrill, Julien Demouth and John Tran. CUTLASS: Fast Linear Algebra in CUDA C++. December 5, 2017. https://developer.nvidia.com/blog/cutlass-linear-algebra-cuda/.
[4] CUTLASS 3.2 – Performance. https://github.com/NVIDIA/cutlass#performance.
[5] CuTe dense matrix-matrix multiply tutorial. https://github.com/NVIDIA/cutlass/blob/main/media/docs/cute/0x_gemm_tutorial.md.
[6] NVIDIA H100 Tensor Core GPU Datasheet. https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet.
[7] Michael Andersch, Greg Palmer, Ronny Krashinsky, Nick Stam, Vishal Mehta, Gonzalo Brito and Sridhar Ramaswamy. NVIDIA Hopper Architecture In-Depth. March 22, 2022. https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/
[8] Pradeep Gupta. CUDA Refresher: The CUDA Programming Model. https://developer.nvidia.com/blog/cuda-refresher-cuda-programming-model/
[9] Benjamin Lefaudeux, Francisco Massa, Diana Liskovich, Wenhan Xiong, Vittorio Caggiano, Sean Naren, Min Xu, Jieru Hu, Marta Tintore, Susan Zhang, Patrick Labatut, Daniel Haziza. xFormers: A modular and hackable Transformer modelling library. 2022. https://github.com/facebookresearch/xformers.
[10] Parallel Thread Execution ISA Version 8.2. https://docs.nvidia.com/cuda/parallel-thread-execution/index.html.
[11] Efficient GEMM in CUDA. https://github.com/NVIDIA/cutlass/blob/main/media/docs/efficient_gemm.md.
[12] CUTLASS 3.0 GEMM API. https://github.com/NVIDIA/cutlass/blob/main/media/docs/gemm_api_3x.md
[13] Maxim Milakov and Natalia Gimelshein. Online normalizer calculation for softmax. 2018. https://doi.org/10.48550/arXiv.1805.02867.
[14] https://github.com/NVIDIA/cutlass/blob/main/examples/cute/tutorial/sgemm_nt_1.cu.
An almost pointless exercise in GPU optimization

Andrew Innes

Unlike various of my colleagues, I'm not an ML expert who is comfortable writing funky fused operators to make big ML models run faster on GPUs in conjunction with clever quantisation tricks. Even so, I want to better understand the potential for us to make more effective use of GPUs in our products, and so I decided to revisit the first problem I ever considered trying to accelerate with a GPU. My attempt then was foiled by looking only at shader languages, which are too primitive to be effective for my problem, but this time I was aware of CUDA programming in C++, which is much more general purpose and accessible.

What is unusual about my chosen problem is that it is officially pointless. You ought not to be able to find any library that will accelerate this algorithm, because it isn't worth writing one! But as you will see, it feels like it should be possible to make it run significantly faster by making use of the many cores in a GPU compared with the relatively few cores in a CPU. That's what I thought then, and wanted to test now as a learning exercise. That also makes it an interesting proxy for lots of specific algorithms you might have in your own programs, which aren't catered for by high-performance libraries written by experts, but feel like they should be possible to accelerate because GPUs will run thousands of threads in parallel. If that is your situation, this article might help you think about what to expect and what might be involved.

TL;DR

Getting an existing C++ algorithm running on GPU is pretty easy, so it is a low bar to get started. What I learned is the importance of minimizing thread divergence and maximizing effective memory access speed. To do that effectively, I had to transform my algorithm into a state machine structure so that every thread is operating mostly in lock-step, just with different data values.

My starting, interim and final code are open to see, along with a summary of the steps I took, and the corresponding improvements or regressions at each stage. I want to focus in this article on the thought process for deciding each step, mostly by explaining the Nvidia Nsight Compute analysis which helped guide me.

In the end I managed to make my program run about 30x faster on my laptop using its GeForce GTX 1650 GPU, compared with its Core i7–9750H CPU. Only in the last two steps did it get meaningfully better than with CPU though, so be prepared for early and frequent disappointment. If you want just the summary of what worked, jump to Progression History.

A Pointless Program

Years ago, a colleague invited me to take on his Christmas programming challenge, which was to write the fastest program he could to continuously play the card game Beggar My Neighbour. The aim, noted by John Conway as definitely not worth solving, is to try to find the longest game — with the possibility that there might be a game which never ends. The longest game found so far has 7972 turns — a rainy afternoon diversion perhaps, if you can sustain playing a card every 2.7095 seconds for six hours straight! You can see a history of new records, with a Python program that verifies them, here: https://github.com/matthewmayer/beggarmypython

The game play algorithm is almost trivial, but it has some notable features which turn out to be relevant to the challenge of effectively leveraging a GPU.
CPU Starting Point

My initial C++ program to search for long games is here. It is just a port of the Python program, with a search loop that runs continuously, shuffling the deck randomly between games. I implemented two simple optimizations that seemed obvious: the program can use multiple CPU cores by just running separate copies of the search loop in different threads, starting with different RNG seeds. Each search loop does its own random walk through potential deals. On my laptop, throughput peaks at 2.9 million deals per second, using 12 threads to match the number of logical CPU cores.

Initial GPU Port

The beauty of General Purpose GPU programming is that you can often get started quickly with the code you already have. There is a good chance that only a few adaptations are needed to convert a program that uses multiple threads on CPU to one that runs with even more threads on GPU, as long as you don't have extreme memory requirements (for code or data or both) which force breaking up the work into smaller pieces. GPU cores have a similar range of machine instructions to those of CPUs, so plain algorithms will compile readily enough to give reasonable single-core efficiency; you don't need to transform your program to use a subset of variable types or data structures or code constructs, or special parallelization primitives or libraries. You mainly just need to cope with some changes to library functions (such as the random number generator in my case), and finessing of class structures to graft in the global functions needed to launch work on the GPU.

You do need to be ready for disappointment though. If your code is similar to mine, founded on nested branching logic, you may find that the GPU won't even be able to match the CPU performance at first. This is to be expected: CPUs are designed for running unrelated complex branching logic in each thread, dedicating significant chip area to circuitry for predicting branches, speculatively executing multiple paths, and doing lots of caching — 1 MB per logical CPU core in my laptop, for instance. CPU cores also run fast, with 3–4GHz max speed being fairly typical. By comparison, GPUs are much lighter on hardware per core for caching, branch prediction etc., and top out at perhaps 1.5–2GHz (unless you have an extreme cooling gaming GPU). They come with faster memory though, to help compensate. But the net effect is that an algorithm will probably run 2–3 times faster on a single CPU core than it will on a single GPU core, from a combination of the clock speed ratio and more aggressive single-thread acceleration. You need to figure out how to make good use of the thousands of slow GPU cores in order to outperform a few fast CPU cores.

My initial port to GPU ran at about 1.4M deals per second (once thermal throttling kicks in), using 2048 threads (128 blocks of 16 threads each). Not an encouraging start, but in hindsight about what I should have expected.

Learn to use Nsight Compute

Early on, I decided to get comfortable using the Nsight Compute tool to analyse the GPU portion of my code in spectacular detail, rather than trying to rely on intuition and the high-level utilization figures from nvidia-smi. The approach I settled on was to step through the execution of the first couple of GPU kernel launches (the kernel here being my global function which runs the core search and game play loop for a predefined number of iterations), and then profile the next kernel run to see how much of the hardware capability was being used effectively.
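As a rough illustration of that kernel structure (a hedged sketch, not the author's actual code; shuffle_deck and play_one_game are hypothetical helpers), each GPU thread can be given its own cuRAND state, seeded by its global thread index, and then run an independent search loop for a fixed number of deals:

    #include <curand_kernel.h>

    __global__ void search_kernel(unsigned long long seed, int deals_per_thread,
                                  int *best_turns) {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        curandState rng;
        curand_init(seed, tid, 0, &rng);    // independent random stream per thread

        for (int d = 0; d < deals_per_thread; ++d) {
            unsigned int r = curand(&rng);  // e.g. to drive a Fisher-Yates shuffle
            (void)r;
            // shuffle_deck(...);           // hypothetical: shuffle this thread's deck
            // int turns = play_one_game(); // hypothetical: play the deal to completion
            // record in best_turns if this game set a new record
        }
    }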
The first, unflattering, report that the tool shows is the "Speed of Light" summary: i.e., the percentage of theoretical peak performance on compute and memory bandwidth utilization. My first program scored around 12% for compute, and 28% for memory. (For the same program, nvidia-smi would report 88% and 28% utilization respectively, underscoring how misleading its information can be when you are trying to optimize your algorithm in the early stages.)

There were many detailed metrics underneath the headline figures which could be examined, and various analysis warnings that point out potential problem areas. Many of these sounded esoteric, but there were two reasonably clear actionable warnings.

The first one was easy to address: assign at least 32 threads per block (not 16), so we don't leave warp capacity unused for not even trying. I ended up settling on 32 blocks of 32 threads, which increased throughput to 2.3M deals per second. nvidia-smi now reported 95% and 13% for compute and memory utilization. Nsight Compute reports we have improved average active threads per warp from 2.4 to 3.6. That left Thread Divergence as the key warning to address.

Thread Divergence

If you have learned about CUDA programming before, you may recall that thread divergence occurs when not all threads in the same warp are executing the same instruction. (Each group of 32 consecutive threads in a block will always execute together as a unit, known as a warp, and GPU hardware is heavily optimized around running these warps of threads very efficiently, on the assumption that the threads are normally executing instructions in unison, working on closely related pieces of the same computation.)

It is part of the beauty of General Purpose GPU programming that thread divergence is allowed, and is handled automatically by the hardware as well as it can be, by simply letting the threads in the warp take turns to run their next instruction when they differ. Subsets of the warp that do share the next instruction will execute together, so the degradation in performance is proportional to how much the threads have diverged from each other. In the worst case, where every thread is at its own unique point in the code, the threads are effectively time-sliced on the hardware, and so running at 1/32 of the hardware's potential. From the Nsight Compute warning, I could see we were quite close to that point:

[Warning] Instructions are executed in warps, which are groups of 32 threads. Optimal instruction throughput is achieved if all 32 threads of a warp execute the same instruction. The chosen launch configuration, early thread completion, and divergent flow control can significantly lower the number of active threads in a warp per cycle. This kernel achieves an average of 3.6 threads being active per cycle. This is further reduced to 3.1 threads per warp due to predication. The compiler may use predication to avoid an actual branch. Instead, all instructions are scheduled, but a per-thread condition code or predicate controls which threads execute the instructions. Try to avoid different execution paths within a warp when possible. In addition, ensure your kernel makes use of Independent Thread Scheduling, which allows a warp to reconverge after a data-dependent conditional block by explicitly calling __syncwarp().

Getting about 3 active threads per warp means I'm only tapping into about 1/10th of the GPU's compute capacity, at best.
Not exactly what you would assume from the 95% compute utilization figure reported by nvidia-smi — that figure really just means I’m doing something on all of the GPU’s Streaming Multiprocessor units about 95% of the time — even if that something is very inefficient at using each SM, as in this case. I highlighted the most relevant part of the remediation advice above, which is essentially to remove the nested branching logic as much as I can. To do that in my program, where each thread is working on a different game, I realised I would have to rewrite the core game play function. The original version uses position within the code’s nested branching structure to encode part of the game state — I needed to replace that with an explicit data representation of all pieces of game state. Then every step of the inner loop would be executed in unison across all threads playing their own games, just with different data values. Rewrite to use a lookup table To make the inner loop purely data driven, I chose to introduce a state transition table. Game state is now tracked explicitly in variables which are treated as inputs to the transition lookup table, followed by a set of actions that is always performed on every iteration using data values from the lookup table. A critical realization for implementing this cleanly in my case was to notice that the discard pile can be treated as a third player in the game, with some extra states to track when the discard pile is “playing” its cards to the player who won the last trick. With that mental twist in place, a fairly straightforward lookup table became possible to write by hand. The code for this version is here; it works on both CPU and GPU, but it is slower on both — about half the speed of the baseline version on CPU, and two-thirds on GPU. ☹ Being slower on CPU was expected, since it requires more instructions to manipulate more state variables, and there is more memory access to lookup state transitions. We also have forced the adding of the discard pile to the winning player’s hand on every trick to be done slowly, as individual turns with all the overhead needed for that. However, on GPU I had hoped to see gains because now more threads in each warp should be able to execute in unison. In fact Speed-of-Light compute and memory utilization figures did improve, to 17% and 38% respectively, but Thread Divergence was still listed as a remediation warning. We are at least up from an average of 3.6 active threads per warp to 5.3, but that is small comfort given the actual speed is now about back to where I first started on GPU, which is well behind the CPU performance. Ah, be careful about function exits What I had forgotten is that thread divergence will also occur when threads are exiting from a nested function inside the kernel. My game play loop would always play one game (inside the function play), then return to the search loop to swap two cards before calling play again. It basically didn’t matter that inside the play inner loop thread divergence has been largely solved, because each thread in a warp will still finish at different times, depending on how many turns are needed for their respective games. So they usually diverge at game end, and the whole warp of threads must wait for the longest game amongst them to finish. Stats on the range of game lengths shows a minimum of about 33 turns, an average of about 250 turns, and a max somewhere in the many thousands of turns at least. That’s a lot of variation. 
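Before moving on, here is a heavily simplified illustration of the lookup-table rewrite described above. The names, field widths, and state encoding are my own inventions, not the author's; the point is that each turn becomes one table lookup plus a fixed set of data updates, so every thread executes the same instruction sequence:

#include <cstdint>

enum { NUM_STATES = 8, NUM_CARD_VALUES = 5 };

struct Transition {
    uint8_t next_state;     // game state after this card is played
    uint8_t penalty_cards;  // cards the opponent must now pay
    uint8_t pile_to_winner; // 1 if the discard pile is handed to the trick winner
};

// Filled in on the host (e.g. via cudaMemcpyToSymbol) before the kernel runs.
__constant__ Transition d_table[NUM_STATES][NUM_CARD_VALUES];

struct GameState {
    uint8_t state;              // explicit replacement for "position in the code"
    uint8_t penalty_remaining;  // counts down the cards still owed
    // ... hands and the discard pile would live here too ...
};

__device__ void play_one_turn(GameState &g, uint8_t card) {
    Transition t = d_table[g.state][card];  // pure data lookup, no nested branching
    g.state             = t.next_state;
    g.penalty_remaining = t.penalty_cards;
    // Handing the pile to the winner is just another table-driven step, with the
    // discard pile treated as a "third player" as described in the text.
    (void)t.pile_to_winner;
}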
With that realization, my next program refactoring was to include the logic for game completion book-keeping and switching to a new game as another (necessarily conditional) step in the inner game playing loop. This slows down the inner loop even further, but allows the threads to stay mostly converged across multiple games. To support this, I pre-create a backlog of games to play, ready for the next game to be picked very quickly from inside the inner loop by whichever thread is available first. (This introduces the only inter-thread synchronization mechanism needed so far, which is the use of a CUDA atomic operation to read and increment the next_deal_to_play index, which barely affects speed but solves the race condition.) In order to allow the threads to run as long as possible in this synchronized inner loop, I decided to use a big chunk of main GPU memory to hold a large backlog, which so far has barely been used (just for the search loop’s best-game-so-far records). We now get to a near-final version of the program, which can be recreated from the latest version using some conditional compilation definitions. This version fills a large (eg. 1M entry) backlog of deals to play in GPU memory (using another kernel of cooperating GPU threads working in parallel), which is then processed in parallel by 1024 threads (32 blocks of 32 threads). Performance did indeed improve over the initial version with the lookup table, but only to about the same speed as the baseline version achieved once I had tweaked the number of blocks and threads. What gives?! Ah, memory speed really matters Nsight Compute reveals Speed-of-Light is about the same as before (12% compute and 37% memory utilization), and again lists a number of warnings. This time, some other warnings now seem more relevant and actionable (as I’ve highlighted below): “[Warning] All pipelines are under-utilized. Either this kernel is very small or it doesn’t issue enough warps per scheduler. Check the Launch Statistics and Scheduler Statistics sections for further details.” “[Warning] Every scheduler is capable of issuing one instruction per cycle, but for this kernel each scheduler only issues an instruction every 6.0 cycles. This might leave hardware resources underutilized and may lead to less optimal performance. Out of the maximum of 8 warps per scheduler, this kernel allocates an average of 1.00 active warps per scheduler, but only an average of 0.17 warps were eligible per cycle. Eligible warps are the subset of active warps that are ready to issue their next instruction. Every cycle with no eligible warp results in no instruction being issued and the issue slot remains unused. To increase the number of eligible warps either increase the number of active warps or reduce the time the active warps are stalled.” “[Warning] On average each warp of this kernel spends 2.4 cycles being stalled waiting for a scoreboard dependency on a L1TEX (local, global, surface, texture) operation. This represents about 39.7% of the total average of 6.0 cycles between issuing two instructions. To reduce the number of cycles waiting on L1TEX data accesses verify the memory access patterns are optimal for the target architecture, attempt to increase cache hit rates by increasing data locality or by changing the cache configuration, and consider moving frequently used data to shared memory.” Thread Divergence is still listed as a warning, though the average active threads per warp is now at about 9. 
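As an aside, the atomic hand-off of deals from the backlog needs very little code. This is a minimal sketch under my own naming assumptions (the Deal type and backlog layout are invented), not the author's implementation:

struct Deal { unsigned char cards[52]; };   // placeholder deal representation

__device__ unsigned int next_deal_to_play = 0;

// Returns false once the backlog is exhausted; otherwise copies the claimed deal.
__device__ bool claim_next_deal(const Deal *backlog, unsigned int backlog_size,
                                Deal *my_deal) {
    unsigned int idx = atomicAdd(&next_deal_to_play, 1u);  // race-free claim
    if (idx >= backlog_size) return false;
    *my_deal = backlog[idx];
    return true;
}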
Looking at the “source” view of the kernel profile, which shows per-instruction info such as the execution counts, average active threads, distribution of instruction stall reasons etc, it looks like there are lots of times when the inner loop instructions are stalled waiting for access to results from memory. So it looks like there are two clear problems we can address: · Ensure there are more (warps of) threads available to schedule, so they can hide the micro-latencies that are naturally experienced by any one warp. · Reduce the times when the inner loop is stalled accessing GPU memory. The first issue can in theory be addressed by playing with the blocks and threads configuration. However, changing that doesn’t really make a difference: we are primarily bottlenecked on accessing data from GPU memory. Use Shared Memory as much as possible The best improvement we can make now is ironically to stop using main GPU memory as much as we can. In my case, that means replacing the single large backlog of deals with a very small backlog per block, held in Shared Memory. As you may recall from CUDA Programming primers, Shared Memory is the fastest memory area (other than registers allocated by the compiler) available to programmers, but the most limited in size. On my laptop’s GPU, it is limited to 48KB — the same amount of memory as my first home computer (from the Apple II era). This memory is per-block and only available while each specific block is executing — its contents will be lost when that block ends, so relevant info needs to be copied to main memory as final output. Thankfully for my program, using this only involves a modest code change, and basically means the inner game loop runs for fewer iterations before the local backlog is exhausted and has to be replenished. Also each small backlog is shared by fewer threads, so there is more opportunity for some threads to finish early and increase thread divergence. With this change (which included changing blocks and threads to 16 and 128, in order to try and provide more warps to the scheduler as noted above), performance on GPU finally moved ahead of CPU for the first time, and by a fairly impressive margin, reaching about 40M deals per second (vs. 3M on CPU)! Nsight Compute is still saying we are memory bottlenecked though. Final chance to squeeze further! Make Shared Memory stretch as far as possible My final change is to recognise that the core data structures are using more memory than they need to, since I’m using an enum to represent 5 possible card values, and enum by default is represented as an int in C++, which is treated as a 32-bit word in CUDA. Similarly, the lookup table data values are all defined as int, but far fewer bits would suffice. I had initially wondered whether native word sized values were more efficient at the individual instruction level, but actually GPUs (like most CPUs) efficiently support sub-word data sizes and even arbitrary bitfield operations at the instruction level, so that isn’t an important concern. With a change to specify uint8 as my enum base type, add appropriate bitfield declarations in the lookup table struct, and use a more compact representation of deals in the backlog (not the 64 entry circular buffer representation used for playing fast), I am able to squeeze a longer backlog into the 48KB of shared memory, and also reduce the memory bandwidth needed for lookup operations and other game play steps. 
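The data-packing changes can be sketched roughly as follows; the field names and widths are invented for illustration, but the ideas (an 8-bit enum base type, bitfields in the table entries, a compact backlog held in shared memory) mirror what is described above:

#include <cstdint>

enum Card : uint8_t { LOW = 0, JACK, QUEEN, KING, ACE };  // 1 byte instead of a 32-bit int

struct TableEntry {             // packed lookup-table record
    uint8_t next_state : 4;
    uint8_t penalty    : 3;
    uint8_t pile_moves : 1;
};

struct PackedDeal { uint8_t cards[52]; };  // compact backlog entry, not the 64-slot circular buffer

#define LOCAL_BACKLOG 64

__global__ void play_games(const PackedDeal *global_backlog, int n_deals) {
    __shared__ PackedDeal backlog[LOCAL_BACKLOG];  // ~3.3 KB of the 48 KB per-block budget
    __shared__ unsigned int next_local;            // index of the next unplayed local deal
    // ... cooperatively refill `backlog` from global memory, play the games,
    //     and repeat until `n_deals` have been processed ...
}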
The effect is rather gratifying: my final version now hits over 100M deals per second, at least until thermal throttling brings it back down to 95M or so. 😊 Nsight Compute is still saying I’m memory bound, but at this stage I’m thinking that may be just how it is. The algorithm is so light on computation that it will typically be waiting on memory (even the super-fast Shared Memory) no matter what. The next step, if it were possible to see how to restructure the game playing loop in a suitable way, would be to try and ensure we created coalesced memory access patterns (where threads in a warp all reference directly adjacent memory locations that allow the hardware to make them one read/write operation). But that seems very unlikely to be possible with each thread still working on its own deal. Maybe there is still some potential for improvement from tweaking block and thread counts, and paying attention to memory layout and other micro-details. I’m not holding my breath!

· Final Speed-of-Light score: 18% compute, 70% memory utilization.
· nvidia-smi reports 90% and 1% for compute and memory utilization respectively.
· Thread divergence is reduced, with average active threads per warp of 20 (vs. ideal of 32).

Progression History

From initial CPU version to final decent GPU version, here was the history of changes and results. All performance numbers are in millions of deals per second.

Written by Andrew Innes, Chief Architect at Speechmatics
Introduction to CUDA Optimization with Practical Examples

FreaxRuby

Creating a CUDA Project (first_cuda.cu)

To start, you need to include the necessary libraries. For using the runtime API, include cuda_runtime.h.

#include <stdio.h>
#include <cuda_runtime.h>

Main Function: First, write a main function to check if CUDA is available.

int main() {
    if (!InitCUDA()) {
        return 0;
    }
    printf("CUDA initialized.\n");
    return 0;
}

Checking CUDA Availability:

bool InitCUDA() {
    int count;
    cudaGetDeviceCount(&count); // Get the number of available devices
    if (count == 0) {
        fprintf(stderr, "There is no device.\n");
        return false;
    }
    int i;
    for (i = 0; i < count; i++) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess) {
            if (prop.major >= 1) {
                break;
            }
        }
    }
    if (i == count) {
        fprintf(stderr, "There is no device supporting CUDA 1.x.\n");
        return false;
    }
    cudaSetDevice(i);
    return true;
}

After completing these steps, you can compile the file using nvcc, the CUDA compiler tool that separates the GPU and host parts of the code. GPU parts are compiled into intermediate code using NVIDIA's compiler, while host parts are compiled with the system's C++ compiler.

Simple Addition with CUDA (cuda_sum1.cu)

This section builds upon the initial CUDA project to calculate the sum of squares of a set of numbers.

Initial Setup: Modify the beginning of your code to include necessary headers and define the size of the data.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

#define DATA_SIZE 1048576

int data[DATA_SIZE];

Generating Random Numbers: Implement a function to generate random numbers.

void GenerateNumbers(int *number, int size) {
    for (int i = 0; i < size; i++) {
        number[i] = rand() % 10;
    }
}

Preparing Data for CUDA: Before performing computations with CUDA, you need to copy the data from the host memory to the device memory.

GenerateNumbers(data, DATA_SIZE);
int* gpudata, *result;
cudaMalloc((void**) &gpudata, sizeof(int) * DATA_SIZE);
cudaMalloc((void**) &result, sizeof(int));
// Copy from host to device memory
cudaMemcpy(gpudata, data, sizeof(int) * DATA_SIZE, cudaMemcpyHostToDevice);

Writing a Kernel Function: In CUDA, functions executed on the device are annotated with __global__. Let's write a kernel function to compute the sum of squares.

__global__ static void sumOfSquares(int *num, int* result) {
    int sum = 0;
    int i;
    for (i = 0; i < DATA_SIZE; i++) {
        sum += num[i] * num[i];
    }
    *result = sum;
}

Device functions have restrictions, such as not being able to return values directly.

Executing the Kernel: Execute the function on CUDA using the following syntax:

sumOfSquares<<<1, 1, 0>>>(gpudata, result);

This configuration uses a single thread, so both the block and thread counts are set to 1, and no shared memory is used. After execution, copy the result back to host memory, free device memory, and print the result:

int sum;
cudaMemcpy(&sum, result, sizeof(int), cudaMemcpyDeviceToHost);
cudaFree(gpudata);
cudaFree(result);
printf("sum: %d\n", sum);

To validate the correctness, you might compare the GPU result with a CPU computation.

Optimizing CUDA Addition (Parallelization — Optimization 1.0)

The initial implementation of the sum of squares did not utilize parallelization, with the entire program running on a single thread. This is not ideal for GPU architecture, which is designed for parallel execution. To improve performance, let's parallelize the computation.
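(As a small aside before parallelizing — this check is my addition, not part of the original tutorial — the single-thread GPU result can be validated against a plain CPU loop inside main:)

// Host-side verification: recompute the sum of squares on the CPU and compare.
int cpu_sum = 0;
for (int i = 0; i < DATA_SIZE; i++) {
    cpu_sum += data[i] * data[i];
}
printf("GPU sum: %d  CPU sum: %d  %s\n",
       sum, cpu_sum, (sum == cpu_sum) ? "match" : "MISMATCH");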
Parallelization Approach: Split the computation into multiple parts, with each part calculating a segment of the sum of squares. Initially, sum these partial results on the CPU. Modifications: Define constants for data size and the number of threads. #define DATA_SIZE 1048576#define THREAD_NUM 256 Optimized Kernel Function: Modify the sumOfSquares function to support parallel execution. __global__ static void sumOfSquares(int *num, int* result, clock_t* time) { const int tid = threadIdx.x; const int size = DATA_SIZE / THREAD_NUM; int sum = 0; int i; clock_t start; if (tid == 0) start = clock(); for (i = tid * size; i < (tid + 1) * size; i++) { sum += num[i] * num[i]; }​ result[tid] = sum; if (tid == 0) *time = clock() - start;} This version divides the data among multiple threads, each calculating a portion of the sum. The start time is recorded for performance measurement. Main Function Adjustments: Allocate memory for results and timing, then execute the optimized kernel. int* gpudata, *result;clock_t* time;cudaMalloc((void**) &gpudata, sizeof(int) * DATA_SIZE);cudaMalloc((void**) &result, sizeof(int) * THREAD_NUM);cudaMalloc((void**) &time, sizeof(clock_t));cudaMemcpy(gpudata, data, sizeof(int) * DATA_SIZE, cudaMemcpyHostToDevice);​sumOfSquares<<<1, THREAD_NUM, 0>>>(gpudata, result, time);​int sum[THREAD_NUM];clock_t time_used;cudaMemcpy(&sum, result, sizeof(int) * THREAD_NUM, cudaMemcpyDeviceToHost);cudaMemcpy(&time_used, time, sizeof(clock_t), cudaMemcpyDeviceToHost);cudaFree(gpudata);cudaFree(result);cudaFree(time); After execution, sum the partial results on the CPU for the final sum. int final_sum = 0;for (int i = 0; i < THREAD_NUM; i++) { final_sum += sum[i];}​printf("sum: %d time: %d\n", final_sum, time_used); This approach significantly improves performance by utilizing parallel execution on the GPU. Memory Access Pattern Optimization (Optimization 2.0) The efficiency of memory access on GPUs greatly impacts performance. GPUs use DRAM, which is most efficient when accessed in a continuous manner. The previous version’s access pattern did not fully exploit this due to the way threads accessed memory. Optimizing Memory Access: Adjust the sumOfSquares function to ensure that threads access memory in a more efficient, continuous pattern. __global__ static void sumOfSquares(int *num, int* result, clock_t* time) { const int tid = threadIdx.x; int sum = 0; int i; clock_t start; if (tid == 0) start = clock(); for (i = tid; i < DATA_SIZE; i += THREAD_NUM) { sum += num[i] * num[i]; }​ result[tid] = sum; if (tid == 0) *time = clock() - start;} By adjusting the loop to increment by THREAD_NUM, each thread accesses continuous memory locations, improving the efficiency of memory reads and significantly boosting performance. These optimizations demonstrate how leveraging CUDA’s parallel processing capabilities and understanding the GPU’s memory access patterns can lead to substantial performance gains in computational tasks. Further Parallelization with Blocks (Optimization 3.0) In CUDA, threads can be organized into blocks, where threads within the same block can share memory and synchronize their execution. This structure allows for more sophisticated parallelization strategies. Adjusting for Block Usage: Let’s refine our approach by using multiple blocks and threads to increase the number of parallel computations. 
Define Block and Thread Numbers: #define DATA_SIZE 1048576#define BLOCK_NUM 32#define THREAD_NUM 256 This configuration uses 32 blocks, each with 256 threads, totaling 8192 threads for computation. Kernel Function with Blocks: Modify the sumOfSquares kernel to utilize both blocks and threads. __global__ static void sumOfSquares(int *num, int* result, clock_t* time) { const int tid = threadIdx.x; const int bid = blockIdx.x; int sum = 0; int i; if (tid == 0) time[bid] = clock(); for (i = bid * THREAD_NUM + tid; i < DATA_SIZE; i += BLOCK_NUM * THREAD_NUM) { sum += num[i] * num[i]; } result[bid * THREAD_NUM + tid] = sum; if (tid == 0) time[bid + BLOCK_NUM] = clock();} This version calculates a segment of the sum per thread, with each block covering different segments. The start and end times are recorded for each block to measure performance. Main Function Adjustments for Blocks: Allocate memory and launch the kernel with blocks and threads. int* gpudata, *result;clock_t* time;cudaMalloc((void**) &gpudata, sizeof(int) * DATA_SIZE);cudaMalloc((void**) &result, sizeof(int) * THREAD_NUM * BLOCK_NUM);cudaMalloc((void**) &time, sizeof(clock_t) * BLOCK_NUM * 2);cudaMemcpy(gpudata, data, sizeof(int) * DATA_SIZE, cudaMemcpyHostToDevice);sumOfSquares<<<BLOCK_NUM, THREAD_NUM, 0>>>(gpudata, result, time);int sum[THREAD_NUM * BLOCK_NUM];clock_t time_used[BLOCK_NUM * 2];cudaMemcpy(&sum, result, sizeof(int) * THREAD_NUM * BLOCK_NUM, cudaMemcpyDeviceToHost);cudaMemcpy(&time_used, time, sizeof(clock_t) * BLOCK_NUM * 2, cudaMemcpyDeviceToHost);cudaFree(gpudata);cudaFree(result);cudaFree(time); Sum the partial results on the CPU and calculate the total execution time by finding the earliest start and the latest end time across all blocks. int final_sum = 0;for (int i = 0; i < THREAD_NUM * BLOCK_NUM; i++) { final_sum += sum[i];}clock_t min_start = time_used[0], max_end = time_used[BLOCK_NUM];for (int i = 1; i < BLOCK_NUM; i++) { if (min_start > time_used[i]) min_start = time_used[i]; if (max_end < time_used[i + BLOCK_NUM]) max_end = time_used[i + BLOCK_NUM];}printf("sum: %d time: %d\n", final_sum, max_end - min_start); This approach further improves the performance by utilizing more threads and blocks, significantly increasing the parallelization of the computation. Synchronization with Threads (Optimization 4.0) To optimize further, let’s have each block calculate the sum of its threads’ results, reducing the workload on the CPU. Using Shared Memory for Block-Wide Summation: __global__ static void sumOfSquares(int *num, int* result, clock_t* time) { extern __shared__ int shared[]; const int tid = threadIdx.x; const int bid = blockIdx.x; int i; if (tid == 0) time[bid] = clock(); shared[tid] = 0; for (i = bid * THREAD_NUM + tid; i < DATA_SIZE; i += BLOCK_NUM * THREAD_NUM) { shared[tid] += num[i] * num[i]; }​ __syncthreads(); // Synchronize threads in the block​ // Sum the results within the block if (tid == 0) { for (i = 1; i < THREAD_NUM; i++) { shared[0] += shared[i]; } result[bid] = shared[0]; }​ if (tid == 0) time[bid + BLOCK_NUM] = clock();} This version uses shared memory within each block for intermediate results, reducing the need for global memory access and improving performance. Main Function Adjustments for Synchronization: Update the main function to allocate shared memory for the kernel and adjust memory allocations and kernel invocation accordingly. This method significantly reduces the amount of data the CPU needs to sum, leading to further performance gains. 
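One plausible form of those main-function adjustments is sketched below (my reading of the text, not the tutorial's exact code): the third launch parameter now reserves THREAD_NUM ints of dynamic shared memory for the extern __shared__ array, and only one partial sum per block comes back to the host.

int *gpudata, *result;
clock_t *time;
cudaMalloc((void**) &gpudata, sizeof(int) * DATA_SIZE);
cudaMalloc((void**) &result, sizeof(int) * BLOCK_NUM);          // one sum per block now
cudaMalloc((void**) &time, sizeof(clock_t) * BLOCK_NUM * 2);
cudaMemcpy(gpudata, data, sizeof(int) * DATA_SIZE, cudaMemcpyHostToDevice);

// Third launch parameter = bytes of dynamic shared memory per block.
sumOfSquares<<<BLOCK_NUM, THREAD_NUM, THREAD_NUM * sizeof(int)>>>(gpudata, result, time);

int sum[BLOCK_NUM];
cudaMemcpy(sum, result, sizeof(int) * BLOCK_NUM, cudaMemcpyDeviceToHost);

int final_sum = 0;
for (int i = 0; i < BLOCK_NUM; i++) {
    final_sum += sum[i];   // far fewer partial sums for the CPU to add
}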
Further Optimization with Tree-Based Reduction (Optimization 5.0)

Tree-based reduction is a technique that can improve the efficiency of parallel summation within blocks by structuring the addition in a tree-like manner, reducing the depth of dependency and improving parallelism.

Implementing Tree-Based Reduction: Modify the kernel to use a tree-based approach for summing the results within a block, which involves halving the number of threads involved in each step of the summation. This approach minimizes synchronization and memory access overhead, further accelerating the computation.

Conclusion

Through these optimizations, starting from simple parallelization to more advanced techniques like using blocks, shared memory, synchronization, and tree-based reduction, we’ve demonstrated how to leverage CUDA’s capabilities to achieve substantial performance improvements in computations. Each step introduces new CUDA concepts and optimization strategies, showing the potential for significant speedups in GPU-accelerated applications.
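For completeness, here is one possible implementation of the tree-based reduction described in Optimization 5.0. This is a sketch I am adding (it is not the tutorial's own code), assuming THREAD_NUM is a power of two and the rest of the kernel is unchanged:

__global__ static void sumOfSquares(int *num, int *result, clock_t *time) {
    extern __shared__ int shared[];
    const int tid = threadIdx.x;
    const int bid = blockIdx.x;
    if (tid == 0) time[bid] = clock();

    shared[tid] = 0;
    for (int i = bid * THREAD_NUM + tid; i < DATA_SIZE; i += BLOCK_NUM * THREAD_NUM) {
        shared[tid] += num[i] * num[i];
    }
    __syncthreads();

    // Halve the number of active threads each step: the summation depth becomes
    // log2(THREAD_NUM) instead of a THREAD_NUM-long serial loop in thread 0.
    for (int offset = THREAD_NUM / 2; offset > 0; offset >>= 1) {
        if (tid < offset) {
            shared[tid] += shared[tid + offset];
        }
        __syncthreads();
    }

    if (tid == 0) {
        result[bid] = shared[0];
        time[bid + BLOCK_NUM] = clock();
    }
}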
Sign up Sign in Sign up Sign in TDS Archive Home Newsletter About An archive of data science, data analytics, data engineering, machine learning, and artificial… Accelerating AI/ML Model Training with Custom Operators On the potential benefits of creating model-specific GPU kernels and their application to optimizing the use of dynamically shaped tensors Chaim Rand Follow TDS Archive -- 1 Listen Share This post continues a long series of posts on the topic of analyzing and optimizing the runtime performance of training AI/ML models. The post could easily have been titled “PyTorch Model Performance Analysis and Optimization — Part 7”, but due to the weight of the topic at hand, we decided that a dedicated post (or series of posts) was warranted. In our previous posts, we have spoken at length about the importance of analyzing and optimizing your AI/ML workloads and the potentially significant impact it can have on the speed and costs of AI/ML model development. We have advocated for having multiple tools and techniques for profiling and optimizing training performance and have demonstrated many of these in practice. In this post we will discuss one of the more advanced optimization techniques — one that sets apart the true rock stars from the simple amateurs — creating a custom PyTorch operator in C++ and CUDA. Popular ML frameworks, such as PyTorch, TensorFlow, and JAX are typically built using SW components that are optimized for the underlying hardware that the AI/ML workload is run on, be it a CPU, a GPU, or an AI-specific ASIC such as a Google TPU. However, inevitably, you may find the performance of certain computation blocks that comprise your model to be unsatisfactory or in-optimal. Oftentimes, tuning the low-level code blocks — often referred to as kernels — to the specific needs of the AI/ML model, can result in significant speed-ups to the runtime performance of model training and inference. Such speed-ups can be accomplished by implementing functionalities that were previously unsupported (e.g., an advanced attention block), fusing together individual operations (e.g., as in PyTorch’s tutorial on multiply-add fusion), and/or optimizing existing kernels based on the specific properties of the model at hand. Importantly, the ability to perform such customization depends on the support of both the AI HW and the ML framework. Although our focus on this post will be on NVIDIA GPUs and the PyTorch framework, it should be noted that other AI ASICs and ML frameworks enable similar capabilities for custom kernel customization. NVIDIA enables the development of custom kernels for its GPUs through its CUDA toolkit. And PyTorch includes dedicated APIs and tutorials for exposing this functionality and integrating it into the design of your model. Our intention in this post is to draw attention to the power and potential of kernel customization and demonstrate its application to the unique challenge of training models with dynamically shaped tensors. Our intention is not — by any means — to replace the official documentation on developing custom operations. Furthermore, the examples we will share were chosen for demonstrative purposes only. We have made no effort to optimize these or verify their robustness, durability, or accuracy. If, based on this post, you choose to invest in AI/ML optimization via custom CUDA kernel development, you should be sure to undergo the appropriate training. 
Toy Model — The Challenge of Dynamically Shaped Tensors The prevalence of tensors with dynamic shapes in AI models can pose unique and exciting challenges with regards to performance optimization. We have already seen one example of this in a previous post in which we demonstrated how the use of boolean masks can trigger an undesired CPU-GPU sync event and advocated against their use. Generally speaking, AI accelerators tend to prefer tensors with fixed shapes over ones with dynamic shapes. Not only does it simplify the management of memory resources, but it also enables greater opportunity for performance optimization (e.g., using torch.compile). The toy example that follows demonstrates this challenge. Suppose we are tasked with creating a face detection model for a next-generation digital camera. To train this model we are provided with a dataset of one million 256x256 grayscale images and associated ground-truth bounding boxes for each image. Naturally, the number of faces in each image can vary greatly, with the vast majority of images containing five or fewer faces, and just a few containing dozens or even hundreds. The requirement from our model is to support all variations. Specifically, our model needs to support the detection of up to 256 faces in an image. To address this challenge, we define the following naïve model that generates bounding boxes and an accompanying loss function. In particular, we naïvely truncate the model outputs based on the number of target boxes rather than perform some form of assignment algorithm for matching between the bounding box predictions and ground truth targets. We (somewhat arbitrarily) choose the Generalized Intersection Over Union (GIOU) loss. A real-world solution would likely be far more sophisticated (e.g., it would include a loss component that includes a penalty for false positives). import torchimport torch.nn as nnimport torch.nn.functional as Fclass Net(nn.Module): def __init__(self): super().__init__() conv_layers = [] for i in range(4): conv_layers.append(nn.Conv2d(4 ** i, 4 ** (i + 1), 3, padding='same')) conv_layers.append(nn.MaxPool2d(2, 2)) conv_layers.append(nn.ReLU()) self.conv_layers = nn.Sequential(*conv_layers) self.lin1 = nn.Linear(256 * 256, 256 * 64) self.lin2 = nn.Linear(256 * 64, 256 * 4) def forward(self, x): x = self.conv_layers(x.float()) x = self.lin2(F.relu(self.lin1(x.view((-1, 256 * 256))))) return x.view((-1, 256, 4))def generalized_box_iou(boxes1, boxes2): # loosly based on torchvision generalized_box_iou_loss code epsilon = 1e-5 area1 = (boxes1[..., 2]-boxes1[..., 0])*(boxes1[..., 3]-boxes1[..., 1]) area2 = (boxes2[..., 2]-boxes2[..., 0])*(boxes2[..., 3]-boxes2[..., 1]) lt = torch.max(boxes1[..., :2], boxes2[..., :2]) rb = torch.min(boxes1[..., 2:], boxes2[..., 2:]) wh = (rb - lt).clamp(min=0) inter = wh[..., 0] * wh[..., 1] union = area1 + area2 - inter iou = inter / union.clamp(epsilon) lti = torch.min(boxes1[..., :2], boxes2[..., :2]) rbi = torch.max(boxes1[..., 2:], boxes2[..., 2:]) whi = (rbi - lti).clamp(min=0) areai = (whi[..., 0] * whi[..., 1]).clamp(epsilon) return iou - (areai - union) / areaidef loss_fn(pred, targets_list): batch_size = len(targets_list) total_boxes = 0 loss_sum = 0. 
for i in range(batch_size): targets = targets_list[i] num_targets = targets.shape[0] if num_targets > 0: sample_preds = pred[i, :num_targets] total_boxes += num_targets loss_sum += generalized_box_iou(sample_preds, targets).sum() return loss_sum / max(total_boxes, 1) Due to the varying number of faces per image, the loss is calculated separately for each individual sample rather than a single time (for the entire batch). In particular, the CPU will launch each of the GPU kernels associated with the loss function B times, where B is the chosen batch size. Depending on the size of the batch, this could entail a significant overhead, as we will see below. In the following block we define a dataset that generates random images and associated bounding boxes. Since the number of faces varies per image, we require a custom collate function for grouping samples into batches: from torch.utils.data import Dataset, DataLoaderimport numpy as np# A dataset with random images and gt boxesclass FakeDataset(Dataset): def __init__(self): super().__init__() self.size = 256 self.img_size = [256, 256] def __len__(self): return 1000000 def __getitem__(self, index): rand_image = torch.randint(low=0, high=256, size=[1]+self.img_size, dtype=torch.uint8) # set the distribution over the number of boxes to reflect the fact # that the vast majority of images have fewer than 10 faces n_boxes = np.clip(np.floor(np.abs(np.random.normal(0, 3))) .astype(np.int32), 0, 255) box_sizes = torch.randint(low=1, high=self.size, size=(n_boxes,2)) top_left = torch.randint(low=0, high=self.size-1, size=(n_boxes, 2)) bottom_right = torch.clamp(top_left + box_sizes, 0, self.size -1) rand_boxes = torch.concat((top_left,bottom_right), dim = 1) return rand_image, rand_boxes.to(torch.uint8)def collate_fn(batch): images = torch.stack([b[0] for b in batch],dim=0) boxes = [b[1] for b in batch] return images, boxestrain_loader = DataLoader( dataset = FakeDataset(), batch_size=1024, pin_memory=True, num_workers=16, collate_fn=collate_fn) Typically, each training step starts with copying the training batch from the host (CPU) to the device (GPU). When our data samples are of fixed size, they are copied in batches. However, one of the implications of the varying number of faces per image is that the bounding box targets of each sample is copied separately requiring many more individual copy operations. def data_to_device(data, device): if isinstance(data, (list, tuple)): return type(data)( data_to_device(val, device) for val in data ) elif isinstance(data, torch.Tensor): return data.to(device=device, non_blocking=True) Lastly, we define our training/evaluation loop. For the purposes of our discussion, we have chosen to focus just on the forward pass of our training loop. Note the inclusion of a PyTorch profiler object and our use of explicit synchronization events (to facilitate performance evaluation of different portions of the forward pass). 
device = torch.device("cuda:0")model = torch.compile(Net()).to(device).train()# forward portion of training loop wrapped with profiler objectwith torch.profiler.profile( schedule=torch.profiler.schedule(wait=5, warmup=5, active=10, repeat=1), on_trace_ready=torch.profiler.tensorboard_trace_handler('/tmp/perf/'), profile_memory=True) as prof: for step, data in enumerate(train_loader): with torch.profiler.record_function('copy data'): images, boxes = data_to_device(data, device) torch.cuda.synchronize(device) with torch.profiler.record_function('forward'): with torch.autocast(device_type='cuda', dtype=torch.bfloat16): outputs = model(images) torch.cuda.synchronize(device) with torch.profiler.record_function('calc loss'): loss = loss_fn(outputs, boxes) torch.cuda.synchronize(device) prof.step() if step > 30: break # filter and print profiler results event_list = prof.key_averages() for i in range(len(event_list) - 1, -1, -1): if event_list[i].key not in ['forward', 'calc loss', 'copy data']: del event_list[i] print(event_list.table()) Performance Analysis Running our script on a Google Cloud g2-standard-16 VM (with a single L4 GPU), a dedicated deep learning VM image, and PyTorch 2.4.0, generates the output below (which we trimmed for readability). ------------- ------------ ------------ Name CPU total CPU time avg------------- ------------ ------------ copy data 288.164ms 28.816ms forward 1.192s 119.221ms calc loss 9.381s 938.067ms------------- ------------ ------------Self CPU time total: 4.018sSelf CUDA time total: 10.107s Despite the fact that the loss function contains far fewer operations, it completely dominates the overall step time. The overhead of the repeated invocations of the underlying GPU kernels (for each sample in the batch) is clearly evident in the Trace view in TensorBoard: Optimization Through Concatenation One way to reduce the number of calls to the loss function is to combine together all of the valid boxes each batch using concatenation, as shown in the following block. def loss_with_concat(pred, targets_list): bs = len(targets_list) all_targets = torch.concat(targets_list, dim = 0) num_boxes = [targets_list[i].shape[0] for i in range(bs)] all_preds = torch.concat([pred[i,: num_boxes[i]] for i in range(bs)], dim=0) total_boxes = sum(num_boxes) loss_sum = generalized_box_iou(all_targets, all_preds).sum() return loss_sum/max(total_boxes, 1) The results of this optimization are captured below. ------------- ------------ ------------ Name CPU total CPU time avg------------- ------------ ------------ copy data 522.326ms 52.233ms forward 1.187s 118.715ms calc loss 254.047ms 25.405ms------------- ------------ ------------Self CPU time total: 396.674msSelf CUDA time total: 1.871s The concatenation optimization resulted in a 37X (!!) speed-up of the loss function. Note, however, that it did not address the overhead of the individual host-to-device copies of the sample ground-truth data. This overhead is captured in the screenshot below from TensorBoard’s Trace view: Optimization Through Padding A common approach to avoiding the use of dynamically shaped tensors is padding. In the following code block, we modify the collate function to pad (with zeros) the ground-truth bounding-boxes of each data sample to the maximum number of supported boxes, 256. (Note that the padding could also have been performed in the Dataset class.) 
def collate_with_padding(batch): images = torch.stack([b[0] for b in batch],dim=0) padded_boxes = [] for b in batch: p = torch.nn.functional.pad( b[1], (0, 0, 0, 256 - b[1].shape[0]), value = 0) padded_boxes.append(p) boxes = torch.stack(padded_boxes,dim=0) return images, boxes Padding the samples to fixed sized tensors enables us to copy the ground truth of the batch with a single call. It also allows us to compute the loss with a single invocation of the loss function. Note, that this method requires masking the resultant loss, as shown below, so that only the valid boxes are taken into consideration. def loss_with_padding(pred, targets): mask = (targets[...,3] > 0).to(pred.dtype) total_boxes = mask.sum() loss = generalized_box_iou(targets, pred) masked_loss = loss*mask loss_sum = masked_loss.sum() return loss_sum/torch.clamp(total_boxes, 1) The resultant runtime performance is captured below: ------------- ------------ ------------ Name CPU total CPU time avg------------- ------------ ------------ copy data 57.125ms 5.713ms forward 1.315s 131.503ms calc loss 18.438ms 1.844ms------------- ------------ ------------Self CPU time total: 11.723msSelf CUDA time total: 1.378s Note the nearly 10X boost in the data copy and the additional 14X boost in the loss function performance. Keep in mind that padding may increase the use of the GPU memory. In our case, this increase is less than 1%. While the runtime of our loss function has improved dramatically, we note that the vast majority of the calculations that are performed in the loss functions are immediately masked away. We can’t help but wonder whether there is a way to further improve the performance by avoiding these redundant operations. In the next section, we will explore the opportunities provided by using custom CUDA kernels. Creating a Custom CUDA Kernel Many tutorials will highlight the difficulty of creating CUDA kernels and the high entrance barrier. While mastering CUDA development and tuning kernels to maximize the utilization of the GPU could indeed require years of experience as well as an intimate understanding of the GPU architecture, we strongly believe that even a novice (but ambitious) CUDA enthusiast/ML developer can succeed at — and greatly benefit from — building custom CUDA kernels. In this section we will take PyTorch’s (relatively simple) example of a C++/CUDA extension for PyTorch and enhance it with a GIOU kernel. We will do this in two stages: First we will naïvely carry over all of the GIOU logic to C++/CUDA to assess the performance impact of kernel fusion. Then, we will take advantage of our new-found low-level control to add conditional logic and reduce unneeded arithmetic operations. Developing CUDA kernels allows you to determine the core logic that is performed in each of the GPU threads and how these are distributed onto the underlying GPU streaming multiprocessors (SMs). Doing this in the most optimal manner requires an expert understanding of the GPU architecture including the different levels of GPU memory, memory bandwidth, the on-chip acceleration engines (e.g., TensorCores), the supported number of concurrent threads per SM and how they are scheduled, and much much more. What makes things even more complicated is that these properties can vary between GPU generations and flavors. See this blog for a very basic, but very easy, introduction to CUDA. 
Step 1 — Kernel Fusion Looking back at the Trace view of our last experiment, you may notice that the forward pass of our loss calculation includes roughly thirty independent arithmetic operations which translate to launching and running an independent CUDA kernel (as can be seen by simply counting the number of cudaLaunchKernel events). This can negatively impact performance in a number of ways. For example: Optimization through kernel fusion attempts to reduce this overhead by combining these operations into a lower number of kernels so as to reduce the overhead of multiple kernels. In the code block below, we define a kernel that performs our GIOU on a single bounding-box prediction-target pair. We use a 1-D grid to allocate thread blocks of size 256 each where each block corresponds to one sample in the training batch and each thread corresponds to one bounding box in the sample. Thus, each thread — uniquely identified by a combination of the block and thread IDs — receives the predictions (boxes1) and targets (boxes2) and performs the GIOU calculation on the single bounding box determined by the IDs. As before, the “validity” of the bounding box is controlled by the value of the target boxes. In particular, the GIOU is explicitly zeroed wherever the corresponding box is invalid. #include <torch/extension.h>#include <cuda.h>#include <cuda_runtime.h>namespace extension_cpp {__global__ void giou_kernel(const float* boxes1, const float* boxes2, float* giou, bool* mask) { int idx = blockIdx.x * blockDim.x + threadIdx.x; bool valid = boxes2[4*idx+3] != 0; mask[idx] = valid; const float epsilon = 1e-5; const float* box1 = &boxes1[idx * 4]; const float* box2 = &boxes2[idx * 4]; // Compute area of each box float area1 = (box1[2] - box1[0]) * (box1[3] - box1[1]); float area2 = (box2[2] - box2[0]) * (box2[3] - box2[1]); // Compute the intersection float left = max(box1[0], box2[0]); float top = max(box1[1], box2[1]); float right = min(box1[2], box2[2]); float bottom = min(box1[3], box2[3]); float inter_w = max(right - left, 0); float inter_h = max(bottom - top, 0); float inter_area = inter_w * inter_h; // Compute the union area float union_area = area1 + area2 - inter_area; // IoU float iou_val = inter_area / max(union_area, epsilon); // Compute the smallest enclosing box float enclose_left = min(box1[0], box2[0]); float enclose_top = min(box1[1], box2[1]); float enclose_right = max(box1[2], box2[2]); float enclose_bottom = max(box1[3], box2[3]); float enclose_w = max(enclose_right - enclose_left, 0); float enclose_h = max(enclose_bottom - enclose_top, 0); float enclose_area = enclose_w * enclose_h; float result = iou_val - (enclose_area-union_area)/max(enclose_area, epsilon); // Generalized IoU giou[idx] = result * valid;}at::Tensor giou_loss_cuda(const at::Tensor& a, const at::Tensor& b) { TORCH_CHECK(a.sizes() == b.sizes()); TORCH_CHECK(a.dtype() == at::kFloat); TORCH_CHECK(b.dtype() == at::kFloat); TORCH_INTERNAL_ASSERT(a.device().type() == at::DeviceType::CUDA); TORCH_INTERNAL_ASSERT(b.device().type() == at::DeviceType::CUDA); int bs = a.sizes()[0]; at::Tensor a_contig = a.contiguous(); at::Tensor b_contig = b.contiguous(); at::Tensor giou = torch::empty({a_contig.sizes()[0], a_contig.sizes()[1]}, a_contig.options()); at::Tensor mask = torch::empty({a_contig.sizes()[0], a_contig.sizes()[1]}, a_contig.options().dtype(at::kBool)); const float* a_ptr = a_contig.data_ptr<float>(); const float* b_ptr = b_contig.data_ptr<float>(); float* giou_ptr = giou.data_ptr<float>(); bool* mask_ptr = 
mask.data_ptr<bool>(); // Launch the kernel // The number of blocks is set according to the batch size. // Each block has 256 threads corresponding to the number of boxes per sample giou_kernel<<<bs, 256>>>(a_ptr, b_ptr, giou_ptr, mask_ptr); at::Tensor total_boxes = torch::clamp(mask.sum(), 1); torch::Tensor loss_sum = giou.sum(); return loss_sum/total_boxes;}// Registers CUDA implementations for giou_lossTORCH_LIBRARY_IMPL(extension_cpp, CUDA, m) { m.impl("giou_loss", &giou_loss_cuda);}} To complete the kernel creation, we need to add the appropriate C++ and Python operator definitions (see muladd.cpp and ops.py) // Add the C++ definitionm.def(“giou_loss(Tensor a, Tensor b) -> Tensor”); # define the Python operatordef giou_loss(a: Tensor, b: Tensor) -> Tensor: return torch.ops.extension_cpp.giou_loss.default(a, b) To compile our kernel, run the installation script (pip install .) from the base directory. The following block uses our newly defined GIOU CUDA kernel: def loss_with_kernel(pred, targets): pred = pred.to(torch.float32) targets = targets.to(torch.float32) import extension_cpp return extension_cpp.ops.giou_loss(pred, targets) Note the explicit casting to torch.float32. This is a rather expensive operation that could be easily avoided by enhancing our CUDA kernel support. We leave this as an exercise to the reader :). The results of running our script with our custom kernel are displayed below. ------------- ------------ ------------ Name CPU total CPU time avg ------------- ------------ ------------ copy data 56.901ms 5.690ms forward 1.327s 132.704ms calc loss 6.287ms 628.743us ------------- ------------ ------------Self CPU time total: 6.907msSelf CUDA time total: 1.380s Despite the naïveté of our kernel (and our inexperience at CUDA), we have boosted the loss function performance by an additional ~3X over our previous experiment (628 microseconds compare to 1.8 milliseconds). As noted above, this can be improved even further without much effort. Step 2 — Conditional Execution The thread-level control that CUDA provides us allows us to add a conditional statement that avoids computation on the invalid bounding boxes: __global__ void giou_kernel(const float* boxes1, const float* boxes2, float* giou, bool* mask) { int idx = blockIdx.x * blockDim.x + threadIdx.x; bool valid = boxes2[4*idx+3] != 0; mask[idx] = valid; if (valid) { const float* box1 = &boxes1[idx * 4]; const float* box2 = &boxes2[idx * 4]; giou[idx] = compute_giou(box1, box2); } else { giou[idx] = 0; }} In the case of our kernel, the impact on runtime performance is negligible. The reason for this (presumably) is that our kernel is relatively small to the point that its runtime is negligible compared to the time required to load and instantiate it. The impact of our conditional execution might become apparent for larger kernels only. (The impact, as a function of the kernel size can be assessed by making our GIOU output dependent on a for loop that we run for a varying number of fixed steps. This, too, we leave as an exercise :).) It is also important to take into consideration how a conditional execution flow behaves on CUDA’s SIMT architecture, particularly, the potential performance penalty when threads belonging to the same warp diverge. 
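For readers who want to try the "exercise" suggested above, one hedged way to scale the kernel's size is to repeat the per-box computation a configurable number of times; the steps parameter and the reuse of the compute_giou helper are my assumptions for illustration, not part of the article's code:

__global__ void giou_kernel_scaled(const float* boxes1, const float* boxes2,
                                   float* giou, bool* mask, int steps) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    bool valid = boxes2[4 * idx + 3] != 0;
    mask[idx] = valid;
    float acc = 0.f;
    if (valid) {
        const float* box1 = &boxes1[idx * 4];
        const float* box2 = &boxes2[idx * 4];
        // Increase `steps` to grow the kernel's arithmetic work and observe when
        // skipping invalid boxes starts to pay off. Assumes steps >= 1.
        for (int s = 0; s < steps; ++s) {
            acc += compute_giou(box1, box2);
        }
        acc /= steps;
    }
    giou[idx] = acc;
}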
------------- ------------ ------------
Name          CPU total    CPU time avg
------------- ------------ ------------
copy data     57.008ms     5.701ms
forward       1.318s       131.850ms
calc loss     6.234ms      623.426us
------------- ------------ ------------
Self CPU time total: 7.139ms
Self CUDA time total: 1.371s

Results and Next Steps

We summarize the results of our experiments in the table below. Importantly, our work is not done. Admittedly, we have taken some shortcuts in the example we have shared:

Summary

In this post we demonstrated the potential impact of a custom CUDA kernel on the runtime performance of AI/ML applications. We attempted, in particular, to utilize the low-level control enabled by CUDA to introduce a conditional flow to limit the number of redundant arithmetic operations in the case of dynamically shaped inputs. While the performance boost resulting from the fusion of multiple kernel operations was significant, we found the size of our kernel to be too small to benefit from the conditional execution flow.

Throughout many of our posts we have emphasized the importance of having multiple tools and techniques for optimizing ML and reducing its costs. Custom kernel development is one of the most powerful techniques at our disposal. However, for many AI/ML engineers, it is also one of the most intimidating techniques. We hope that we have succeeded in convincing you that this opportunity is within reach of any ML developer and that it does not require major specialization in CUDA.

In recent years, new frameworks have been introduced with the goal of making custom kernel development and optimization more accessible to AI/ML developers. One of the most popular of these frameworks is Triton. In our next post we will continue our exploration of the topic of custom kernel development by assessing the capabilities and potential impact of developing Triton kernels.
CUDA 3: Your Checklist for Optimizing CUDA Kernels

Rimika Dhara

Hi! This is the fourth article in the series I have been writing about programming in CUDA. For this post, I want to talk about how to optimize CUDA kernels and how we can build intuition behind kernel optimizations. I am not going to provide code here, just simple concepts and strategies that we can use to optimize the kernels. If you’re catching up, feel free to check out the previous three articles I’ve written on this topic. Let’s get started!

1. Understanding the Issue

So, you have your CUDA kernel built; the logic is sound, and it’s giving you accurate results. But it seems to be taking way too long. Whether it’s part of a deep learning model or a standalone kernel, slow CUDA kernels can become major performance bottlenecks. Just as with fixing a broken car, it helps to know which part to start with. When we talk about performance, we often aim to maximize the GPU’s peak performance, which hinges on three things: compute, memory bandwidth, and overhead. Let’s discuss those:
Sign up Sign in Sign up Sign in Unlock Warp‑Level Performance: DeepSeek’s Practical Techniques for Specialized GPU Tasks Amin Sedaghat Follow -- Listen Share Introduction Amid the push toward trillion‐parameter models, the DeepSeek team has showcased astonishing engineering feats — like partitioning Streaming Multiprocessors (SMs) into specialized warp‐level communication tasks, weaving in lower‐level PTX instructions, and auto‐tuning “communication chunks” for concurrency. Their approach elegantly circumvents conventional limitations. In fact, DeepSeek achieves a level of warp specialization that NVIDIA itself hasn’t fully tackled. Orchestrating InfiniBand (IB) ↔ NVLink communication pipelines at warp granularity, minimizing overhead and pushing performance even further. Reflecting this focus on warp‐level allocations, the DeepSeek paper notes: [W]e employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels… In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs. We’ll begin with a simple 4×4 tile transpose, demonstrating how warp shuffle intrinsics can remove unneeded block‐wide synchronization. Then we’ll grow that idea into advanced warp‐level partitioning for real‐world tasks like all‐to‐all communication in multi‐node MoE workloads. Even if your own applications aren’t at DeepSeek’s hyper‐scale, you’ll still find these warp‐specialization ideas invaluable for high‐performance GPU work. 1. The Challenge: Rearranging Data at Warp Scope Sometimes, you only need 32 threads (or even fewer) to rearrange data. Yet the default CUDA approach forces an entire block to synchronize (via __syncthreads()), which can stall many threads that aren’t involved. DeepSeek’s large‐scale multi‐node MoE training pipeline demonstrates how such optimizations compound. Individual warps might handle IB sends, NVLink receives, or matrix‐multiplication tasks — enabling partial concurrency and better GPU utilization. By working at the warp scope, DeepSeek avoids the bottlenecks of block‐wide synchronization. Here’s a simple diagram illustrating the difference between full block sync vs. focusing only on warp‐level coordination: 2. A Naive 4×4 Transpose (Block-Level Sync) Below is a minimal kernel that uses shared memory and `__syncthreads()` to transpose a 4×4 chunk, causing an entire block to wait even though only 16 threads are actually active: // Naive 4×4 Transpose (Block-Level Sync)__global__ void transpose_naive(float* output, const float* input){ __shared__ float tile[16]; // 4×4 tile int tid = threadIdx.x; // Only first 16 threads do the transfer if (tid < 16) { tile[tid] = input[tid]; } __syncthreads(); // Block-wide barrier if (tid < 16) { int row = tid / 4; int col = tid % 4; int transposedId = col * 4 + row; output[transposedId] = tile[tid]; }} Explanation: 3. Warp-Level Optimization with Shuffle Intrinsics Now let’s refine the approach: if data exchange is restricted to a single warp, we can avoid shared memory altogether by using warp shuffle intrinsics (__shfl_sync, etc.). This drastically cuts latency and removes block‐wide synchronization, key enablers in DeepSeek’s method of assigning different communication tasks to separate warps. 
Below is the warp shuffle version. Notice that we only use half a warp (16 threads), so we manage “inactive” lanes with __activemask(). /////////////////////////////////////////////////////////////////////// // Warp-Level 4×4 Transpose with __shfl_sync (Partial Warp Example) ///////////////////////////////////////////////////////////////////////__global__ void transpose_warp_shfl(float* output, const float* input){ int tid = threadIdx.x; // Contains the set of lanes actually active in the warp unsigned int mask = __activemask(); if (tid < 16) { float val = input[tid]; int row = tid / 4; int col = tid % 4; // Identify which lane originally held the transposed element int srcTid = col * 4 + row; // Use the valid lane mask to avoid undefined data from inactive lanes float transposedVal = __shfl_sync(mask, val, srcTid, 32); output[tid] = transposedVal; }} Explanation: 4. PTX and Warp Specialization: The Broader Context DeepSeek’s large‐scale approach extends this same strategy beyond small tiles. Specialized warps handle IB sending, NVLink forwarding, or MoE gating without major stalls. This is done by carefully constructing kernels (and sometimes diving into PTX) to: NVIDIA doesn’t provide a direct “pin warp to IB/NVLink” feature, yet DeepSeek’s team discovered and fine‐tuned these warp partitions manually. They also demonstrated that with customized PTX, they could push concurrency and reduce L2 traffic to levels not typically seen in standard CUDA pipelines. 5. Practical Pitfalls in Real Scenarios If you only use part of a warp (say 16 out of 32 lanes), it’s crucial to manage all remaining “inactive” lanes properly. Otherwise, unintended reads or writes could occur. The CUDA intrinsic __activemask() helps you safely identify which lanes are active, ensuring data exchange remains correct. Meanwhile, if your data requirements exceed a single warp’s scope, you can’t rely on warp shuffle intrinsics alone. Larger tiles or computations typically call for shared memory, block‐level synchronization, or more intricate multi‐warp patterns. Another important detail is memory visibility. Even though you’ve sidestepped block‐wide synchronization in warp‐only tasks, once another warp (or block) needs to read or write partial results, the proper fences and synchronization come back into the picture. Within‐warp data remains invisible to the wider block or grid unless you employ standard barriers. Finally, specialized warps might coordinate both IB/NVLink traffic and local computation in theory, but making this overlap work in practice requires careful tuning and real‐world profiling. You’ll likely need to refine kernel launches, stream priorities, and chunk sizes, among other parameters. 6. When (and When Not) To Use Inline PTX Even though __shfl_sync compiles into efficient instructions, sometimes you need super‐fine control , like specifying certain registers or caching modes that the compiler doesn’t expose. That’s where inline PTX steps in. But it’s a double‐edged sword: Inline PTX should be your last resort. Nevertheless, DeepSeek’s customized instructions highlight how effective it can be. They overcame cross‐node communication bottlenecks by reducing L2 usage and carefully scheduling instructions at warp granularity, going far beyond what the default CUDA runtime typically offers. 7. Key Takeaways When your operation fits within the confines of a single warp, shuffle intrinsics are your best friend, eliminating the need for block‐wide barriers and extra memory transfers. 
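As a tiny illustration of that point (my own sketch, not code from the article): a full-warp sum reduction needs nothing but shuffle intrinsics — no shared memory and no __syncthreads() at all.

__inline__ __device__ float warp_reduce_sum(float val) {
    // All 32 lanes participate, so the full mask is safe here; partial warps
    // would need the __activemask() handling discussed below.
    for (int offset = 16; offset > 0; offset >>= 1) {
        val += __shfl_down_sync(0xffffffffu, val, offset);
    }
    return val;  // lane 0 ends up holding the sum of the whole warp
}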
Just remember that partial warps demand careful handling: inactive lanes can wreak havoc if they're not properly managed. With the help of __activemask(), you can safeguard your data against unintended reads or writes, ensuring your intra-warp tasks stay predictable.

As you scale up toward multi-warp processes, shared memory and block-level sync steps inevitably rejoin the picture. This doesn't mean you should fear them; rather, treat them as useful tools within more intricate concurrency patterns. Borrowing a page from DeepSeek's playbook, you can dedicate entire warps to IB or NVLink communication and let other warps handle compute, resulting in better resource utilization and near-continuous workstreams when concurrency is tuned correctly.

Lastly, if even the most aggressive compiler flags and CUDA intrinsics can't hit your performance target, inline PTX stands as your final frontier. It grants you unrivaled control over registers, caching, and instruction scheduling, albeit at the cost of greater development complexity and maintenance. Just as DeepSeek did, tread carefully in this realm: successes here can be monumental, but so too can the engineering overhead.

Conclusion

DeepSeek's mastery of warp-level transposition, communication overlap, and custom PTX optimizations proves that GPUs are more malleable than most developers, including myself, realize. By restricting data exchange to the warp and dynamically allocating warp roles for communication or compute, the DeepSeek team achieves concurrency levels that standard tools seldom provide. While everyone's chasing the latest GPU that costs more than a brand new car, some crafty engineers at DeepSeek were like "Hold my coffee (or most likely tea)" and achieved what NVIDIA themselves hadn't tackled yet. Sometimes all it takes is being clever with the threads you've got.

References: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf
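A footnote to section 5 above: when the set of participating lanes is fixed and known at compile time, an explicit lane mask can be used instead of __activemask(). The sketch below is an editor's illustration, not code from the original post: it is the same 4×4 transpose with a hard-coded 16-lane mask (0x0000FFFF). __shfl_sync itself synchronizes the lanes named in the mask, so the behavior matches the __activemask() version as long as exactly lanes 0–15 reach the call.

// 4×4 transpose with an explicit compile-time lane mask (lanes 0..15).
// Assumes a launch where threads 0..15 of the first warp are the only ones
// entering the branch, e.g. <<<1, 16>>> or <<<1, 32>>>.
__global__ void transpose_warp_shfl_fixed_mask(float* output, const float* input)
{
    const unsigned int mask = 0x0000FFFFu;  // lanes 0..15 participate
    int tid = threadIdx.x;

    if (tid < 16) {
        float val = input[tid];
        int row = tid / 4;
        int col = tid % 4;
        int srcTid = col * 4 + row;         // lane holding the transposed element

        float transposedVal = __shfl_sync(mask, val, srcTid, 32);
        output[tid] = transposedVal;
    }
}

The trade-off: a fixed mask documents intent and does not depend on which lanes happen to be converged at the call site, but it hard-wires the launch shape, whereas __activemask() adapts to whatever is currently active.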
Mirage: a multi-level superoptimizer for faster tensor computation in LLMs
by Sachin Kumar

Current deep neural network (DNN) frameworks generally specify DNN computation using tensor programs, which are directed acyclic graphs whose nodes and edges represent tensor algebra operators and tensors. To optimize an input tensor program, existing frameworks like PyTorch and TensorFlow use manually designed rules to map the tensor program to expert-written GPU kernels, which requires extensive engineering effort to design and implement optimization rules.

In this paper [1], the authors present Mirage, the first multi-level superoptimizer for tensor programs. Mirage is able to automatically discover and verify sophisticated tensor program optimizations that require joint optimization of algebraic transformations, schedule transformations, and discovery of new custom kernels.

Mirage Workflow and Key Idea
i) Key idea
ii) Workflow
a) Program partitioning
b) Expression-guided 𝜇Graph generator
c) Probabilistic equivalence verifier
d) 𝜇Graph optimizer

Mirage outperforms existing systems by up to 3.5×, by exploiting subtle custom kernels and optimizations missing in existing systems.

Multi-Level Graph Representation
a) GPU hierarchy
b) Kernel graph
c) Block graph
d) Grid dimensions
e) For-loop dimensions
f) Thread graph
g) Tensor layout

Case Study: Group-Query Attention

Expression-Guided 𝜇Graph Generator

Mirage's 𝜇Graph generator automatically discovers potential 𝜇Graphs for an input tensor program by finding 𝜇Graphs that capture optimizations at all of the kernel, block, and thread levels.
i) Kernel and block graph generation
ii) Thread graph construction
iii) Pruning via abstract expressions

Probabilistic Equivalence Verifier

𝜇Graph Optimizer

Evaluation
i) Implementation
ii) Experimental setup
iii) Performance results

Conclusion

[1] Paper discussed: https://www.cs.cmu.edu/~zhihaoj2/papers/mirage.pdf
GitHub link for this paper: https://github.com/mirage-project/mirage
Optimizing Vector Dot Product in CUDA: Exploring Shared Memory and Reduction Techniques
by Dhanush

Introduction

In the realm of parallel computing, CUDA (Compute Unified Device Architecture) provides a powerful platform to harness the full potential of GPUs. This blog post dives into the intricacies of implementing a vector dot product using CUDA, focusing on the efficient use of shared memory, handling race conditions, and applying reduction techniques to optimize performance.

Understanding the Vector Dot Product

The dot product of two vectors is a fundamental operation in many scientific and engineering applications. For vectors a and b of size n, the dot product is calculated as:

dot_product = Σ_{i=0}^{n−1} a[i] · b[i]

In this post, we'll explore how to perform this operation on a GPU, leveraging CUDA for parallel computation.

1. Code Overview

The primary goal of this CUDA program is to compute the dot product of two vectors a and b using GPU parallelism. The result of the dot product is stored in an array c, where each element corresponds to a partial dot product computed by a block of threads.

2. Code Breakdown

Here's a step-by-step explanation of the code:

#include <iostream>
#define n 10

using namespace std;

const int threadperblock = 256;

__global__ void dot_product(int *a, int *b, int *c)
{
    __shared__ float cache[threadperblock];

    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int cacheIndex = threadIdx.x;

    // Each thread accumulates a partial sum over a grid-stride loop
    float temp = 0;
    while (tid < n) {
        temp += a[tid] * b[tid];
        tid += blockDim.x * gridDim.x;
    }

    cache[cacheIndex] = temp;
    __syncthreads();

    // Tree reduction in shared memory (blockDim.x must be a power of two)
    int i = blockDim.x / 2;
    while (i != 0) {
        if (cacheIndex < i)
            cache[cacheIndex] += cache[cacheIndex + i];
        __syncthreads();
        i /= 2;
    }

    // Thread 0 writes this block's partial result
    if (cacheIndex == 0) {
        c[blockIdx.x] = cache[0];
    }
}

3. Host Code

int main()
{
    int a[n], b[n], c[n], cpu_stored, stored_dot_val;
    int *dev_a, *dev_b, *dev_c;

    for (int i = 0; i < n; i++) {
        a[i] = i + 2;
        b[i] = i + 1;
    }

    cpu_stored = 0;
    for (int i = 0; i < n; i++) {
        cpu_stored += a[i] * b[i];
    }
    cout << "CPU Stored Value: " << cpu_stored << endl;

    cudaMalloc((void **) &dev_a, n * sizeof(int));
    cudaMalloc((void **) &dev_b, n * sizeof(int));
    cudaMalloc((void **) &dev_c, n * sizeof(int));

    cudaMemcpy(dev_a, a, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, n * sizeof(int), cudaMemcpyHostToDevice);

    int blockpergrid = (n + threadperblock - 1) / threadperblock;
    dot_product<<<blockpergrid, threadperblock>>>(dev_a, dev_b, dev_c);

    cudaMemcpy(c, dev_c, blockpergrid * sizeof(int), cudaMemcpyDeviceToHost);

    // Sum the per-block partial results on the CPU
    stored_dot_val = 0;
    for (int i = 0; i < blockpergrid; i++) {
        stored_dot_val += c[i];
    }
    cout << "Stored Value is: " << stored_dot_val << endl;

    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);

    return 0;
}

Conclusion

In this blog post, we've explored an efficient CUDA implementation for computing the dot product of vectors, emphasizing the use of shared memory and the reduction technique to enhance performance. By leveraging shared memory, we minimized global memory accesses, which significantly improves computational speed. The reduction technique, applied within each block, ensures that partial results are combined effectively, making the algorithm more efficient. Handling race conditions and using synchronization barriers like __syncthreads() is crucial for ensuring the correctness of parallel computations. This approach allows multiple threads to safely collaborate and achieve accurate results.
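One possible refinement, not part of the implementation above: once the shared-memory reduction is down to the last warp's worth of partial sums, the remaining steps can be finished with warp shuffles, which need neither shared memory nor __syncthreads(). A sketch, assuming blockDim.x is a power of two and at least 64:

// Finish a block reduction with warp shuffles once one warp of values remains.
__inline__ __device__ float warpReduceSum(float val)
{
    // Each step halves the number of distinct values held across the warp.
    for (int offset = 16; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xFFFFFFFFu, val, offset);
    return val;  // lane 0 holds the warp's total
}

// Inside dot_product, the tail of the reduction could then look like:
//
//     int i = blockDim.x / 2;
//     while (i > 32) {                      // stop once 64 values remain
//         if (cacheIndex < i)
//             cache[cacheIndex] += cache[cacheIndex + i];
//         __syncthreads();
//         i /= 2;
//     }
//     if (cacheIndex < 32) {
//         float v = cache[cacheIndex] + cache[cacheIndex + 32];
//         v = warpReduceSum(v);
//         if (cacheIndex == 0) c[blockIdx.x] = v;
//     }

Because threads 0–31 form a full warp, the shuffle mask can cover all 32 lanes, and the last five reduction steps no longer pay for block-wide barriers.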
For a complete view of the CUDA code and further exploration, visit my GitHub repository. There, you can find the full implementation and experiment with the code to better understand the concepts discussed. Happy coding, and feel free to reach out with any questions or feedback!
Hardware Low-Level: CUDA Kernel Optimization from Scratch
by Nova

Reading notes for https://siboehm.com/articles/22/CUDA-MMM and https://www.xiaohongshu.com/explore/66d6bf4a000000001d015511?app_platform=ios&app_version=8.55.8&share_from_user_hidden=true&xsec_source=app_share&type=normal&xsec_token=CBBbLBO8SY12Z54WFDil8dNRGqxlIGrO42iILYCU8gsKA=&author_share=1&xhsshare=CopyLink&shareRedId=Nz9ERDY2Ok86Ozg4PEo0Rzw7OUo3SkpB&apptime=1729026758&share_id=7eb4d12349894b4085d7b02ed3465cb0

Matrix multiplication is one of the most important algorithms in deep learning on GPUs, accounting for nearly all FLOPs. The author wrote a CUDA matrix multiplication kernel from scratch and progressively optimized it, eventually achieving performance close to cuBLAS. The goal is to understand the performance characteristics of modern GPUs through this process, including global memory access coalescing, shared memory caching, and utilization optimization.

Optimization Process

Initial kernel: the simplest implementation has each thread calculate a single element of matrix C. The performance of this method is low due to inefficient memory access. Performance: 309 GFLOPs, 1.3% of cuBLAS.

Global memory access coalescing: this step adjusts the data access pattern so that threads within the same warp read consecutive data, achieving coalesced global memory accesses. Performance improves to 1986.5 GFLOPs, 8.5% of cuBLAS.

Shared memory caching: by caching blocks of matrices A and B in shared memory, the number of accesses to global memory is significantly reduced, which greatly decreases the overhead of memory accesses and thus improves computing efficiency. Performance reaches 2980.3 GFLOPs, 12.8% of cuBLAS.

1D block division: each thread calculates multiple elements of matrix C, which reduces the frequency of shared memory accesses. Performance significantly improves to 8474.7 GFLOPs, 36.5% of cuBLAS.

2D block division: by further dividing the work, each thread calculates a larger block of matrix C, increasing computational density. Performance reaches 15971.7 GFLOPs, 68.7% of cuBLAS.

Vectorized memory access: vectorization leverages hardware parallelism by using data types of width 4 (e.g., float4) for memory reads and writes, processing more data per operation. This strategy greatly reduces the number of memory instructions, improving memory bandwidth utilization. Together with automatic tuning to find the optimal parameters, performance further improves to 19721.0 GFLOPS, 84.8% of cuBLAS.

Warp-level blocking: this increases parallelism and improves the locality of register caching. Performance reaches 21779.3 GFLOPS, 93.7% of cuBLAS.

Current state of the art: using the Tensor Memory Accelerator (TMA) for asynchronous memory I/O and warp-group MMA for asynchronous MMA operations. Quoted from NVIDIA's Hopper webpage:

TMA allows applications to transfer 1D and up to 5D tensors between global memory and shared memory, in both directions, as well as between the shared memory regions of different SMs in the same cluster (refer to [Thread Block Clusters](https://docs.nvidia.com/cuda/hopper-tuning-guide/index.html#thread-block-clusters)).
Additionally, for writes from shared memory to global memory, it allows specifying element-wise reduction operations such as add/min/max as well as bitwise and/or for most common data types. This has several advantages:

- Avoids using registers for moving data between the different memory spaces.
- Avoids using SM instructions for moving data: a single thread can issue large data movement instructions to the TMA unit. The whole block can then continue working on other instructions while the data is in flight and only wait for the data to be consumed when actually necessary.
- Enables users to write warp-specialized code, where specific warps specialize on data movement between the different memory spaces while other warps only work on local data within the SM.

Extra comments

- This implementation is not strictly necessary nowadays with tensor cores, when we have cuBLAS on Hopper.
- The test matrices are small; cuBLAS calls a relatively simple kernel here, while it has many more complex ones.
- The kernels here use a TN layout where matrix A is transposed, but cuBLAS uses an NN layout, so the comparison may not be entirely apples-to-apples.
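To make the "vectorized memory access" step summarized above concrete, here is a minimal sketch (an editor's illustration, not code from the notes or from the original worklog) of a float4 copy kernel. The only requirements are that the pointers are 16-byte aligned and that the element count is a multiple of 4; each thread then moves 128 bits per load and store instead of 32.

// Vectorized copy: each thread moves four floats with a single 128-bit access.
__global__ void copy_float4(const float* __restrict__ in, float* __restrict__ out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // index in units of float4
    if (i * 4 < n) {
        // The reinterpret_cast is valid only if `in` and `out` are 16-byte aligned
        // (cudaMalloc returns sufficiently aligned pointers) and n is a multiple of 4.
        float4 v = reinterpret_cast<const float4*>(in)[i];
        reinterpret_cast<float4*>(out)[i] = v;
    }
}

Launched with n/4 total threads, this issues one wide load and one wide store per thread; the same trick is what lets the matmul kernels summarized above cut the instruction count for loading the A and B tiles.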
7 Step Optimization of Parallel Reduction with CUDA
by Rimika Dhara

In this post, I aim to take a simple yet popular algorithm — Parallel Reduction — and optimize its performance as much as possible. This effort is inspired by an NVIDIA webinar hosted by Mark Harris, from which I'm not only recreating the optimizations but also attempting to simplify the concepts for better understanding. Alongside this article, I have provided a GitHub implementation of these methods. While here I only showcase the final kernels with their methods, my GitHub repository includes more detailed information that will assist you in recreating this work. Let's get started!

Understanding the Algorithm

Let's start by exploring what the parallel reduction algorithm entails. It's a data-parallel primitive that's straightforward to implement in CUDA. To keep it simple, parallel reduction aims to reduce a vector, matrix, or tensor in parallel by leveraging a GPU's thread hierarchy. This reduction is achieved through operations like sum(), min(), max(), or avg() to aggregate and reduce the data. We will be using sum() to reduce our dataset. Despite their simplicity, these operations are versatile and crucial for many applications, and they require heavy optimization to avoid becoming bottlenecks. Although these computations may seem simple, they can be time-consuming if not efficiently handled.

When parallelizing, the algorithm can be thought of as a tree-based approach, spread across each thread block of our GPU. A critical question arises: "How can we communicate partial results between thread blocks?" The most straightforward solution might seem to be Global Synchronization — allow each block to compute, then synchronize them all and continue recursively. However, CUDA does not support global synchronization because it is costly in terms of hardware (HW) and would constrain the programmer to using only a few blocks to avoid deadlocks, thus reducing overall efficiency.

A more practical approach to communicating partial results while computing in each thread block is to decompose our kernel into multiple kernels. Kernel decomposition involves breaking down a large kernel task into smaller, manageable sub-tasks, which can be executed independently across different threads or blocks. This method minimizes hardware and software overhead and allows for more flexible and efficient use of GPU resources, reducing the need for synchronization and improving overall computational speed.

Our Metrics

Our algorithm's performance hinges on two critical metrics: time and bandwidth. These metrics gauge whether our GPU is being fully utilized, essentially measuring if it's achieving peak performance. We aim for GPU peak performance with our metrics reflected in terms of compute (GFLOP/s) and memory (GB/s). To optimize these metrics, we need to focus on two main aspects: data access and computational bottlenecks. This translates to assessing how efficiently we touch memory and where the computation stalls.

REDUCE-0: Interleaved Addressing

This method serves as our base. A very naive approach to parallelizing reduction is to decide on a pattern for accessing the address spaces where our elements are stored, retrieving those elements, combining them by summing, and recursively repeating this process across threads to parallelize our operation. In fact, this is what we will do for the first 3 methods of optimization.
Interleaved addressing means that, in each step, a thread combines its own element with another element a fixed stride away, and the stride doubles from one step to the next. Each thread first loads one element into shared memory. Then, with a stride s that starts at 1 and doubles every iteration, the threads whose index is a multiple of 2s add in the element s positions away; after log2(blockDim.x) steps, the block's sum sits in element 0. Consider an array of 1024 integers with 256 threads per block: the grid covers the array with four blocks, and within each block the 256 loaded values are pairwise combined at stride 1, the 128 survivors are combined at stride 2, and so on until one partial sum per block remains. This keeps the indexing and synchronization simple and has every thread participating in the load phase, but, as we'll see, the "every 2s-th thread" pattern wastes a lot of the block's threads.

// REDUCTION 0 – Interleaved Addressing
__global__ void reduce0(int *g_in_data, int *g_out_data)
{
    extern __shared__ int sdata[];  // stored in the shared memory

    // Each thread loads one element from global onto shared memory
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    sdata[tid] = g_in_data[i];
    __syncthreads();

    // Reduction method -- occurs in shared memory because that's where sdata is stored
    for (unsigned int s = 1; s < blockDim.x; s *= 2) {
        if (tid % (2 * s) == 0) {
            sdata[tid] += sdata[tid + s];
        }
        __syncthreads();
    }

    if (tid == 0) {
        g_out_data[blockIdx.x] = sdata[0];
    }
}

Let's go through our first implementation step by step:

Results

Problems with this method

Although this method is a great foundation for parallel programming, it still has its issues. Let's bring back our metrics and assess where our code might be inefficient in terms of compute and memory. First, let's focus on compute-related problems with our next optimization.

REDUCE-1: Interleaved Addressing 2.0

This method doesn't change much from our previous method. The addressing is the same, but this time we construct our reduction function without the % operator or the divergent condition. By restructuring the index calculation (int index = 2 * s * tid;), REDUCE-1 ensures that each thread consistently performs its operation without checking its position relative to its stride, thereby removing divergence within the warp. This adjustment means all threads in a warp follow the same execution path, significantly improving warp efficiency. The removal of the % operator further enhances performance by avoiding costly modulo operations, which are slow on GPUs due to their reliance on division.
// REDUCTION 1 – Interleaved Addressing without branch divergence and % operation__global__ void reduce1(int *g_in_data, int *g_out_data){ extern __shared__ int sdata[]; // stored in the shared memory // Each thread loading one element from global onto shared memory unsigned int tid = threadIdx.x; unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; sdata[tid] = g_in_data[i]; __syncthreads(); // Reduction method -- occurs in shared memory for(unsigned int s = 1; s < blockDim.x; s *= 2){ // note the stride as s *= 2 : this causes the interleaving addressing int index = 2 * s * tid; // now we don't need a diverging branch from the if condition if (index + s < blockDim.x) { sdata[index] += sdata[index + s]; // s is used to denote the offset that will be combined } __syncthreads(); } if (tid == 0){ g_out_data[blockIdx.x] = sdata[0]; }} Results Problems with this method While REDUCE-1 improves on the computational efficiency and execution coherence over REDUCE-0, it introduces a new problem: shared memory bank conflicts. These conflicts occur when multiple threads attempt to access data from the same memory bank simultaneously. These bank conflicts lead to serialization of what could otherwise be parallel memory accesses. From REDUCE-0 to REDUCE-1, we increased the computational efficiency of our algorithm. However, we did not solve the memory-related issues. In fact, we caused more memory-related issues by switching to strides. It is a bit hard to visualize, but essentially the stride method causes the threads to try and access the same shared memory addresses. REDUCE-0 spread out threads in intervals that acted like “boundaries” and kept thread accesses within those boundaries, reducing the chances of conflicts. But REDUCE-1 relies on strides and removes these boundaries, causing bank conflicts and serialization of processes. Each bank can only service one access per cycle, so when multiple accesses are directed to the same bank, they must be serialized, effectively reducing the throughput of memory operations. This serialization negates some of the performance gains achieved by eliminating warp divergence and can become a significant bottleneck, especially in larger blocks where the probability of bank conflicts increases. Let’s try to solve this problem now. REDUCE-2: Sequential Addressing This method employs a different addressing technique that is more efficient. Instead of threads accessing elements spaced widely apart (interleaved addressing), this method employs sequential addressing where each thread deals with consecutive elements. Let’s break that down. Bringing back our 1024 element example with 256 threads per block, thread 0 would try and access elements 0, 1, 2, 3 instead of 0, 256, 512, 768 which are spaced far apart. Thread 0 combines elements 0 and 1, then element 2, and so on recursively. What this does is take advantage of spatial locality and avoids bank conflicts by being cache efficient. The algorithm is also linear and minimizes the need for synchronization that increases wait times. This change significantly enhances memory access patterns by aligning them more closely with the GPU’s preference for coalesced memory accesses. By accessing adjacent memory locations, REDUCE-2 reduces the likelihood of cache misses and memory bank conflicts, making the memory bandwidth usage more efficient and improving overall performance of the reduction operation. 
// REDUCTION 2 – Sequence Addressing__global__ void reduce2(int *g_in_data, int *g_out_data){ extern __shared__ int sdata[]; // stored in the shared memory // Each thread loading one element from global onto shared memory unsigned int tid = threadIdx.x; unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; sdata[tid] = g_in_data[i]; __syncthreads(); // Reduction method -- occurs in shared memory for(unsigned int s = blockDim.x/2; s > 0; s >>= 1){ // REDUCE2 -- check out the reverse loop above if (tid < s){ // then, we check threadID to do our computation sdata[tid] += sdata[tid + s]; } __syncthreads(); } if (tid == 0){ g_out_data[blockIdx.x] = sdata[0]; }} Let’s dive into the algorithm a bit. The major changes in this method include the replacement of strided indexing with a reversed loop structure coupled with threadID-based indexing. This fundamentally modifies how data is handled during reduction. Results Problems with this method This method is mostly conflict free. At this point, we have employed obvious changes to resolve compute and memory issues. We should now try to make our algorithm smarter and find ways to construct it so we can make it faster. One major problem is that half of the threads are idle in the first loop iteration which is wasteful and underutilizes our GPU’s compute. Although this was the case in our previous techniques, we had bigger fish to fry before getting to idle threads. Going with our 1024 element example, in the first iteration of the loop where s=blockDim.x/2, or s=512 in this case, the condition if (tid < s) restricts active computation to only the first 512 threads of the block. This condition means that while these 512 threads are actively summing pairs of elements (for example, sdata[tid] with sdata[tid + 512]), the remaining 512 threads are idle, contributing nothing to the computation. This pattern of halving the number of active threads in each subsequent iteration continues until the reduction completes; from 512, to 256, then 128, 64, 32 and so on. This rapid halving leads to a significant underutilization of the GPU's capabilities, especially in the initial iterations where only a fraction of the available threads are used. Let’s solve this problem by doing our first computations when we load our data onto shared memory. REDUCE-3: First Add During Load To make use of our idle threads and make our computation smarter, we will do our first computation while we are loading our elements from global memory to shared memory. This will help us load and reduce two elements to one and halve the number of blocks we need to deal with. More concretely put, in our array of 1024 elements with 256 threads, each thread would load the sum of their first two elements onto shared memory (e.g., thread 0 processes elements 0 and 1, thread 1 processes elements 2 and 3, and so forth). Meaning, that we would halve the number of blocks and the length of our shared memory — in this example, to 512. The rest of the code works exactly as it did before in REDUCE-2. This means that our first iteration would still activate 512 threads to start reducing our elements because of s=blockDim.x/2 = 512. Evidently, this would put more threads to work and avoid any slackers! 
// REDUCTION 3 – First Add During Load__global__ void reduce3(int *g_in_data, int *g_out_data){ extern __shared__ int sdata[]; // stored in the shared memory // Each thread loading one element from global onto shared memory unsigned int tid = threadIdx.x; unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x; sdata[tid] = g_in_data[i] + g_in_data[i+blockDim.x]; __syncthreads(); // Reduction method -- occurs in shared memory for(unsigned int s = blockDim.x/2; s > 0; s >>= 1){ // check out the reverse loop above if (tid < s){ // then, we check tid to do our computation sdata[tid] += sdata[tid + s]; } __syncthreads(); } if (tid == 0){ g_out_data[blockIdx.x] = sdata[0]; }} During implementation, we see this method unveil as three major changes to our prior code (changes highlighted in bold). We first do our initial reduction step while loading the elements from global memory to shared memory: sdata[tid] = g_in_data[i] + g_in_data[i+blockDim.x] . Then, we make a two changes to accommodate this change: Result Problems with this method Our current approach works great! But, can we make it faster and smarter? Let’s check in with our two metrics. With around 41 GB/s bandwidth usage on Tesla T4, we are definitely not reaching or exhausting our bandwidth. On the other hand, reduction has low arithmetic intensity, meaning we are not compute bound either. Introducing our new villain… Because we are not bandwidth bound or compute bound, there is one more bottleneck we can still check for: Instruction Overhead. This includes all the operations, or ancillary instructions, the GPU performs that are not directly related to loading data, storing data, or executing the primary arithmetic operations of the reduction. In other words, these include address arithmetic (calculating with address space to load next) and loop overhead (handling loops, loop conditions, and loop iterations). Our strategy for this bottleneck would be Loop Unrolling. REDUCE-4: Unroll Last Warp Let’s discuss what’s happening in REDUCE-3 first to understand the need for this. With 1024 elements example, After the initial loading where each thread loads and adds pairs of elements, 256 threads work on 512 elements. Here, the reduction begins with each thread working on single elements moving forward. This means, // Adding this function to help with unrolling__device__ void warpReduce(volatile int* sdata, int tid){ // the aim is to save all the warps from useless work sdata[tid] += sdata[tid + 32]; sdata[tid] += sdata[tid + 16]; sdata[tid] += sdata[tid + 8]; sdata[tid] += sdata[tid + 4]; sdata[tid] += sdata[tid + 2]; sdata[tid] += sdata[tid + 1];}// REDUCTION 4 – Unroll Last Warp__global__ void reduce4(int *g_in_data, int *g_out_data){ extern __shared__ int sdata[]; // stored in the shared memory // Each thread loading one element from global onto shared memory unsigned int tid = threadIdx.x; unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x; sdata[tid] = g_in_data[i] + g_in_data[i+blockDim.x]; __syncthreads(); // only changing the end limit to stop before s = 32 for(unsigned int s = blockDim.x/2; s > 32; s >>= 1){ // check out the reverse loop above if (tid < s){ // then, we check tid to do our computation sdata[tid] += sdata[tid + s]; } __syncthreads(); } // Adding this to use warpReduce when s = 32 if (tid < 32){ warpReduce(sdata, tid); } if (tid == 0){ g_out_data[blockIdx.x] = sdata[0]; }} The implementation is straightforward enough. 
We stop our loop before s = 32 and call warpReduce, a hand-unrolled __device__ function with six iterations, for the final warp. We also need the volatile keyword on the shared-memory pointer for this implementation to remain correct: it forces each read and write to actually go to shared memory instead of being cached in registers, which matters now that we rely on implicit warp-synchronous behavior instead of __syncthreads().

Result

Problems with this method

There are definitely no problems with this; we're getting great speedup! But why stop our unrolling journey here when we have so many more loops to unroll!

REDUCE-5: Completely Unroll

In order to continue our unrolling, we would need to know the total number of iterations of our loops at compile time. Luckily for us, the block size is capped by the GPU (512 threads on the hardware the original webinar targeted, 1024 on current GPUs) and we tend to stick to power-of-2 block sizes. We can easily unroll for a fixed block size; we just need to stay generic. To help with this, CUDA supports C++ template parameters in device and host functions. Templates in C++ allow us to write flexible, generic programs by letting us define functions or classes with placeholders that are substituted with specific values at compile time. We use this to account for the potential variations in blockSize that change the unrolling requirements.

Depending on the block size, different switch cases are prepared to handle the specific unrolling requirements. This complete unrolling eliminates unnecessary loops and conditions for the majority of the reduction phases, minimizing computational overhead. By compiling different versions of the kernel tailored to specific block sizes (such as 512, 256, and 128), we optimize each variant for its particular scenario, stripping away any unnecessary operations and maximizing both memory and compute resource efficiency.

In this specific implementation, I've chosen to set the blockSize to 256 in the main function, simplifying our approach. However, I've included switch cases for block sizes of 512, 256, and 128 to demonstrate this method's flexibility and to highlight how effectively CUDA can leverage template parameters to enhance performance across different configurations.
// Adding this function to help with unrolling and adding the Templatetemplate <unsigned int blockSize>__device__ void warpReduce(volatile int* sdata, int tid){ if(blockSize >= 64) sdata[tid] += sdata[tid + 32]; if(blockSize >= 32) sdata[tid] += sdata[tid + 16]; if(blockSize >= 16) sdata[tid] += sdata[tid + 8]; if(blockSize >= 8) sdata[tid] += sdata[tid + 4]; if(blockSize >= 4) sdata[tid] += sdata[tid + 2]; if(blockSize >= 2) sdata[tid] += sdata[tid + 1];}// REDUCTION 5 – Completely Unrolltemplate <unsigned int blockSize>__global__ void reduce5(int *g_in_data, int *g_out_data){ extern __shared__ int sdata[]; // stored in the shared memory // Each thread loading one element from global onto shared memory unsigned int tid = threadIdx.x; unsigned int i = blockIdx.x*(blockDim.x*2) + threadIdx.x; sdata[tid] = g_in_data[i] + g_in_data[i+blockDim.x]; __syncthreads(); // Perform reductions in steps, reducing thread synchronization if (blockSize >= 512) { if (tid < 256) { sdata[tid] += sdata[tid + 256]; } __syncthreads(); } if (blockSize >= 256) { if (tid < 128) { sdata[tid] += sdata[tid + 128]; } __syncthreads(); } if (blockSize >= 128) { if (tid < 64) { sdata[tid] += sdata[tid + 64]; } __syncthreads(); } if (tid < 32) warpReduce<blockSize>(sdata, tid); if (tid == 0){ g_out_data[blockIdx.x] = sdata[0]; }} Also, we should change the way our kernel is called to implement unrolling: // Needed for Complete unrolling// Launch Kernel and Synchronize threadsswitch (blockSize) { case 512: reduce6<512><<<num_blocks, 512, 512 * sizeof(int)>>>(dev_input_data, dev_output_data, n); break; case 256: reduce6<256><<<num_blocks, 256, 256 * sizeof(int)>>>(dev_input_data, dev_output_data, n); break; case 128: reduce6<128><<<num_blocks, 128, 128 * sizeof(int)>>>(dev_input_data, dev_output_data, n); break;} The implementation doesn’t change much from REDUCE-4; we simply try to now feed in blockSize as a template parameter that is determined at compile time. Like before, we include if statements to tend to different values of blockSize and switch statements to call kernels based on those values. Result Problems with this method While Reduce5 enhances efficiency by fully unrolling loops for known block sizes, we can’t use this method flexibly and scale it up. Specifically, the full unrolling technique relies heavily on compile-time optimizations that restrict the kernel to fixed block sizes. This approach can lead to inefficiencies in scenarios where the data size does not perfectly match the block configurations, potentially underutilizing GPU resources. Additionally, the complexity of managing multiple versions of the kernel for each block size increases the development overhead and limits dynamic adaptability to varying workloads, making it less practical for general-purpose applications where input sizes can vary greatly. So, let’s try to get inspiration from First-Add-During-Load from REDUCE-3 and try to do as many Adds as possible instead of just the first one. REDUCE-6: Multiple Adds / Threads Transitioning to Reduce6 addresses the rigidity and scalability issues seen in Reduce5 by introducing a more dynamic approach termed “algorithm cascading”. In this method, each thread performs multiple additions within a broader range of block sizes, effectively reducing the dependency on specific block configurations. This flexibility allows the algorithm to adapt more fluidly to varying data sizes, optimizing resource utilization across a wider array of scenarios. 
By combining both sequential and parallel reductions, Reduce6 minimizes latency and maximizes throughput, particularly in environments with high kernel launch overheads and diverse workload sizes. The strategic distribution of work across threads, as per Brent’s theorem, ensures that each thread contributes optimally throughout the reduction process, maintaining cost-efficiency while scaling effectively with the hardware capabilities. For example, rather than each thread processing a single pair of elements, it might process multiple pairs before any synchronization barrier, thereby amortizing the cost of synchronization across more computation and improving the overall performance. Final Optimized Kernel // Adding this function to help with unrolling and adding the Templatetemplate <unsigned int blockSize>__device__ void warpReduce(volatile int* sdata, unsigned int tid){ if(blockSize >= 64) sdata[tid] += sdata[tid + 32]; if(blockSize >= 32) sdata[tid] += sdata[tid + 16]; if(blockSize >= 16) sdata[tid] += sdata[tid + 8]; if(blockSize >= 8) sdata[tid] += sdata[tid + 4]; if(blockSize >= 4) sdata[tid] += sdata[tid + 2]; if(blockSize >= 2) sdata[tid] += sdata[tid + 1];}// REDUCTION 6 – Multiple Adds / Threadstemplate <int blockSize>__global__ void reduce6(int *g_in_data, int *g_out_data, unsigned int n){ extern __shared__ int sdata[]; // stored in the shared memory // Each thread loading one element from global onto shared memory unsigned int tid = threadIdx.x; unsigned int i = blockIdx.x*(blockSize*2) + tid; unsigned int gridSize = blockDim.x * 2 * gridDim.x; sdata[tid] = 0; while(i < n) { sdata[tid] += g_in_data[i] + g_in_data[i + blockSize]; i += gridSize; } __syncthreads(); // Perform reductions in steps, reducing thread synchronization if (blockSize >= 512) { if (tid < 256) { sdata[tid] += sdata[tid + 256]; } __syncthreads(); } if (blockSize >= 256) { if (tid < 128) { sdata[tid] += sdata[tid + 128]; } __syncthreads(); } if (blockSize >= 128) { if (tid < 64) { sdata[tid] += sdata[tid + 64]; } __syncthreads(); } if (tid < 32) warpReduce<blockSize>(sdata, tid); if (tid == 0){ g_out_data[blockIdx.x] = sdata[0]; }} Peep the while loop where each thread performs multiple additions directly in shared memory. This loop is designed to aggregate two data elements per thread for each iteration, effectively halving the number of necessary operations and interactions with global memory. The thread loads data from the global memory, adds it to a previously loaded value, and then jumps forward by the total number of threads times two, ensuring that it processes another pair of elements on the next iteration. This pattern significantly reduces the total amount of data each thread needs to handle at any one time, maximizing the use of available bandwidth and minimizing latency. FINAL RESULTS Comparing it with NVIDIA’s Performance metrics One of the main differences between my implementation and NVIDIA’s is in GPU. For the webinar, they use GeForce 8800, while I used Tesla T4. This made my initial implementation a lot better right away than theirs, because of a more optimized architecture. However, it also left very little space for improvement in speedup. While I am not able to match the dramatic speedups, I am able to showcase continuous optimization and increasing GPU peak performance. Generalizing Optimization Techniques I’m simply going to list down my key takeaways while optimizing a CUDA kernel: I hope this was helpful! 
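For completeness, here is a minimal host-side driver for the final kernel (an illustrative sketch with example sizes, not the exact code from my repository). It shows the two details that are easy to miss: the dynamic shared-memory size passed as the third launch parameter, and the reduced block count that makes the algorithm-cascading loop in reduce6 perform several adds per thread.

// Illustrative launch of reduce6 with algorithm cascading (blockSize = 256).
// Assumes #include <vector>, n a multiple of 2*blockSize (as the kernels above do),
// and dev_input_data / dev_output_data allocated as in the article's host code.
const unsigned int blockSize = 256;
const unsigned int elemsPerThread = 8;   // cascade factor; tune per GPU
unsigned int num_blocks =
    (n + blockSize * 2 * elemsPerThread - 1) / (blockSize * 2 * elemsPerThread);

reduce6<256><<<num_blocks, blockSize, blockSize * sizeof(int)>>>(
    dev_input_data, dev_output_data, n);

// One partial sum per block comes back; finish the last few adds on the CPU
// (or with a second, single-block kernel launch).
std::vector<int> partial(num_blocks);
cudaMemcpy(partial.data(), dev_output_data, num_blocks * sizeof(int),
           cudaMemcpyDeviceToHost);
long long total = 0;
for (unsigned int i = 0; i < num_blocks; ++i) total += partial[i];

Shrinking num_blocks is what turns the while loop inside reduce6 into real work: each thread now walks the input in grid-sized strides, amortizing the cost of the synchronized tree reduction over many more input elements.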
Source: https://developer.download.nvidia.com/assets/cuda/files/reduction.pdf
My GitHub Implementation: https://github.com/rimikadhara67/Parallel-Reduction?tab=readme-ov-file
Understanding CUDA Memory Usage: A Practical Guide
by Hey Amit

Introduction

Purpose: Why Understanding CUDA Memory Usage Matters

Let's face it: when it comes to deep learning or scientific computing, GPUs are like rocket fuel. But here's the thing — no matter how powerful your GPU is, poor memory management can leave you stuck in first gear. CUDA memory is more than just a tool for moving data back and forth; it's the backbone of your computational performance. Efficient memory usage doesn't just save milliseconds; it can save hours, dollars, and even your sanity on a tight project deadline.

If you've ever run into out-of-memory errors or struggled to squeeze more speed out of your GPU, this guide is for you. I'm not going to hand you surface-level tips that you've heard a hundred times. Instead, we'll dive into the nuances of CUDA memory types, debugging, and optimization — things you can actually apply to your real-world projects to gain measurable improvements.

Reader Takeaway

By the end of this guide, you'll not only understand CUDA's memory architecture but also how to use it to unlock the full potential of your GPU. Whether you're optimizing neural networks or accelerating scientific simulations, the insights here will help you avoid costly mistakes and make smarter design choices.

Quick Overview

Here's what we're diving into:

Pro Tip: This might surprise you: inefficient memory usage can make a high-end GPU perform worse than a mid-tier one. Optimizing memory access patterns has been reported to reduce training times by up to 30%. Imagine what that means for your projects — whether it's faster results or slashed cloud bills.

2. CUDA Memory Types and Architecture Overview

Memory Types: What You Need to Know

Think of CUDA memory like a multi-tiered storage system. Each tier has its own strengths and weaknesses, and understanding these differences is key to designing efficient kernels. Let's break it down:

1. Global Memory

Code example: here's a basic kernel where threads read from global memory:

__global__ void globalMemoryExample(float *input, float *output, int size)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < size) {
        output[idx] = input[idx] * 2.0f;  // Simple operation
    }
}

Why this matters: if threads access global memory inefficiently (e.g., uncoalesced access), you're leaving performance on the table.

2. Shared Memory

Real-world application: accelerating matrix multiplication by tiling submatrices into shared memory.
Code Example: __global__ void sharedMemoryExample(float *input, float *output, int N) { __shared__ float tile[BLOCK_SIZE][BLOCK_SIZE]; // Shared memory array int tx = threadIdx.x, ty = threadIdx.y; int row = blockIdx.y * blockDim.y + ty; int col = blockIdx.x * blockDim.x + tx; if (row < N && col < N) { tile[ty][tx] = input[row * N + col]; __syncthreads(); // Synchronize all threads in the block output[row * N + col] = tile[ty][tx] * 2.0f; }} Pro Tip: Always use __syncthreads() when accessing shared memory to prevent race conditions. 3. Local Memory 4. Constant Memory Code Example: __constant__ float kernel[16]; // Constant memory array__global__ void constantMemoryExample(float *input, float *output, int size) { int idx = threadIdx.x + blockIdx.x * blockDim.x; if (idx < size) { output[idx] = input[idx] * kernel[threadIdx.x % 16]; }} 5. Texture and Surface Memory 6. Unified Memory Memory Hierarchy Visualization Here’s the deal: the farther the memory is from your threads, the slower it is. A diagram comparing latencies and bandwidths (e.g., shared memory vs. global memory) will drive this point home. Code Snippet: Performance Comparison To truly understand the difference between memory types, let’s measure their performance in a simple kernel: __global__ void memoryComparison(float *globalData, float *sharedData, int size) { __shared__ float sharedBuffer[BLOCK_SIZE]; int idx = threadIdx.x + blockIdx.x * blockDim.x; // Global memory access if (idx < size) { globalData[idx] *= 2.0f; } // Shared memory access if (threadIdx.x < BLOCK_SIZE) { sharedBuffer[threadIdx.x] = sharedData[threadIdx.x] * 2.0f; __syncthreads(); }} 3. Real-World Project: Optimizing CUDA Memory in Image Processing Context Let’s optimize a real-world problem: image convolution. It’s a common operation in computer vision but often suffers from slow execution due to poor memory handling. Step 1: Set Up the Problem Suppose you’re processing a high-resolution image for edge detection. Each pixel involves a weighted sum of its neighbors using a kernel matrix. Challenges: Step 2: Naive CUDA Implementation Here’s a basic kernel: __global__ void naiveConvolution(float *input, float *output, int width, int height, float *kernel, int kernelSize) { int x = blockIdx.x * blockDim.x + threadIdx.x; int y = blockIdx.y * blockDim.y + threadIdx.y; int halfK = kernelSize / 2; if (x >= halfK && x < width - halfK && y >= halfK && y < height - halfK) { float sum = 0.0f; for (int i = -halfK; i <= halfK; i++) { for (int j = -halfK; j <= halfK; j++) { sum += input[(y + i) * width + (x + j)] * kernel[(i + halfK) * kernelSize + (j + halfK)]; } } output[y * width + x] = sum; }} Profiling the Naive Kernel Use tools like Nsight Systems to measure memory usage and identify inefficiencies. Expect to see: In the next section, we’ll optimize this kernel with shared memory to reduce global memory traffic and improve performance. Stay tuned! 4. Debugging and Profiling CUDA Memory When it comes to CUDA programming, debugging memory issues and profiling performance can feel like untangling a stubborn knot. But let me tell you — if you don’t, that knot can strangle your performance. Here, I’ll show you how to tackle CUDA memory debugging and profiling like a pro. Profiling Tools: Finding Bottlenecks with Nsight Systems and nvprof Before we fix any issues, we need to find them. This is where profiling tools shine. Nsight Systems and Nsight ComputeThese tools are your best friends when profiling CUDA memory. 
They let you analyze memory transfers, kernel execution, and thread synchronization. Let’s walk through a basic workflow: Run your application with Nsight Systems: nsys profile --output=my_profile ./my_cuda_app This command generates a detailed report showing kernel execution timelines and memory transfer patterns. Dive deeper with Nsight Compute:Once you identify a problematic kernel, analyze its memory usage: ncu --set full ./my_cuda_app Nsight Compute provides metrics like memory throughput, shared memory utilization, and L2 cache hit rates. nvprofIf you’re working on older GPUs or prefer a CLI-first approach, nvprof is another powerful tool. nvprof --print-gpu-trace ./my_cuda_app This command outputs a timeline of kernel launches and memory transfers. To isolate memory operations: nvprof — metrics dram_read_throughput,dram_write_throughput ./my_cuda_app Pro Tip: Use these tools early and often during development to catch bottlenecks before they snowball. Debugging Memory Issues Let’s move from profiling to debugging common CUDA memory issues. 1. Out-of-Memory (OOM) Errors You’ve probably hit an OOM error if your training crashes mid-run. These errors often occur when VRAM is exhausted due to excessive memory allocations. Solution: Monitor memory usage with cudaMemGetInfo: size_t freeMem, totalMem;cudaMemGetInfo(&freeMem, &totalMem);std::cout << "Free memory: " << freeMem << ", Total memory: " << totalMem << std::endl; 2. Misaligned Memory Access Misaligned access happens when threads access non-contiguous memory addresses. It can kill performance and lead to subtle bugs. Solution: Ensure data structures are aligned using cudaMallocPitch for 2D arrays: float *d_array;size_t pitch;cudaMallocPitch(&d_array, &pitch, width * sizeof(float), height); 3. Debugging Shared Memory Bank Conflicts Bank conflicts occur when multiple threads access the same memory bank simultaneously, causing serialization. Example of a Problematic Kernel: __global__ void sharedMemoryBankConflict(float *input, float *output, int N) { __shared__ float sharedArray[32]; int idx = threadIdx.x + blockIdx.x * blockDim.x; if (idx < N) { sharedArray[threadIdx.x] = input[idx]; __syncthreads(); output[idx] = sharedArray[threadIdx.x]; }} Here’s the issue: all threads in a warp access consecutive elements in sharedArray, leading to conflicts. How to Debug and Fix It: Use cuda-memcheck to detect conflicts cuda-memcheck ./my_cuda_app Fix the kernel by padding shared memory: __global__ void sharedMemoryNoConflict(float *input, float *output, int N) { __shared__ float sharedArray[32 + 1]; // Add padding to avoid conflicts int idx = threadIdx.x + blockIdx.x * blockDim.x; if (idx < N) { sharedArray[threadIdx.x] = input[idx]; __syncthreads(); output[idx] = sharedArray[threadIdx.x]; }} Padding prevents threads from hitting the same memory bank, resolving conflicts and boosting performance. 5. Optimization Strategies for CUDA Memory Usage Optimizing CUDA memory usage is like fine-tuning a race car. Each tweak shaves off precious milliseconds. Let’s explore some advanced strategies. 1. Memory Coalescing: The Key to Speed Why it matters: Coalesced memory access ensures that threads within a warp access consecutive memory addresses, minimizing latency. 
Example of Uncoalesced Access: __global__ void uncoalescedAccess(float *input, float *output, int N) { int idx = threadIdx.x * blockDim.x + blockIdx.x; if (idx < N) output[idx] = input[idx]; // Poor access pattern} Fixed Coalesced Access: __global__ void coalescedAccess(float *input, float *output, int N) { int idx = threadIdx.x + blockIdx.x * blockDim.x; if (idx < N) output[idx] = input[idx]; // Consecutive access} Benchmark Both Approaches: Use Nsight Compute to compare throughput. You’ll see a stark difference in performance. 2. Shared Memory Optimization with Tiling Tiling involves loading chunks of data into shared memory to reduce redundant global memory accesses. Example: Matrix Multiplication __global__ void tiledMatrixMul(float *A, float *B, float *C, int N) { __shared__ float tileA[BLOCK_SIZE][BLOCK_SIZE]; __shared__ float tileB[BLOCK_SIZE][BLOCK_SIZE]; int tx = threadIdx.x, ty = threadIdx.y; int row = blockIdx.y * blockDim.y + ty; int col = blockIdx.x * blockDim.x + tx; float sum = 0.0f; for (int k = 0; k < N / BLOCK_SIZE; ++k) { tileA[ty][tx] = A[row * N + k * BLOCK_SIZE + tx]; tileB[ty][tx] = B[(k * BLOCK_SIZE + ty) * N + col]; __syncthreads(); for (int i = 0; i < BLOCK_SIZE; ++i) { sum += tileA[ty][i] * tileB[i][tx]; } __syncthreads(); } C[row * N + col] = sum;} Tiling significantly improves memory efficiency by reusing loaded data within the block. 3. Dynamic Parallelism Dynamic parallelism allows child kernels to handle localized computations, optimizing memory usage. Example: Recursive matrix subdivision. Launch child kernels for smaller submatrices, reducing memory overhead. 6. Case Study: Optimizing ResNet Training When training a ResNet, VRAM is your most precious resource. Let’s address common challenges and solutions. Challenges Solutions torch.cuda.empty_cache() # Free unused memory 2. Pinned Memory: Use pinned host memory for faster host-to-device transfers: cudaHostAlloc((void**)&hostBuffer, size, cudaHostAllocDefault);cudaMemcpyAsync(deviceBuffer, hostBuffer, size, cudaMemcpyHostToDevice, stream); 7. Advanced Topics in CUDA Memory As you master CUDA’s memory model, you’ll start craving more advanced tools and techniques to squeeze every ounce of performance from your GPU. This section explores some high-level topics that can elevate your CUDA skills from proficient to expert. Unified Memory in Multi-GPU Systems Real-World Scenario: Imagine you’re training a massive transformer model across multiple GPUs. Each GPU holds a subset of the model and needs to share intermediate results with others. Manually managing data movement can quickly turn into a logistical nightmare. Here’s where Unified Memory shines. What is Unified Memory?Unified Memory provides a single memory address space accessible by both the CPU and GPUs. In multi-GPU setups, it simplifies data sharing because you don’t need to explicitly transfer data between devices. 
Example:

__global__ void unifiedMemoryKernel(float *data, int size)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < size) {
        data[idx] *= 2.0f;
    }
}

int main()
{
    float *data;
    size_t size = 1024 * sizeof(float);

    // Allocate Unified Memory
    cudaMallocManaged(&data, size);

    // Initialize data
    for (int i = 0; i < 1024; ++i) {
        data[i] = i;
    }

    // Launch kernel
    unifiedMemoryKernel<<<32, 32>>>(data, 1024);
    cudaDeviceSynchronize();

    // Free Unified Memory
    cudaFree(data);
    return 0;
}

Caveats: while Unified Memory simplifies programming, it comes with trade-offs.

Pro Tip: Use cudaMemAdvise to provide memory access hints, reducing page thrashing in multi-GPU systems.

cudaMemAdvise(data, size, cudaMemAdviseSetPreferredLocation, device_id);

Asynchronous Memory Transfers

Here's the deal: in CUDA, memory transfers between the host and device can often block computation. But by overlapping memory transfers with computation, you can dramatically speed up your pipeline.

How it works: CUDA streams allow you to execute memory transfers asynchronously, so data is being moved while the GPU crunches numbers.

Code example:

cudaStream_t stream;
cudaStreamCreate(&stream);

float *hostData, *deviceData;
cudaMallocHost(&hostData, size);   // Pinned memory
cudaMalloc(&deviceData, size);

// Asynchronous memory copy
cudaMemcpyAsync(deviceData, hostData, size, cudaMemcpyHostToDevice, stream);

// Launch kernel on the same stream
myKernel<<<grid, block, 0, stream>>>(deviceData);

// Synchronize the stream
cudaStreamSynchronize(stream);

// Free resources
cudaStreamDestroy(stream);
cudaFree(deviceData);
cudaFreeHost(hostData);

Why it matters: by overlapping transfers and computation, you can achieve a significant performance boost, especially in data-intensive workflows.

Persistent Buffers with Stream-Ordered Allocation (CUDA 11.x+)

CUDA 11.2 introduced stream-ordered allocation (cudaMallocAsync / cudaFreeAsync), which addresses scenarios where you need long-lived data storage for kernels. Think of it as a way to keep your data "warm" across kernel launches, reducing allocation and initialization costs.

Use case: storing intermediate results for iterative algorithms like conjugate gradient solvers.

Code example:

cudaDeviceEnablePeerAccess(1, 0);  // Enable peer access between devices

void *persistentBuffer;
cudaMallocAsync(&persistentBuffer, size, 0);

// Use persistentBuffer in multiple kernel launches
kernelA<<<grid, block>>>(persistentBuffer);
kernelB<<<grid, block>>>(persistentBuffer);

cudaFreeAsync(persistentBuffer, 0);  // Free when done

Pro Tip: Combine persistent buffers with asynchronous operations for high-efficiency pipelines.

8. Best Practices for CUDA Memory Management

Managing CUDA memory is like managing your finances — it's all about discipline and avoiding costly mistakes. Here's a checklist to keep your memory usage efficient and error-free.

Checklist for Efficient Memory Usage

Common Pitfalls to Avoid

9. Conclusion

Let's wrap this up. CUDA memory isn't just a technical detail — it's the foundation of high-performance GPU computing. By understanding the nuances of memory types, debugging tools, and optimization strategies, you can transform your code into a lean, efficient powerhouse.

Key Insights:

Call to Action: Now it's your turn. Apply these techniques to your projects and measure the difference. I'd love to hear about the performance gains you achieve — drop a comment or share your benchmarks!
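One loose end from section 3 above: the guide promises a shared-memory version of the convolution kernel but does not show it. Below is a minimal sketch of what that optimization typically looks like (an editor's illustration with an assumed 16×16 tile and a small fixed kernel radius; it is not the author's implementation). Each block stages its input tile, plus a halo of KRADIUS pixels on every side, into shared memory once, so the inner loops read from shared memory instead of issuing redundant global loads.

#define TILE 16
#define KRADIUS 1   // assumes a (2*KRADIUS+1) x (2*KRADIUS+1) kernel, e.g. 3x3

__global__ void tiledConvolution(const float *input, float *output,
                                 int width, int height, const float *kern)
{
    __shared__ float tile[TILE + 2 * KRADIUS][TILE + 2 * KRADIUS];

    int tx = threadIdx.x, ty = threadIdx.y;
    int x = blockIdx.x * TILE + tx;   // output pixel this thread produces
    int y = blockIdx.y * TILE + ty;

    // Cooperatively load the tile plus its halo; each thread may load several pixels.
    for (int dy = ty; dy < TILE + 2 * KRADIUS; dy += blockDim.y) {
        for (int dx = tx; dx < TILE + 2 * KRADIUS; dx += blockDim.x) {
            int gx = blockIdx.x * TILE + dx - KRADIUS;
            int gy = blockIdx.y * TILE + dy - KRADIUS;
            bool inside = (gx >= 0 && gx < width && gy >= 0 && gy < height);
            tile[dy][dx] = inside ? input[gy * width + gx] : 0.0f;
        }
    }
    __syncthreads();

    if (x < width && y < height) {
        float sum = 0.0f;
        for (int i = -KRADIUS; i <= KRADIUS; i++)
            for (int j = -KRADIUS; j <= KRADIUS; j++)
                sum += tile[ty + KRADIUS + i][tx + KRADIUS + j] *
                       kern[(i + KRADIUS) * (2 * KRADIUS + 1) + (j + KRADIUS)];
        output[y * width + x] = sum;
    }
}

Launched with dim3 block(TILE, TILE) and a grid of ceil(width/TILE) by ceil(height/TILE) blocks, every input pixel is read from global memory roughly once per block instead of once per output pixel that touches it. Border pixels are zero-padded here rather than skipped as in the naive version, which is the main behavioral difference to keep in mind when comparing results.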
Decoding Attention: LLM Inference Optimization
How to optimize MHA in the decoding stage of LLM inference?
by Bruce-Lee-LY

1 Background

Thanks to flash-attention's accelerated optimization of MHA (Multi Head Attention) calculations, the performance of LLM training and inference has been greatly improved. Tri Dao wrote Flash Decoding on top of Flash Attention v2 specifically to optimize inference. It mainly changes the block partitioning: multiple blocks process the same attention head, the KV cache is loaded in parallel, and a final single kernel rescales and merges the partial results. Therefore, for small batches and very long sequences, the inference performance of Flash Decoding is greatly improved. However, when Flash Attention is applied to LLM inference, some problems still remain.

Since seq_q in the decoding stage of LLM inference is always 1, the calculation of each attention head in MHA simplifies to two HGEMVs and one softmax, as shown in the schematic diagram below. Note that on an RTX 3090, with FP32 multiply-accumulate, Tensor Core FP16 compute throughput is only 2x that of CUDA Core FP32. In this situation, a handwritten CUDA Core kernel for decoding MHA can, while preserving accuracy, achieve real benefits in both latency and hardware utilization. On the other hand, a handwritten decoding MHA kernel not only makes it easier to add KV cache quantization to improve the throughput of LLM inference, thereby reducing the cost of inference, but can also support more attention variants (such as ALiBi).

2 Result

This article mainly refers to ppl.llm.kernel.cuda and flash-attention, and uses CUDA Cores to optimize the performance of MHA in the decoding stage of LLM inference. Currently, in some inference decoding scenarios, the performance of Decoding Attention is better than Flash Decoding (Flash Attention) and FlashInfer. Decoding Attention supports GQA (Group Query Attention) / MQA (Multi Query Attention) and ALiBi (Attention with Linear Biases) inference scenarios and supports both FP16 and BF16 data types. Decoding Attention provides a C++ API and a Python API, and the code is open source in decoding_attention.

2.1 Test Conditions

2.2 Equipment Specifications

The device specifications of the RTX 3090 are as follows.

2.3 RTX3090

(1) Seq Len

The performance of Decoding Attention is better when the sequence length is below 1536, while the performance of Flash Decoding (Flash Attention) and FlashInfer is better when the sequence length is above 1536.

(2) Batch Size

Regardless of batch size, Decoding Attention has better performance than Flash Decoding (Flash Attention) and FlashInfer.

3 Decoding Attention

As mentioned above, seq_q in the decoding stage of LLM inference is always 1, so the calculation of each attention head in MHA simplifies to two HGEMVs and one softmax. Decoding Attention divides blocks by batch and head. The kernel has three main parts: HGEMV (S = Q * K^T), softmax (P = Softmax(S)), and HGEMV (O = P * V). The source code is in decoding_attention.

3.1 HGEMV (S = Q * K^T)

Each group processes one or more seqlen_k, and the calculation process is consistent with HGEMV; refer to cuda_hgemv.
// S = Q * K^T T RQ[thread_elem_nums];#pragma unroll for (size_t i = 0; i < thread_iters; ++i) { *(int4 *)(&RQ[i * thread_copy_elem_nums]) = *(int4 *)(&q_ptr[binfo.q_offset(params.q_row_stride, params.q_head_stride, (i * threads_per_group + group_lane_id) * thread_copy_elem_nums)]); } extern __shared__ float S_smem[]; float S_max = -std::numeric_limits<float>::max();#pragma unroll for (size_t base_seq_k = warp_id * groups_per_warp; base_seq_k < binfo.actual_seq_k; base_seq_k += groups_per_block) { size_t seq_k = base_seq_k + group_id; T RK[thread_elem_nums]; float acc = 0.0; if (seq_k < binfo.actual_seq_k) {#pragma unroll for (size_t i = 0; i < thread_iters; ++i) { *(int4 *)(&RK[i * thread_copy_elem_nums]) = *(int4 *)(&k_ptr[binfo.k_offset(seq_k, params.k_row_stride, params.k_head_stride, (i * threads_per_group + group_lane_id) * thread_copy_elem_nums)]); }#pragma unroll for (size_t i = 0; i < thread_elem_nums; ++i) { if constexpr (std::is_same_v<T, half>) { acc += (__half2float(RQ[i]) * __half2float(RK[i])); } else { acc += (__bfloat162float(RQ[i]) * __bfloat162float(RK[i])); } } }#pragma unroll for (size_t i = threads_per_group / 2; i >= 1; i /= 2) { acc += __shfl_xor_sync(shfl_mask, acc, i); } if (group_lane_id == 0 && seq_k < binfo.actual_seq_k) { acc *= params.scale_softmax; if (IsAlibi) { acc += (binfo.h_slope * (static_cast<int>(seq_k) - binfo.actual_seq_q - binfo.row_shift)); } S_smem[seq_k] = acc; S_max = fmaxf(acc, S_max); } } 3.2 Softmax(P = Softmax(S)) First, reduce is performed based on the maximum value of S in each group calculated in the previous step to obtain the maximum value of S in a row, and then the softmax corresponding to each seqlen_k is calculated on S_smem. // P = Softmax(S) __shared__ float softmax_smem[warps_per_block];#pragma unroll for (size_t i = warp_size / 2; i >= 1; i /= 2) { S_max = fmaxf(S_max, __shfl_xor_sync(shfl_mask, S_max, i)); } if (lane_id == 0) { softmax_smem[warp_id] = S_max; } __syncthreads(); if (lane_id < warps_per_block) { S_max = softmax_smem[lane_id]; } else { S_max = -std::numeric_limits<float>::max(); }#pragma unroll for (size_t i = warps_per_block / 2; i >= 1; i /= 2) { S_max = fmaxf(S_max, __shfl_xor_sync(shfl_mask, S_max, i)); } S_max = __shfl_sync(shfl_mask, S_max, 0); float exp_sum = 0.0;#pragma unroll for (size_t seq_k = threadIdx.x; seq_k < binfo.actual_seq_k; seq_k += threads_per_block) { S_smem[seq_k] -= S_max; S_smem[seq_k] = exp(S_smem[seq_k]); exp_sum += S_smem[seq_k]; }#pragma unroll for (size_t i = warp_size / 2; i >= 1; i /= 2) { exp_sum += __shfl_xor_sync(shfl_mask, exp_sum, i); } if (lane_id == 0) { softmax_smem[warp_id] = exp_sum; } __syncthreads(); if (lane_id < warps_per_block) { exp_sum = softmax_smem[lane_id]; }#pragma unroll for (size_t i = warps_per_block / 2; i >= 1; i /= 2) { exp_sum += __shfl_xor_sync(shfl_mask, exp_sum, i); } exp_sum = __shfl_sync(shfl_mask, exp_sum, 0);#pragma unroll for (size_t seq_k = threadIdx.x; seq_k < binfo.actual_seq_k; seq_k += threads_per_block) { S_smem[seq_k] /= exp_sum; } __syncthreads(); 3.3 HGEMV(O = P * V) Due to the particularity of V matrix storage, each group here calculates the outer product of each row or multiple rows in V, and then Reduce Sum gets the final result. 
// O = P * V T RV[thread_elem_nums]; float RO[thread_elem_nums]; memset(RO, 0, sizeof(RO));#pragma unroll for (size_t base_seq_k = warp_id * groups_per_warp; base_seq_k < binfo.actual_seq_k; base_seq_k += groups_per_block) { size_t seq_k = base_seq_k + group_id; if (seq_k < binfo.actual_seq_k) {#pragma unroll for (size_t i = 0; i < thread_iters; ++i) { *(int4 *)(&RV[i * thread_copy_elem_nums]) = *(int4 *)(&v_ptr[binfo.k_offset(seq_k, params.v_row_stride, params.v_head_stride, (i * threads_per_group + group_lane_id) * thread_copy_elem_nums)]); }#pragma unroll for (size_t i = 0; i < thread_elem_nums; ++i) { if constexpr (std::is_same_v<T, half>) { RO[i] += (S_smem[seq_k] * __half2float(RV[i])); } else { RO[i] += (S_smem[seq_k] * __bfloat162float(RV[i])); } } } }#pragma unroll for (size_t i = 0; i < thread_elem_nums; ++i) {#pragma unroll for (size_t j = threads_per_group; j <= warp_size / 2; j *= 2) { RO[i] += __shfl_xor_sync(shfl_mask, RO[i], j); } } __syncthreads();#pragma unroll for (size_t i = threadIdx.x; i < head_dim; i += threads_per_block) { S_smem[i] = 0.0; } __syncthreads(); if (lane_id < threads_per_group) {#pragma unroll for (size_t i = 0; i < thread_iters; ++i) {#pragma unroll for (size_t j = 0; j < thread_copy_elem_nums; ++j) { atomicAdd(S_smem + (i * threads_per_group + lane_id) * thread_copy_elem_nums + j, RO[i * thread_copy_elem_nums + j]); } } } __syncthreads();#pragma unroll for (size_t i = threadIdx.x; i < head_dim; i += threads_per_block) { if constexpr (std::is_same_v<T, half>) { o_ptr[binfo.q_offset(params.o_row_stride, params.o_head_stride, i)] = __float2half(S_smem[i]); } else { o_ptr[binfo.q_offset(params.o_row_stride, params.o_head_stride, i)] = __float2bfloat16(S_smem[i]); } } 4 Other 4.1 Next Plan -- -- Written by Bruce-Lee-LY LLM Infer, AI Infra, CUDA No responses yet Help Status About Careers Press Blog Privacy Terms Text to speech Teams
Sign up Sign in Sign up Sign in Introduction to GPU Programming with Python & CUDA Sequential programming is really hard, parallel programming is a step beyond that — Andrew S. Tanenbaum Geminae Stellae 💫 Follow -- 2 Listen Share Keywords Before diving into the topic, we would like to define some concepts related to parallel computing: CPU: The Central Processing Unit, is the processor installed at the heart of a computer. It receives information, processes it and distributes it to the screen or to the other units connected to it such as graphics card, sound card, printer, scanner, etc. GPU: The Graphics Processing Unit (GPU) is a specialized processing unit, mainly designed to process images and videos. GPUs process data in order to render images on an output device, such as a screen. However, modern GPUs are general purpose computing devices that can be used to perform any kind of computation. CuPy: A GPU array library that implements a subset of the NumPy and SciPy interfaces. It is a convenient tool for those familiar with NumPy to explore the power of GPUs, without the need to write code in a GPU programming language like CUDA & OpenCL. Host & Device: The host is often used to refer to the the CPU, while device is used to refer to the GPU. Thread: The smallest execution unit in a CUDA program. Block: A set of CUDA threads sharing resources. Grid: A set of blocks launched in one kernel. Kernel: A large parallel loop, where each thread executes one iteration. Programming environment The aim of this article is to learn how to write optimized code on GPU using both CUDA & CuPy. For this, we will be using either Jupyter Notebook, a programming environment that runs in a web browser. Or Google Colab, in case you do not have any GPU available on your laptop. Note: Don’t forget to change the runtime type from None to GPU or CPU GPU vs CPU CPUs are known of their capability to perform any general purpose computation in a high speed, so what’s the point behind using GPUs for the same purpose? Well, simply because GPUs are designed to process images & videos. CPUs cannot handle parallel processing, therefore large tasks that require millions of similar operations will choke a CPU’s capacity to process data. GPUs, on the other hand, are massively parallel devices that can execute thousands of threads at the same time. That being said, you might still wonder why anyone would prefer GPU over CPU to compute something that can be easily computed on a CPU. Let’s answer this question with a simple example: Sorting an array We will be creating a large array in python, then sorting it with the sort() function of NumPy on the CPU. import numpy as npsize = 8192 * 8192array = np.random.random(size).astype(np.float32) Tip: To compute the time of the sorting we can run the following command %timeit -n 1 -r 1 result = np.sort(array) So, it took 7.84 s to sort the array. Now let’s perform the same sorting, but this time on a GPU. We will be using the sort() function from CuPy import cupy as cparray_gpu = cp.asarray(array)%timeit -n 7 -r 1 result_gpu = cp.sort(array_gpu) 36.6 ms, that’s faster! Speedup From the results, we noticed that sorting the array with CuPy, i.e. using the GPU, is faster than with NumPy, using the CPU. To quantify the speed, we will compute the speedup of using CuPy over NumPy. The speedup is defined as the ratio between the sequential (NumPy in our case) and parallel (CuPy in our case) execution times. 
Note: Execution times must be in the same unit (milliseconds or seconds) speedup = 7.84 / 0.0366print(speedup) We can therefore say that by only using the GPU with CuPy to sort an array of size 8192 * 8192 we achieved a performance improvement of 214 times. Convolution in Python We start by generating an image on the host using Python and NumPy. Basically, it’s an image filled with zeros, except for isolated pixels with value one, on a regular grid. The plan is to convolve it with a Bicubic interpolation. Then record the time it takes to execute this convolution on the host. import numpy as np# Construct an image with repeated delta functionsdeltas = np.zeros((4096, 4096))deltas[8::16,8::16] = 1 Visualization import pylab as pyl# Necessary command to render a matplotlib image in a Jupyter notebook.%matplotlib inline# Display the imagepyl.imshow(deltas[0:100, 0:100])pyl.show() The computation we want to perform on this image is a convolution, first on the host then on the device to compare the results and execution times. We can think of an image as a matrix of colour values, so when we convolve that image with a filter, we generate a new matrix with different colour values. In our example, we will convolve our image with a Bicubic interpolation shown in the illustrations: The BICUBIC Interpolation method is known for its ideal combination of processing time and output quality. That’s why it’s often used in image editing programs like Adobe Photoshop. Convolution on a CPU — Scipy Let us first construct the Bicubic, and then display it. x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))dst = np.sqrt(x*x + y*y)sigma = 1muu = 0.000pyl.imshow(dst,interpolation='bicubic')pyl.show() Now we are ready to do the convolution on the host. We do not have to write this convolution function ourselves, as it is very conveniently provided by SciPy. Let us also record the time it takes to perform this convolution and inspect the top left corner of the convolved image. from scipy.signal import convolve2d as convolve2d_cpuconvolved_image_using_CPU = convolve2d_cpu(deltas, dst)pyl.imshow(convolved_image_using_CPU[0:100, 0:100])pyl.show()%timeit -n 1 -r 1 convolve2d_cpu(deltas, dst) Convolution on a GPU — CuPy Although there is a physical connection — cable — between the CPU and the GPU, they do not share the same memory space. This image depicts the different components of CPU and GPU and how they are connected: This means that an array created using NumPy is physically located into the main memory of the host, and thus visible to the CPU but not the GPU. It is not yet in GPU memory, so we need to copy the input image and the convolving function to the GPU, before we can execute any code on it. The arrays deltas and dst are in the host’s RAM. Let’s copy them to GPU memory using CuPy. import cupy as cpdeltas_gpu = cp.asarray(deltas)dst_gpu = cp.asarray(dst) Execution of the convolution: from cupyx.scipy.signal import convolve2d as convolve2d_gpuconvolved_image_using_GPU = convolve2d_gpu(deltas_gpu, dst_gpu)%timeit -n 7 -r 1 convolved_image_using_GPU = convolve2d_gpu(deltas_gpu, dst_gpu) That’s impressive! You see, there is a far cry between the speedup we got when performing a normal computation and the one we got in image redering! That’s the where the power of GPUs comes into play! Performing NumPy routines on the GPU We saw above that we cannot execute routines from the cupyx library directly on NumPy arrays. In fact we need to first transfer the data from host to device memory. 
convolve2d_gpu(deltas, dst)

Vice versa, if we try to execute a regular SciPy routine (i.e. one designed to run on the CPU) on a CuPy array, we will also encounter an error.

convolve2d_cpu(deltas_gpu, dst_gpu)

Transfer function
This function bundles the three main steps of any CUDA program execution: transfer the data to the device, compute on the device, and transfer the result back to the host:

def transfer_compute_transferback():
    deltas_gpu = cp.asarray(deltas)
    dst_gpu = cp.asarray(dst)
    convolved_image_using_GPU = convolve2d_gpu(deltas_gpu, dst_gpu)
    convolved_image_using_GPU_copied_to_host = cp.asnumpy(convolved_image_using_GPU)

SciPy routines cannot take CuPy arrays as input. Contrary to SciPy, NumPy accepts CuPy arrays, i.e. arrays that exist in GPU memory, as input. Something we can do is to execute a linear (1D) convolution with NumPy instead of SciPy. To generate input for a linear convolution, we can flatten our image from 2D to 1D (using ravel()), but we also need a 1D kernel. For the latter we will take the diagonal elements of our 2D dst kernel.

deltas_1d = deltas.ravel()
dst_1d = dst.diagonal()
%timeit -n 1 -r 1 np.convolve(deltas_1d, dst_1d)

We have performed a regular linear convolution on the CPU; now we will transfer the 1D arrays to the GPU and use the same NumPy routine to do the convolution.

deltas_1d_gpu = cp.asarray(deltas_1d)
dst_1d_gpu = cp.asarray(dst_1d)
%timeit -n 7 -r 1 np.convolve(deltas_1d_gpu, dst_1d_gpu)

The linear convolution is actually performed on the GPU, which is shown by a nice speedup!

Creating a GPU Kernel

def vector_add(A, B, C, size):
    for item in range(0, size):
        C[item] = A[item] + B[item]
    return C

In this code, each iteration of the for loop is independent of the others. So even if we reorder the iterations, or compute each iteration in parallel or on a different device, we will still end up with the same output. This kind of program is called naturally parallel, and such programs are the best candidates to be executed on a GPU. We could use CuPy to run something similar to our vector_add function on a GPU. But our aim is to write code that can be executed by GPUs, hence we will be using CUDA. The CUDA-C language is a GPU programming language and API developed by NVIDIA. It is mostly equivalent to C/C++, with some special keywords, built-in variables, and functions. Let's start coding CUDA by writing a small kernel — a GPU program that computes the same function that we just wrote in Python.
extern "C"__global__ void vector_add(const float * A, const float * B, float * C, const int size){ int item = threadIdx.x; C[item] = A[item] + B[item];} In order to compile the code and manage the GPU in Python, we will use the interface provided by CuPy import cupy# size of the vectorssize = 1024# allocating and populating the vectorsa_gpu = cupy.random.rand(size, dtype=cupy.float32)b_gpu = cupy.random.rand(size, dtype=cupy.float32)c_gpu = cupy.zeros(size, dtype=cupy.float32)# CUDA vector_addvector_add_cuda_code = r'''extern "C"__global__ void vector_add(const float * A, const float * B, float * C, const int size){ int item = threadIdx.x; C[item] = A[item] + B[item];}'''vector_add_gpu = cupy.RawKernel(vector_add_cuda_code, "vector_add")vector_add_gpu((1, 1, 1), (size, 1, 1), (a_gpu, b_gpu, c_gpu, size)) To confirm that the CUDA code does exactly what we want, let’s run the sequential Python code below which compares the results, it will return ‘Correct results!” if they are similar: import numpya_cpu = cupy.asnumpy(a_gpu)b_cpu = cupy.asnumpy(b_gpu)c_cpu = numpy.zeros(size, dtype=numpy.float32)vector_add(a_cpu, b_cpu, c_cpu, size)# testif numpy.allclose(c_cpu, c_gpu): print("Correct results!") Understanding the CUDA Code This is the definition of the CUDA vector_add function: __global__ void vector_add(const float * A, const float * B, float * C, const int size) Where: __global__ is a specefic CUDA keyword that identifies the execution space of our kernel. It allows the function to both run on the GPU, and to be called from the host (in our case the Python interpreter running on the CPU). Other execution space identifiers in CUDA-C are: __host__which identifies a function that can only be run and called from the host — CPU __device__identifies functions that can inly be run and called from the device — GPU Python item vs CUDA item int item = threadIdx.x;C[item] = A[item] + B[item]; In Python, the content of item is the result of the range function. However, in CUDA we are reading a special variable which is threadIdx, it is a triplet that represents the id of a thread inside a 3D CUDA block. In this particular case we are working on a one dimensional vector, and therefore only interested in the first dimension, that is stored in the first field of this variable. Computing Hierarchy in CUDA In the previous example we had a small vector of size 1024, where each of the 1024 generated threads was working on one of the elements. If we change the size of the vector to a larger number, such as 2048, we will get an Error, and the reason behind it is that most of GPUs accept a maximum block size of 1024 threads. Let’s get back to the parameters of our function: vector_add_gpu((1, 1, 1), (size, 1, 1), (a_gpu, b_gpu, c_gpu, size)) The first triplet represents the size of the CUDA grid (i.e., Number of Blocks). The second represents the size of the CUDA block (i.e., Number of Threads). The grid is a three-dimensional structure in the CUDA programming model and it represents the organization of a whole kernel execution. A grid is made of one or more independent blocks. In our case, the grid’s size is (1,1,1) which means it is composed of a single block. The block’s size is (size, 1, 1), where size is the number of threads. Blocks are independent, whereas threads composing a block are not, because they can share resources and communicate with each other. 
To solve the issue of the limited number of threads per block, we need, in our case, to divide 2048 by 2 so that we get two blocks of 1024 threads in our grid. This means that the new grid size will change from (1,1,1) to (2,1,1), and the block size from (size,1,1) to (size // 2, 1, 1). We already introduced the special variable threadIdx when introducing the vector_add CUDA function; it contains a triplet specifying the coordinates of a thread in a thread block. There are other important CUDA variables that we need to understand. Having the same structure as threadIdx, these variables are:

blockDim: the size of a block, i.e. the number of threads per dimension
blockIdx: the ID of a block in the grid
gridDim: the size of the grid, i.e. the number of blocks per dimension

Before re-running the code (after increasing the vector size), we also need to modify the kernel code, specifically how the item value is computed, so that grid and block can adapt to vectors of arbitrary size (a sketch of the adapted kernel follows after this article's conclusion). On the host side, we can replace the Python code that calls vector_add_gpu with the following:

import math
threads_per_block = 1024
grid_size = (int(math.ceil(size / threads_per_block)), 1, 1)
block_size = (threads_per_block, 1, 1)
vector_add_gpu(grid_size, block_size, (a_gpu, b_gpu, c_gpu, size))

Conclusion
GPUs have grown from curious hardware into the dominant processors that power the world's fastest supercomputers. In the foreseeable future, GPUs and parallel computing will clearly play a huge role in computational science, and it is important for researchers and developers to consider how to adapt to these kinds of architectures.

Link to the code

Note: This article is a summary of what we learned from the course GPU Programming

Written with ❤ By Geminae Stellae (Ihssene Brahimi & Assala benmalek)
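Picking up the kernel change mentioned above: a minimal sketch, using the same vector_add signature from earlier in the article, of how the per-thread index can be computed from the block and thread coordinates, with a bounds check for the extra threads launched when size is not a multiple of the block size.

extern "C"
__global__ void vector_add(const float * A, const float * B, float * C, const int size)
{
    // Global index: block offset plus the thread's position inside the block.
    int item = threadIdx.x + blockIdx.x * blockDim.x;
    // Guard against out-of-range threads when size is not a multiple of blockDim.x.
    if (item < size)
    {
        C[item] = A[item] + B[item];
    }
}

With this guard in place, the flexible grid_size/block_size launch shown above works for vectors of any length.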
Sign up Sign in Sign up Sign in Reproducing GPT-2 (124M): Key Insights and Techniques Tushar Madaan Follow -- 1 Listen Share In this blog, we’ll explore the key details of reproducing the GPT-2 (124M) model, focusing on its architecture, training process, and performance optimizations specifically w.r.t GPU awareness addition to covering a few innovations that add training stability. The goal is to understand how to efficiently implement and train this transformer-based language model in a stable manner while leveraging modern tools like PyTorch and CUDA for better way faster performance on a single GPU. The code is not my original work but a faithful reproduction of Andrej Karpathy’s lets reproduce GPT2 video and some of my own research to understand a few concepts better. Full code to follow soon → here. What this blog is not: This isnt a 101 explainer on transformers. If you’re unfamiliar with the basics of transformer architecture and want a foundational understanding, I recommend watching working through Andrej Karpthy’s excellent Let’s build GPT: from scratch, in code, spelled out and Umar Jamil’s YouTube video on transformers both of which further clarified my understanding and are fantastic resources to go through first. However, If you feel you have a rough understanding of transformers and attention mechanism then continue to read on.. What’s not covered? I am also not going to cover tokenization schemes as it is an elaborate topic probably deserving a blog of its own. Tokenization also does not lend itself well to the current theme of training stability and cuda optimizations, since its a preprocessing CPU native implementation — it does indirectly impact architecture design (vocab size etc.) but still not relevant to my current focus. Why reproduce GPT2? Our focus here is on the specific decisions made in GPT-2’s design and training, especially around ensuring faster convergence, training stability in model training and improving performance from a better understanding of how GPUs and CUDA kernels work. GPT-2 Architecture Overview GPT-2 is a decoder-only transformer with 124 million parameters, specifically the smallest model in the GPT-2 family. GPT2 is not very different from a typical decoder only transformer so we wont go over the architecture core components like position encodings, self attention etc. in a ton of detail but hit a few salient aspects that make GPT2 unique. That said, from an algorithmic point of view, GPT2 does not implement any special optimizations in the attention layer itself to tackle quadratic complexity (V tokens * V token possible edges on a attention graph) and its memory bound nature. There were also later papers like reformer, longformer etc are addressing this bottleneck with sparse attention schemes. Later models like Mistral 7B have implemented sliding window attention as covered in longformer paper. However GPT does leverage Flash Attention with is a form of hardware optimization on GPUs to make the calculation run with faster with a smaller memory footprint — more on that later in the blog. Weight Sharing: GPT-2 ties the token embeddings and the final output logits, which reduces the parameter count by about 30%. This design makes sense as an inductive bias because it enforces consistency between the input embeddings and output probabilities. 
The similarity between two tokens should be reflected in both how they are embedded and the logits that predict their likelihoods, ensuring that semantically similar tokens are correlated between in teh input layer and the final output logit layer. Adding this inductive bias reduces parameters (about 30% of total) therefore reducing computational cost and leading to faster convergece. By leveraging these architectural choices, GPT-2 achieves a good balance between performance and parameter efficiency, while causal self-attention ensures that the text is generated in a meaningful, sequential manner. The Training Process Training GPT-2 involves careful data preparation, loss function configuration, and optimizer selection. Here’s what the process looks like: A quick GPU architecture & memory hierarchy primer Training a model like GPT-2 on modern GPUs involves understanding the underlying GPU architecture and how memory hierarchies play a role in performance. Here’s an overview of how these elements interact: Performance Improvements in Reference to GPU Architecture Understanding how GPUs manage their memory hierarchy and how Tensor Cores accelerate matrix operations is essential to efficiently training large models like GPT-2. These architectural optimizations are key to achieving better performance without excessive hardware demands. Optimization and Performance Improvements Given the large size of GPT-2, several performance optimizations are essential to train it efficiently, especially on GPUs. Key optimizations include: Reproducing GPT-2 (124M) offers valuable insights not only for model pre-training but also for optimizing models in areas like inference, fine-tuning, and deployment. For AI engineers working on real-world applications, the techniques discussed — such as FlashAttention, mixed-precision training, weight sharing, and learning rate schedulers — are essential for creating efficient, scalable models. These techniques are particularly important in knowledge distillation, where a smaller student model is trained to mimic the behavior of a larger teacher model. For instance, learning rate schedulers ensure that the student model learns smoothly and avoids destabilizing updates, whilemixed-precision training speeds up the distillation process by reducing memory usage and computation time. This allows for faster and more resource-efficient training of the student model. FlashAttention further reduces memory overhead, allowing the student model to handle longer sequences and more data during training without increasing memory requirements. This is crucial in making both training and inference faster and more efficient, which is often the goal of distilling large models for deployment. By leveraging these techniques, AI engineers can optimize models for fine-tuning, inference, and knowledge distillation, ensuring that they not only perform well but also scale efficiently across a variety of hardware environments. Whether you’re focused on co-training models, fine-tuning for specific tasks, or deploying large-scale systems, applying these insights will help you achieve better performance and resource efficiency. Feel free to explore these methods in your own workflows, and reach out if you’d like to dive deeper into specific techniques. Happy optimizing! -- -- 1 Written by Tushar Madaan Responses (1) Help Status About Careers Press Blog Privacy Terms Text to speech Teams
Sign up Sign in Sign up Sign in CUDA Memory Management & Use cases Dung Le Follow Distributed Knowledge -- 1 Listen Share In my previous article, Towards Microarchitectural Design of Nvidia GPUs, I have dissected in-depth a sample GPU architectural design, as well as its memory hierarchy. In order for GPGPU applications to achieve the true performance that GPU promises, a throughout understanding and correct use of each memory space in the hierarchy is a must. In this article, let’s discuss on how to optimally utilize different types of GPU memories and cycle through some notable use cases for each memory type. The content of this article will be organized as follows: I. Coalesced & un-coalesced global memory access The global memory of a CUDA device is implemented with DRAMs. Each time a DRAM location is accessed, a range of consecutive locations that includes the requested location is actually accessed. Many sensors are provided in each DRAM chip and they work in parallel. Each senses the content of a bit within these consecutive locations. Once detected by the sensors, the data from all these consecutive locations can be transferred at very high-speed to the processor. These consecutive locations accessed and delivered are referred to as DRAM bursts. Recognizing the burst mechanism, current CUDA devices employ a technique that allows the programmers to achieve high global memory access efficiency by organizing memory access of threads into favorable patterns. This technique takes advantage of the fact that threads in a warp execute the same instruction at any given point in time (SIMT). When all threads in a warp execute a load instruction, the hardware detects whether they access consecutive global memory locations. If they do, the hardware combines, or coalesces, all these accesses into a consolidated access to consecutive DRAM locations. Such coalesced access allows the DRAMs to deliver data as a burst. To optimally utilize global memory, it is important to improve coalescing. There are multiple strategies that can be used. One such strategy is to improve the data access pattern. Let’s take a simple kernel to perform matrix multiplication with a scalar value as an example. In CUDA, multidimensional-array elements are placed into the linearly addressed memory space according to the row-major convention as following: Now, let’s consider two ways of writing kernel with two different access patterns: For the above access pattern, as threads’ indexes on x-dimension increase within a warp, they access consecutive locations on the matrix array m. Therefore, when consecutive threads issue load instructions to global memory to access m, these accesses form a coalesced access pattern. On the other hand, notice that threads within a warp are issuing load instructions in column-major order the above kernel, in which two consecutive accesses will be distant dimx locations away from each other. This results in an un-coalesced access pattern, and henceforth reduce kernel global memory bandwidth. Another strategy is to change the data layout to improve locality. Depends on how the program accesses data, SOA (structure of arrays) and AOS (array of structs) data type can be used for consecutive threads to issue load/store instructions to consecutive memory locations. For example, computer vision algorithms that need to apply filters onto an image requires the image to be stored onto a data structure. 
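The two access patterns for the scalar-multiplication kernel discussed above were shown as figures in the original post; as a rough sketch, assuming a row-major dimy x dimx matrix m and illustrative kernel names, they might look like this:

// Coalesced: consecutive threads along x touch consecutive addresses of m.
__global__ void scale_coalesced(float* m, float scale, int dimx, int dimy) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < dimy && col < dimx)
        m[row * dimx + col] *= scale;   // neighbors in a warp hit neighboring addresses
}

// Un-coalesced: consecutive threads along x are mapped to the same column,
// so neighboring threads access addresses that are dimx elements apart.
__global__ void scale_uncoalesced(float* m, float scale, int dimx, int dimy) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    int col = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < dimy && col < dimx)
        m[row * dimx + col] *= scale;   // neighbors in a warp stride by dimx
}

In the first kernel, threads of a warp (which differ in threadIdx.x) touch adjacent elements of a row, so their loads and stores coalesce into a few DRAM bursts; in the second, they stride by dimx and each access lands in a different burst.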
With SOA or AOS data structure, the image can be stored as follows: Due to the fact that memory addresses of class data members are placed linearly in memory, in case of algorithms whose kernel threads access r, g, b value of each image pixel at the same time, storing the image as SOA type will allow coalesced memory access. On the other hand, in the scenario that kernel threads need to access the values in image’s channels at the same time, AOS type will allow coalesced memory access. II. Tiling matrix transpose using shared memory In contrast to global memory which resides in DRAM, shared memory is a type of on-chip memory. This allows shared memory to have a significantly low memory access latency for just several instruction cycles per instruction. One can relate shared memory usage to a CPU cache; however, while CPU cache cannot be explicitly managed, shared memory can. Shared memory can be declared by the programmer by using keyword __shared__, with size hardcoded in the kernel code or passed on explicitly to the kernel call using extern keyword. With low latency accessing, shared memory is utilized heavily in programs in which memory bound is a problem. One key usage of shared memory comes from the fact that threads within a block can share memory access. Therefore, different threads can use shared variables to hold the data that was reused many times during the computation phase. In order to maximize memory bandwidth, threads can load this data from global memory in a coalesced manner and store it into declared shared memory variables. Threads then can load or write the data in any order due to the fact that shared memory is not affected by un-coalesced read/write order (corner turning technique). For example, in the problem of optimizing matrix transpose performance, this usage of shared memory comes in handily: In order to have coalesced reads/ writes into global memory, threads load (line 13) and write (line 21) a tile of input data to consecutive memory locations as threadIdx.x varies. At line 13, each block of threads loads a tile of input data having block-sized float numbers from global matrix t into shared memory array. It then makes sure all threads finish loading their element by using __syncthreads, a barrier that synchronizes threads within each block. Each thread then writes into consecutive locations in the global memory as variable to varies in the same amount as threadIdx.x. III. Optimizing convolution with constant memory, shared memory, and memory padding The convolution operation (or filtering) is another common operation in many applications, especially in image and signal processing. It consists of source data and a filter (known as mask). By applying the filter against matrix data, we can obtain the convoluted matrix. Let’s divide our use cases into using convolution operation on 1D array and 2D matrix. 1. 1D Convolution a. Naive convolution kernel Figure 3 is an example of a convolution operation in a 1D array. For elements that are near the array bounds such as c1 and c6, we substitute 0s to make up for the missing cells (or ghost cells) in the sum that constitute them. Hence, c1, for example, equals to 0 * w1 + i1 * w2 + i2 * w3. A naive kernel for 1D convolution may be written like this: We can make two observations about the kernel in the naive implementation. First, there will be a control flow divergence. 
The threads that calculate the output elements near the left end or the right end of the output array have to handle the ghost cells while the other threads do not. Second and a more serious problem is memory bandwidth. For every two calculations in line 7, two global memory reads are issued, which makes the ratio of floating-point arithmetic calculation to global memory access is only about 1.0 in the kernel. Therefore, this simple kernel can only be expected to run at a small fraction of peak performance. b. Using constant memory for filter Let’s notice the filter used in convolution operation. First, it’s unchanged throughout the convolution computation. Second, the filter size tends to be small in most cases. Hence, these properties make the filter array a great candidate for constant memory, which is defined with keyword __constant__ in CUDA. Like global memory variables, constant memory variables are also located in DRAM. However, since CUDA knows that constant memory variables are not modified during kernel execution, GPU hardware caches the constant memory data aggressively in L1 or constant cache. In case of a cache hit, data will be served from the cache instead of going down to global memory. Furthermore, the design of caches is typically optimized to broadcast a value to a large number of threads. As a result, when all warp access the same constant memory variable, as in the case of convolution masks, the caches can provide a tremendous amount of bandwidth to satisfy the data needs of threads. Also, as previously stated, the size of the masks is typically small so we can assume that all mask elements are effectively always accessed from caches. Now, let’s turn to the updated kernel code using constant memory for the filter array: c. Optimize memory bandwidth with shared memory To further optimize memory bandwidth, notice that each input element is used for more than one output element, and, henceforth, loaded multiple times from global memory. Therefore, to reduce global memory load and improve memory bandwidth, we will pre-load all elements need for the calculation into shared memory. For example, in figure 3, assuming our thread block calculates three output elements c1, c2, c3, the block needs to load all input elements used in c1, c2, c3 calculations, which are i1, i2, i3, and i4. To generalize this idea, let’s look at a sample code snippet: Threads within a block will collaboratively load (tile_size+ mask_width / 2) input elements used for convolution in tile_size output elements into shared memory. The last threads load first (mask_width / 2) elements, the first threads load the last (mask_width / 2), and the other threads load the rest. This helps reduce the number of global memory loads since each element needed in the input for the convolution is loaded once. To be specific, all threads only need to load (array_width + mask_width) input elements from global memory. On the other hand, in naive kernel implementation, for each idx of a thread that maps to a valid idx index in the output array, each thread needs to load mask_width input elements. In total, all threads in naive implementation issue (mask_width * array_width) global memory transactions. 2. 2D Convolution a. Memory Padding Now, let’s move on to 2D matrix convolution implementation. 2D matrix convolution is heavily applied in the realm of computer vision since real-world images are represented as 2D matrices and come in all sizes and shapes. 
These matrices are generally stored in the row-major layout when reading from files to memory. If the width of the image in terms of bytes is not a multiple of the DRAM burst size, the starting point of row 1 and beyond can be misaligned from the DRAm burst boundaries. Such misalignment can result in poor utilization of DRAM bandwidth as there might be the case where a row is spanned between two DRAM bursts, which requires two DRAM bursts to deliver the whole row data instead of one. Therefore, in order to utilize the strength of DRAM burst to deliver data fastly, we will pad additional bytes into each row of the image matrices so that each row ends at the DRAM burst boundaries: After padding, each row of the image matrix will have a length of pitch units. Therefore, when the image is linearized in row-major order in DRAM memory, we need to use pitch instead of width to access the element at (row, col) coordinate: row * pitch + column. However, when we iterate through a row, width is still needed to be used as the loop bound to calculate the elements that actually exist in the original matrix. That brings us to the kernel code for 2D convolution using memory padding: Each thread of the kernel first calculates the y and x, or col_o,and row_o, indices of its output element (line 5 and 6). It then calculates the y and x indices of the input element it needs to load into the shared memory by substracting (mask_width / 2) from row_o and col_o (line 8 and 9). Applying the same idea of dealing with halo cells in 1D convolution, each thread block loads all halo cells and internal cells needed by all threads in the block for the convolution operation (line 13). Remember that when loading input elements from the padded image, we need to use “pitch” as the row length instead of width (line 14), but when doing convolution operations, we will use “width” as the loop bound (line 26). So how memory-efficient this kernel is compared to the naive 2D kernel? In a naive kernel, each thread in a thread block will perform (mask_width)² accesses to the image array in global memory. Hence, each thread block issues a total (mask_width)² * (tile_size)² accesses to global memory. In the tiled kernel, all threads in a thread block collectively load one input tile. Therefore, the total number of access by a thread block to the image array is (tile_size + mask_width-1)². Hence, memory access speedup is (mask_width)² * (tile_size)² / (tile_size + mask_width-1)². The larger the ratio, the more effective the tiled kernel in reducing the number of memory accesses as compared to naive kernel. IV. Takeaways A throughout understanding and correct use of different memory space in the GPU device will deliver a large speedup for your applications. For data that is unchanged throughout the kernel execution, consider using constant memory to utilize hardware caching for optimized load performance. For data that is accessed repeatedly during the kernel execution, consider using shared memory to reduce global memory loads, and utilize the corner turning technique. When you have to go down to global memory, remember that a coalesced memory read/ write access pattern leads to a much more optimized memory bandwidth than an un-coalesced read/write pattern, as the former can improve its performance based on DRAM bursts mechanism. V. What’s next? For the next article, let’s take a look at another two types of CUDA memory, texture memory and unified memory, and ways that they can help to boost GPGPU applications. 
Stay tuned for the next one!
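As a concrete companion to the tiled 1D convolution with a constant-memory mask described in section III above (the original listings were images): a minimal sketch with illustrative names (d_mask, TILE_SIZE, conv1d_tiled), assuming the mask is small enough for constant memory and the kernel is launched with TILE_SIZE threads per block.

#define MASK_WIDTH 5
#define TILE_SIZE 256

// Read-only filter in constant memory: broadcast from the constant cache when
// all threads of a warp read the same element.
__constant__ float d_mask[MASK_WIDTH];

__global__ void conv1d_tiled(const float* in, float* out, int width) {
    __shared__ float tile[TILE_SIZE + MASK_WIDTH - 1];
    const int halo = MASK_WIDTH / 2;
    int out_idx = blockIdx.x * TILE_SIZE + threadIdx.x;   // element this thread produces
    int tile_start = blockIdx.x * TILE_SIZE - halo;       // first input element the block needs

    // Cooperatively load TILE_SIZE + MASK_WIDTH - 1 inputs; ghost cells become 0.
    for (int i = threadIdx.x; i < TILE_SIZE + MASK_WIDTH - 1; i += blockDim.x) {
        int src = tile_start + i;
        tile[i] = (src >= 0 && src < width) ? in[src] : 0.0f;
    }
    __syncthreads();

    if (out_idx < width) {
        float acc = 0.0f;
        for (int j = 0; j < MASK_WIDTH; ++j)
            acc += tile[threadIdx.x + j] * d_mask[j];
        out[out_idx] = acc;
    }
}

On the host, the mask would be uploaded once with cudaMemcpyToSymbol(d_mask, h_mask, MASK_WIDTH * sizeof(float)), so each input element is read from global memory only once per block while the mask is served from the constant cache.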
Sign up Sign in Sign up Sign in FlashAttention: Understanding GPU Architecture-Part 1 Sachin Kalsi Follow -- Listen Share Introduction In this three-part blog series, we will delve into the intricate world of FlashAttention, a technology that has been making waves in the field of large language models (LLMs). FlashAttention is a novel approach to optimizing the attention mechanism, making it faster and more memory-efficient. Before we explore FlashAttention in detail, it’s crucial to have a solid grasp of the underlying hardware, specifically GPU (Graphics Processing Unit) architecture, as FlashAttention leverages the GPU for efficient execution. Let’s break down the essential concepts: How Data Moves Through a System To understand how data moves within a system, let’s begin with a simple breakdown. Data typically starts on the hard disk (HDD drive). However, for processing, we need this data in the Random Access Memory (RAM), also called main memory. Depending on the system, there may be multiple layers of memory, each with varying speeds and sizes. It is indeed essential to minimize data movement between these storage levels to optimize computation. This is because the time it takes to access data increases as you move further from the CPU, and data transfer can be a bottleneck in computing tasks. Understanding GPU Memory Hierarchy In the case of NVIDIA A100 80GB GPUs, for instance, HBM may be implemented using 6 vertical stacks, where each stack, except one, contains 16GB of memory, summing up to the total GPU memory. Remember, to fully utilize GPU processing power, you need to ensure that data can be moved efficiently between GPU memory, cache, and processing units. FlashAttention and GPU Memory One of the challenges in training large language models is efficient memory usage. As the demand for GPU memory increases, optimizing the memory hierarchy becomes crucial. FlashAttention is designed to maximize the utilization of GPU memory and leverage its high-speed components, such as tensor cores. The process of optimizing involves increasing the data throughput between different levels of memory, such as from GPU memory to L2 cache, and ensuring that data fits into on-chip memory like L1 cache or SRAM. This optimization allows FlashAttention to unlock the full potential of the GPU’s parallel processing power. By fusing kernels, FlashAttention can achieve faster execution and improved memory efficiency, making it a crucial optimization technique. In Part 2 of this series, we will delve deeper into the FlashAttention algorithm and how it leverages GPU architecture to enhance the performance of language models. References -- -- Written by Sachin Kalsi Problem Solver | NLP No responses yet Help Status About Careers Press Blog Privacy Terms Text to speech Teams
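To make the kernel-fusion point above concrete, here is a generic, hedged illustration (it is not FlashAttention itself, and the kernel names are made up): two separate elementwise kernels force the intermediate result through global memory, while the fused version keeps it in a register.

__global__ void scale_kernel(const float* x, float* tmp, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tmp[i] = a * x[i];                 // round-trip 1 through global memory
}

__global__ void bias_relu_kernel(const float* tmp, float* y, float b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = fmaxf(tmp[i] + b, 0.0f);    // round-trip 2
}

// Fused: one read of x, one write of y, no intermediate buffer in DRAM at all.
__global__ void scale_bias_relu_fused(const float* x, float* y, float a, float b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float t = a * x[i];            // stays in a register instead of global memory
        y[i] = fmaxf(t + b, 0.0f);
    }
}

The same principle, applied to the much larger attention computation, is what lets fused kernels avoid materializing big intermediate matrices in GPU memory.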
Sign up Sign in Sign up Sign in GPU Architecture and Programming — An Introduction Najeeb Khan Follow -- Listen Share Explore Kernel Grids, Blocks, Warps, and Threads to Accelerate Your Code A general purpose graphics processing unit (GPU) provide parallel cores designed to process data simultaneously. Similar to the single instruction multiple data (SIMD) in CPUs, GPUs use several threads to execute instructions in parallel, the paradigm is known as single-instruction multiple threads (SIMT). Modern GPUs can theoretically run one instruction across thousands of data points in a single cycle. However, practical applications often require data exchange between operations, which can consume hundreds of clock cycles. To address this, GPUs use a hierarchical structure to manage communication latency. Architecture In this section, we explore the architecture of a typical GPU. While each GPU generation introduces unique optimizations, we focus on the core concepts that are common across most GPUs. Warps At the core level, each thread operates on individual scalar values with private registers. While running thousands of threads simultaneously is impractical, individual threads are not efficient on their own either. Instead, threads are organized into small groups called warps or wavefronts, typically consisting of 32 threads. Each warp executes a single instruction across 32 data points. For example, in matrix multiplication, a warp might process a row and column from two matrices, performing multiplication and accumulation to generate results as shown in the figure below. Thread Blocks When operations exceed the warp size of 32 threads, GPUs use tiling to manage larger dimensions. This involves dividing the input into chunks or tiles that fit the warp size, processing these chunks, and then combining the results from all warps. To accumulate partial results, a placeholder is needed, which is where thread blocks come in. A thread block groups multiple warps, allowing them to share memory and synchronize their execution, as illustrated in the figure below. Grid The hierarchy from warps to blocks is repeated one more level: if the matrix is larger than what a single thread block can handle, we use a grid of thread blocks that share global memory. The grid enables the GPU to process large datasets by distributing the workload across multiple thread blocks. All GPU programs, known as kernels, are executed within this grid structure. When you launch a kernel, you specify both the grid size (the number of thread blocks) and the block size (the number of threads per block). This hierarchical approach ensures efficient computation and data management, allowing the GPU to handle extensive and complex tasks effectively. Memory Hierarchy Following the structure of the computations, memory is organized into a hierarchy starting from the small and fast registers with ultra low latency and a few kilobytes in size. Registers are private to threads. Next, warps within a thread block share state using shared memory comprising several hundred kilobytes. Finally, global memory is accessible across the device and provides large capacity on the order of tens of gigabytes with high throughput approaching a terabyte per second. Global memory has higher latency and thus caching is used to reduce latency. The figure below shows the relative scope of each memory type. 
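Before moving on to programming, a small sketch (with illustrative names) of where each level of this hierarchy appears in kernel code:

// Assumes blockDim.x <= 256 so the shared tile is large enough.
__global__ void memory_spaces_demo(const float* in, float* out, int n) {
    // Registers: ordinary local variables, private to each thread.
    float acc = 0.0f;

    // Shared memory: one copy per thread block, visible to all of its warps.
    __shared__ float tile[256];

    int gid = blockIdx.x * blockDim.x + threadIdx.x;

    // Global memory: device-wide, large, high latency, so stage it into shared memory.
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();

    // Work out of the fast on-chip copy.
    acc = tile[threadIdx.x] * 2.0f;

    if (gid < n) out[gid] = acc;       // write the result back to global memory
}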
Programming Programming GPUs are supported by dedicated software libraries in C/C++ depending on the make of the GPU: NVIDIA GPUs can be programmed using Compute Unified Device Architecture (CUDA) interface whereas AMD GPUs offer a similar SDK known as HIP. In this section we will briefly show how to run a hello world program on multiple threads using CUDA and how to multiply two matrices. Hello World! The entry point of a GPU program is called a kernel. The global ID of a thread can be calculated using three compiler intrinsics — blockIdx, blockDim, and threadIdx, representing the id of the block, the total number of threads in a block, and the thread id within the thread block, respectively. A kernel is defined by the __global__ qualifier as shown in the listing below. To launch a kernel the <<<numBlocks, blockSize>>> is used. The kernel is executed asynchronously, i.e., the host code will continue to run right after making the kernel call. To sync memory between the host and the GPU device the cudaDeviceSynchronize function is called, which blocks the execution on the host until the kernel finishes its work. #include <cuda_runtime.h>#include <iostream>__global__ void helloFromGPU() { printf("Hello World from Thread %d, Block %d, BlockDim %d\n", threadIdx.x, blockIdx.x, blockDim.x);}int main() { // Launch the kernel with 2 blocks of 4 threads each helloFromGPU<<<2, 4>>>(); cudaDeviceSynchronize(); // Wait for the GPU to finish return 0;} The above code can be compiled using the NVIDIA compiler and run as follows: > nvcc hello_gpu.cu -o hello_gpu> ./hello_gpuHello World from Thread 0, Block 0, BlockDim 4Hello World from Thread 1, Block 0, BlockDim 4Hello World from Thread 2, Block 0, BlockDim 4Hello World from Thread 3, Block 0, BlockDim 4Hello World from Thread 0, Block 1, BlockDim 4Hello World from Thread 1, Block 1, BlockDim 4Hello World from Thread 2, Block 1, BlockDim 4Hello World from Thread 3, Block 1, BlockDim 4 Matrix Multiplication Now that we know the basic structure of a CUDA program, let’s look at a more involved example of matrix multiplication. The CUDA kernel for matrix multiplication is given in the listing below. CUDA provides block IDs and thread IDs in three dimensions. In our case, since we’re dealing with matrices, we use only two dimensions: x and y for the row and column indices. The kernel calculates the global row and column indices of each thread by combining the block index and thread index. Each thread then performs the dot product of the corresponding row from matrix A and the column from matrix B, storing the result in matrix C. This approach ensures that each element of the output matrix is computed in parallel, leveraging the GPU’s ability to handle many threads simultaneously. __global__ void matrixMul(const float* A, const float* B, float* C, int n){ int row = blockIdx.y * blockDim.y + threadIdx.y; int col = blockIdx.x * blockDim.x + threadIdx.x; if (row < n && col < n) { float value = 0.0f; for (int k = 0; k < n; ++k) { value += A[row * n + k] * B[k * n + col]; } C[row * n + col] = value; }} While this example provides a straightforward implementation of matrix multiplication, it is not optimized for performance. In real-world applications, achieving efficient computation requires careful consideration of memory access patterns and cache utilization. Techniques such as tiling and shared memory usage can significantly enhance performance by reducing memory access latency and improving data locality. 
Proper cache planning and optimization strategies are essential for scaling these algorithms to handle larger datasets and more complex computations efficiently.
Sign up Sign in Sign up Sign in Mastering CUDA Matrix Multiplication: An Introduction to Shared Memory, Tile Memory Coalescing, and Bank Conflicts Dhanush Follow -- Listen Share Introduction In the world of GPU programming with CUDA, optimizing performance is key. One of the most powerful techniques to achieve this is using shared memory. This blog will walk you through a CUDA program that performs matrix multiplication using shared memory, with a particular focus on understanding tile memory coalescing and bank conflicts. By the end of this post, you’ll have a solid grasp of how shared memory can significantly speed up your computations and how to manage potential pitfalls like bank conflicts. Understanding the Basics: Shared Memory and Tiling Shared memory is a special type of memory in CUDA that is much faster than global memory, but with a smaller size, typically a few kilobytes per block. This memory is shared among all threads in a block, making it ideal for optimizing access patterns that involve frequent re-use of data, such as in matrix multiplication. In matrix multiplication, tiling is a technique where the matrix is divided into smaller submatrices (tiles) that fit into shared memory. These tiles are then multiplied together, which reduces the number of global memory accesses, improving performance. Let’s dive into the MatrixMultiShared CUDA kernel to see how this works. The CUDA Kernel: MatrixMultiShared Below is the CUDA kernel that performs matrix multiplication using shared memory: __global__ void MatrixMultiShared(float* A, float* B, float* C, int N){ __shared__ float tile_A[TILE_SIZE][TILE_SIZE]; __shared__ float tile_B[TILE_SIZE][TILE_SIZE];int row = threadIdx.y + blockIdx.y * TILE_SIZE; int col = threadIdx.x + blockIdx.x * TILE_SIZE; float val = 0.0f; for(int i = 0; i < (N + TILE_SIZE -1)/ TILE_SIZE; i++){ if(row < N && (i * TILE_SIZE + threadIdx.x) < N){ tile_A[threadIdx.y][threadIdx.x] = A[row * N + i * TILE_SIZE + threadIdx.x]; } else { tile_A[threadIdx.y][threadIdx.x] = 0.0f; } if(col < N && (i * TILE_SIZE + threadIdx.y) < N){ tile_B[threadIdx.y][threadIdx.x] = B[(i * TILE_SIZE + threadIdx.y) * N + col]; } else { tile_B[threadIdx.y][threadIdx.x] = 0.0f; } __syncthreads(); for(int j = 0; j < TILE_SIZE; j++){ val += tile_A[threadIdx.y][j] * tile_B[j][threadIdx.x]; } __syncthreads(); } if(row < N && col < N){ C[row * N + col] = val; }} Breaking Down the Code __shared__ float tile_A[TILE_SIZE][TILE_SIZE]; __shared__ float tile_B[TILE_SIZE][TILE_SIZE]; int row = threadIdx.y + blockIdx.y * TILE_SIZE;int col = threadIdx.x + blockIdx.x * TILE_SIZE; if(row < N && (i * TILE_SIZE + threadIdx.x) < N){ tile_A[threadIdx.y][threadIdx.x] = A[row * N + i * TILE_SIZE + threadIdx.x]; } else { tile_A[threadIdx.y][threadIdx.x] = 0.0f; } __syncthreads(); for(int j = 0; j < TILE_SIZE; j++){ val += tile_A[threadIdx.y][j] * tile_B[j][threadIdx.x]; } if(row < N && col < N) C[row * N + col] = val; After all tiles have been processed, the result is written to the output matrix C. Tile Memory Coalescing and Bank Conflicts Memory Coalescing: Memory coalescing is crucial for efficient global memory access. In the above kernel, the tiles are loaded into shared memory in a coalesced manner, meaning that consecutive threads access consecutive memory locations, minimizing memory transactions. Bank Conflicts: Shared memory in CUDA is divided into banks, and when multiple threads access the same bank simultaneously, a bank conflict occurs, leading to serialized access. 
The kernel is designed to minimize bank conflicts by ensuring that threads access different memory banks during the computation, particularly in the inner loop where matrix elements are multiplied.

Conclusion
By leveraging shared memory and understanding the concepts of memory coalescing and bank conflicts, you can write highly optimized CUDA programs. The MatrixMultiShared kernel demonstrated how tiling can reduce global memory accesses and how careful memory access patterns can avoid bank conflicts. As you continue to explore CUDA, mastering these techniques will be key to achieving maximum performance on your GPU-accelerated applications. For the full code, you can check out my GitHub repository here. Feel free to experiment with different tile sizes and memory configurations to see how they impact performance!
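One widely used remedy when a shared-memory access pattern does cause bank conflicts is to pad each row of the tile by one element. The sketch below applies the trick to a transpose-style tile (it is not the article's MatrixMultiShared kernel; names are illustrative and the block is assumed to be TILE_SIZE x TILE_SIZE threads):

#define TILE_SIZE 32

__global__ void transpose_padded(const float* in, float* out, int n) {
    // The +1 makes consecutive rows start in different banks, so reading a
    // column of the tile no longer has 32 threads hitting the same bank.
    __shared__ float tile[TILE_SIZE][TILE_SIZE + 1];

    int x = blockIdx.x * TILE_SIZE + threadIdx.x;
    int y = blockIdx.y * TILE_SIZE + threadIdx.y;
    if (x < n && y < n)
        tile[threadIdx.y][threadIdx.x] = in[y * n + x];
    __syncthreads();

    // Swap the block coordinates for the write so the global store stays
    // coalesced along x while the tile is read column-wise.
    int tx = blockIdx.y * TILE_SIZE + threadIdx.x;
    int ty = blockIdx.x * TILE_SIZE + threadIdx.y;
    if (tx < n && ty < n)
        out[ty * n + tx] = tile[threadIdx.x][threadIdx.y];
}

Without the padding, the column-wise read tile[threadIdx.x][threadIdx.y] would map an entire warp onto a single bank and serialize the accesses.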
Sign up Sign in Sign up Sign in Understanding Bottlenecks in Multi-GPU AI Training Dorha Szasz Follow -- Listen Share While multi-GPU and multi-node training will always face some overheads compared to single GPU training, ongoing advancements in hardware interconnects, memory architectures, distributed algorithms, and software-level optimizations are steadily chipping away at these bottlenecks to make large-scale AI/ML more efficient. See some examples at the end of the article. 1. The Challenge of Parallelism Limitations of interconnects like NVLink and PCIe For example: 2. Gradient Synchronization Overhead Gradient synchronization is a critical step in multi-GPU training to ensure model consistency. The most common approach involves aggregating gradients from all GPUs after computing them locally. While this ensures the model stays in sync, the time spent in gradient exchange and aggregation often outweighs the benefits of parallel computation. 3. Model Size and Memory Constraints The size of the AI model plays a pivotal role in determining training efficiency. Large models, such as transformers, require substantial memory for tensors, key-value caches, and gradients. This leads to: NVIDIA’s DGX servers, which use chips like the GH200 Grace Hopper Superchip, address these issues by eliminating reliance on PCIe technology and offering a faster, more integrated memory bus connecting CPU and GPU memory. 4. Data Communication Across Nodes When training across multiple nodes, data communication can become the dominant bottleneck. While intra-node communication benefits from fast interconnects like NVLink or NVSwitch, inter-node communication typically relies on Ethernet or Infiniband: 5. Optimizing for Federated Learning In distributed training scenarios like federated learning, raw data never leaves the local nodes. Only model updates in the form of gradients or weights are shared, greatly reducing the need for frequent data exchange. This approach mitigates many communication bottlenecks but requires robust optimization algorithms to ensure the global model converges without access to centralized datasets. Techniques like federated averaging help aggregate model updates efficiently. 6. Kernel Fusion for Inference Efficiency For inference workloads, fusing multiple CUDA kernels together can dramatically improve efficiency. By keeping intermediate computations in local cache/registers rather than writing them back to memory, the need for expensive data movement between the memory and compute units is minimized. This kernel fusion technique serves latency-sensitive downstream consumers, such as real-time applications, exceptionally well. But it requires careful tuning of the model architecture and operations to identify opportunities for fusion. Evidence and Solutions To address these bottlenecks, several technologies and methodologies have been developed: -- -- Written by Dorha Szasz Using Deep Learning and Computer Vision to analyze data in multiple domain such as: medical imaging, economics, archeology No responses yet Help Status About Careers Press Blog Privacy Terms Text to speech Teams
Ramble in CUDA optimization

Lawliet

Motivation

I have been writing my own autoencoder for research purposes for several months, based on Simoncelli's 2016 paper. At the beginning, I wanted to use one of the popular deep learning frameworks (e.g. TensorFlow, Caffe2 or MXNet) for my experiments. However, after several weeks of investigating these frameworks, I ran into a very painful problem — extensibility. I'm not saying these frameworks are badly designed, but not letting users develop third-party operators as easily as writing a plug-in is like giving me a function without any parameters. The only way to change the behavior of the function is then to modify the source code, which is a huge engineering effort because of the poor organization of the documentation. (It seems to be a common disease of open-source software.) Therefore, designing a new framework seemed to be the only solution, because the uncommon operator GDN is not included in any of these frameworks.

GDN

This operator is the core non-linear function in this theory. The expression is as follows (the formula is not important; if you don't like these god damn symbols, you can simply skip this section): the superscripts (k) and (k+1) are the layer numbers, w and u are the input and output, which are multi-channel images, and the subscript i is the channel number. β and γ are the parameters I want to train. Say we have N channels; then γ is an N by N matrix and β is an N by 1 vector. At first glance, this function looks very similar to Batch Normalization (BN) or Local Response Normalization (LRN), which are well supported by cuDNN and every deep learning framework. But trust me, don't let your eyes cheat you. It's very different. (Note that the big division is an elementwise division.)

The forward pass does not consume much computing power, while the backward pass eats most of the power of my GPU. Now let's look at the backward pass. There are 3 gradients I need to calculate: ∇β, ∇γ and ∇u. Oops, I know the feeling when people first look at this, because I also wanted to kill myself when I first saw this monster. But if I can draw a picture of all this, you will feel more comfortable.

First, we can easily notice that the input can be regarded as a vector of length m x n. Second, the (blabla…)^(-3/2) term appears everywhere in all these gradients. That means we can compute this term just once and cache it for later use. We call this "(blabla…)^(-1/2)" term the matrix D. Last, δ is the error propagated back to the previous layer. After some simplification, it's clearer, right? I know a little explanation is still needed: on the right-hand side of the equation, each rectangle is a vector stacked into the matrix as we mentioned above, and D is the denominator term in the GDN formula — remember the "(blabla…)^(-1/2)" we just mentioned?

Unlike some advanced algorithms, this computation is so intuitive to most people that we could easily write a CPU program to handle it. And with a little knowledge of CUDA, everyone can port their CPU code to the GPU. However, the speed will vary hugely depending on how you organize and launch the kernels.

1. Less than just naive Algo.

I call this method "less than just naive" because it is the first method I ever used. It almost eats all of my GPU memory even with small input images, and achieves the slowest performance.
Without exploiting any memory reuse, I just copied all these little rectangles both vertically and horizontally to get bigger matrices, as shown in the picture below, and launched a lot of kernels organized in 1-D, then summed them together. The only advantage of this algorithm is that you don't need to calculate an index in each CUDA thread, because the thread id corresponds uniquely to the memory index. So all you need to do is some multiplication, and then use cuBLAS to sum each little colored rectangle by a dot product with a ones vector (a vector filled with all ones). But as you can see, the rectangles are not as small as drawn here; each one is the size of an image. For each vector in this picture, the size will be N x N x imageSize x batchSize. Obviously, we waste (N-1) x N x imageSize x batchSize x 4 bytes, not to mention the time wasted accessing all this redundant global memory.

2. Naive Algo.

With the 1st algorithm, I could only train fewer than 4 images of size 128 x 128 in my network per iteration, and each iteration took almost 2 seconds. (My GPU is a GTX 1080.) This reality forced me to improve my algorithm; otherwise, I would have had to wait almost 2 months for my results. Because the number of threads I need to launch is definitely much larger than the number of CUDA cores in my GPU, no matter what method I use, the CUDA driver will serialize these tasks.

So I decided not to copy all this memory. Instead, I launch N x imageSize threads organized in 1-D, and do this N times (N is the total number of channels). The improvement is obvious: we no longer need to replicate the data tons of times, and global memory access on the GPU is very expensive. The memory access pattern is also easy: once you have the thread id, a single mod operation gives you the memory index (memory index = thread id % imageSize). However, because the threads are still organized in 1-D and we use a for-loop to launch all these kernels, we may not benefit from the GPU's smarter scheduling, although I had already tasted blood. With this little change, 2 months of training time shrank to almost 2 weeks.

3. Smarter organization Algo.

So far I still hadn't considered the power of shared memory, because for me, designing a good kernel pattern is usually boring and a headache. Obviously, a 1-D kernel pattern is the easiest code to write, but better performance deserves a more careful design. The algorithm in this section, to my surprise, achieves a 3x speedup over the second algorithm.

Back in Fig 1., one can see that the first rows of the first 3 right-hand-side matrices, δ0, w0 and D0, are the same. Therefore, we can calculate one row of γ in one block: for each block we launch imageSize threads, and each thread loops over all channels. So from Fig 5., it's very intuitive to put δ0, w0 and D0 in shared memory, and for thread i, it goes from 0 to N-1, reading one pixel of each of the N channels to multiply with δ0, w0 and D0 in shared memory.
The pseudocode is as follows:

blockId = blockIdx.x;
threadId = threadIdx.x;
shareDelta <- delta[blockId];
shareW <- W[blockId];
shareD <- D[blockId];
__syncthreads();
for (i = 0; i < N; i++) {
    result[threadId + i*imgSize] = shareDelta[threadId] * shareW[threadId]
                                 * shareD[threadId] * W[threadId + i*imgSize];
}

I chose row-major computation here instead of the column-major computation of Algo 2 because, to compute one row in one block, we can share the 3 vectors δ0, w0 and D0. If we computed one column as in Algo 2, we could only share 1 vector, w0. (Again, see Fig 1.)

In this code snippet there is no if … else … block. This is very important in parallel computing: all threads run in parallel, and the ideal situation is that they all finish their jobs at the same time. If there is an if … else … block, the branches make the threads do different work, so they finish at different times, and the compute time is then determined by the slowest thread(s).

No index computation is also an advantage. With the 1-D pattern we had to use the thread id to compute a memory index, but here there is no need to convert the blockId and threadId into a 1-D memory index to access the data.

Last, because my data is stored in column-major order, all the elements of a vector like δ0 are stored contiguously, so we benefit from the global memory coalescing mechanism. Coalescing is another important concept in CUDA. On the hardware side, threads are executed in groups of 32 called warps. When one thread accesses data, say a1 in the picture above, the memory bus does not transfer only a1 but a whole aligned segment (a1~a32) into cache, which accelerates the accesses of the other threads in the warp. Therefore, when I read global data into shared memory, each aligned segment is read from global memory only once, and the rest is served from cache, which is far faster — thanks to spatial and temporal locality.

4. A little more improvement

Today I suddenly noticed that I don't actually need any shared memory here; I can cache the values in per-thread constants instead. For the vectors δ0, w0 and D0, each thread in a block only needs one element of each, so before the for-loop we can load that element into a const local variable, which the compiler keeps in a register. Another bonus is that because each thread only accesses its own element, no thread synchronization is needed. The code is as follows:

blockId = blockIdx.x;
threadId = threadIdx.x;
const float constDelta = delta[blockId * imgSize + threadId];
const float constW = W[blockId * imgSize + threadId];
const float constD = D[blockId * imgSize + threadId];
for (i = 0; i < N; i++) {
    result[threadId + i*imgSize] = constDelta * constW * constD * W[threadId + i*imgSize];
}

From the code above, we can see that constDelta, constW and constD are reused N times from registers, whose bandwidth is even higher than that of shared memory.

Reduce Operation

None of the algorithms I have talked about so far is complete, because what I get from them is actually the raw γ shown below: I still need to accumulate each vector on the left-hand side into a single element. The first choice is the cuBLAS API cublasSsbmv, which performs a (banded) matrix-vector multiplication. We can regard the left-hand-side vectors as a matrix, multiply it with an all-ones vector to get one row of the gradient of γ, and repeat this N times to get the final result. But I noticed there is another API, cublasSgemmBatched, which can do batched matrix-vector multiplications.
Then I did an experiment to test which one is faster: a for-loop of N matrix-vector multiplications vs. one batched call. The result shows that the for-loop is much faster. I don't know the reason, however; maybe it's because my N here is too small (N = 256). I will not show how to compute ∇β and ∇u, because they're similar to ∇γ.

I know there must be further optimizations or better designs than mine. CUDA optimization is usually difficult for people who don't deeply understand the organization of the GPU. Programmers who are familiar with the CPU always benefit from a modern OS and a powerful compiler. The GPU, however, is much more different and sophisticated than the CPU when it comes to writing efficient code, although it's far more convenient than doing computation with graphics shaders, as people did previously. The ecosystem still needs several years to mature.
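For reference, the loop-of-matrix-vector-products reduction described above could look roughly like the sketch below, which multiplies each N x imageSize block of the raw γ buffer with an all-ones vector using cublasSgemv. The variable names and the memory layout (column-major, one block per output row) are assumptions for illustration, not the author's exact code.

#include <cublas_v2.h>
#include <cuda_runtime.h>

// rawGamma:  N blocks, each an (imageSize x N) column-major matrix on the device
// ones:      device vector of imageSize ones
// gradGamma: device buffer of N*N floats, one row filled per loop iteration
void reduceGamma(cublasHandle_t handle, const float *rawGamma,
                 const float *ones, float *gradGamma, int N, int imageSize) {
    const float alpha = 1.0f, beta = 0.0f;
    for (int row = 0; row < N; ++row) {
        // y = A^T * ones sums each column of the (imageSize x N) block,
        // producing the N entries of one row of the gamma gradient.
        cublasSgemv(handle, CUBLAS_OP_T, imageSize, N, &alpha,
                    rawGamma + (size_t)row * imageSize * N, imageSize,
                    ones, 1, &beta, gradGamma + (size_t)row * N, 1);
    }
}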
Features of NVIDIA CUDA

Abhishek Nandy

As our startup Dynopii got selected for the NVIDIA Inception program, I started exploring the libraries. Here is a small walkthrough of the features of NVIDIA CUDA.

CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on GPUs (graphics processing units). CUDA enables developers to leverage the massive parallelism of GPUs for a wide range of applications, including machine learning, computer vision, and scientific computing. In recent years, NVIDIA has introduced several new features and improvements to the CUDA platform, including:

CUDA Graphs: CUDA Graphs is a programming model for CUDA that allows developers to express the dependencies between CUDA kernels in a more explicit and flexible manner. CUDA Graphs makes it easier to write complex, data-parallel algorithms that can take advantage of the full parallelism of the GPU.

CUDA Dynamic Parallelism: CUDA Dynamic Parallelism is a feature that allows CUDA kernels to launch other CUDA kernels dynamically, without returning control to the host. This enables a class of algorithms that can adapt to the characteristics of the data being processed, leading to more efficient use of GPU resources.

CUDA Streams: CUDA streams allow multiple CUDA operations to execute concurrently on the GPU. This can be used to overlap data transfers and kernel execution, resulting in better GPU utilization and improved performance.

CUDA Libraries: NVIDIA provides a set of libraries that are optimized for CUDA and provide common functionality such as linear algebra, fast Fourier transforms, and image processing. These libraries are designed to be easy to use and can significantly reduce the time and effort required to write CUDA code.

CUDA Toolkit: NVIDIA provides a comprehensive development environment for CUDA called the CUDA Toolkit. It includes a compiler, libraries, debugging and profiling tools, and documentation. The toolkit is designed to make it easy for developers to write, debug, and optimize CUDA code.

CUDA on ARM: NVIDIA supports CUDA on ARM-based processors, enabling developers to run CUDA code on a wider range of devices, including embedded systems.

CUDA on multi-GPU systems: CUDA allows a single application to use multiple GPUs. This can be used to scale the performance of an application beyond what is possible with a single GPU, and it can also add redundancy to a system for improved reliability.

CUDA in the cloud: Cloud providers offer GPU instances that can be used to run CUDA code. This allows developers to scale their applications up to large clusters of GPUs without having to purchase and maintain the hardware themselves.

CUDA Graphs

An example of a CUDA graph could be a series of image processing operations that need to be performed on a large set of images. The operations could include tasks such as image resizing, filtering, and feature extraction.
The CUDA graph for this example could be organized as follows: the first node in the graph is a CUDA kernel that resizes the images to a smaller size. The second node is a CUDA kernel that applies a filtering operation to the resized images. The third node is a CUDA kernel that extracts features from the filtered images. The final node is a CUDA kernel that performs image classification on the extracted features. Each of these nodes represents a CUDA kernel, and the edges between the nodes represent the dependencies between the kernels. For example, the image resizing kernel must complete before the filtering kernel can begin, and the filtering kernel must complete before the feature extraction kernel can begin.

When the CUDA graph is launched, the CUDA runtime will automatically manage the execution of the kernels and the transfer of data between them, ensuring that dependencies are satisfied and that the kernels are executed in an efficient order. The CUDA graph API allows developers to launch the graph and manage resources such as memory and streams. This is just an example; a CUDA graph can be more complex and include multiple branches, based on the task and the specific use case.

Complete code walkthrough: CUDA Graphs

A complete code walkthrough of a CUDA graph would be quite extensive, as it would involve the creation and execution of multiple CUDA kernels, as well as the management of memory and other resources. However, I can provide an overview of the main steps involved in creating and executing a CUDA graph using the CUDA Graphs API:

Include the CUDA runtime header cuda_runtime.h, which declares the graph API.

Create a CUDA graph using the cudaGraphCreate() function. This handle will be used to build the graph throughout its lifetime.

Create CUDA graph nodes for each of the kernels that will be included in the graph. This can be done using the cudaGraphAddKernelNode() function.

Create edges between the nodes to represent the dependencies between the kernels. This can be done either by passing dependency lists when adding nodes or with the cudaGraphAddDependencies() function.

Allocate memory and other resources required by the kernels, such as CUDA streams.

Instantiate the graph into an executable graph with cudaGraphInstantiate() and launch it with cudaGraphLaunch(). This will execute the kernels in the graph, respecting their dependencies.

Wait for the graph to complete execution by synchronizing the stream it was launched into, e.g. with cudaStreamSynchronize().

Release the resources allocated for the graph using cudaGraphExecDestroy() and cudaGraphDestroy().

Here is an example of a simple CUDA graph that performs a vector addition on two arrays:

#include <cuda_runtime.h>
#include <cstdlib>

// CUDA kernel for vector addition
__global__ void vecAdd(float* a, float* b, float* c, int N) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < N) c[i] = a[i] + b[i];
}

int main() {
    const int N = 1 << 20;

    // Allocate memory for input and output arrays
    float *a, *b, *c;
    cudaMalloc(&a, N * sizeof(float));
    cudaMalloc(&b, N * sizeof(float));
    cudaMalloc(&c, N * sizeof(float));
    float *hostC = (float*)malloc(N * sizeof(float));

    // Fill input arrays with data
    // ...
    // Create the CUDA graph
    cudaGraph_t graph;
    cudaGraphCreate(&graph, 0);

    // Describe the vector-addition kernel node
    int n = N;
    void* kernelArgs[] = { &a, &b, &c, &n };
    cudaKernelNodeParams kernelParams = {};
    kernelParams.func = (void*)vecAdd;
    kernelParams.gridDim = dim3((N + 255) / 256);
    kernelParams.blockDim = dim3(256);
    kernelParams.sharedMemBytes = 0;
    kernelParams.kernelParams = kernelArgs;
    kernelParams.extra = nullptr;

    // Create CUDA graph nodes for the kernel and the device-to-host copy
    cudaGraphNode_t kernelNode, memcpyNode;
    cudaGraphAddKernelNode(&kernelNode, graph, nullptr, 0, &kernelParams);
    cudaGraphAddMemcpyNode1D(&memcpyNode, graph, nullptr, 0,
                             hostC, c, N * sizeof(float), cudaMemcpyDeviceToHost);

    // Create an edge between the nodes: the copy depends on the kernel
    cudaGraphAddDependencies(graph, &kernelNode, &memcpyNode, 1);

    // Instantiate and launch the CUDA graph
    cudaGraphExec_t graphExec;
    cudaGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0);
    cudaGraphLaunch(graphExec, 0);

    // Wait for the graph to complete execution
    cudaStreamSynchronize(0);

    // Release resources
    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    free(hostC);
    return 0;
}

That's a small start with CUDA. Let's meet next time.
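An alternative way to build the same graph, which avoids filling in node parameter structs by hand, is stream capture: record ordinary stream work between cudaStreamBeginCapture and cudaStreamEndCapture and let the runtime derive the graph. A brief fragment as an illustration — it would sit inside main(), before the cleanup, reusing the vecAdd kernel and the buffers from the example above:

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cudaGraph_t capturedGraph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    vecAdd<<<(N + 255) / 256, 256, 0, stream>>>(a, b, c, N);
    cudaMemcpyAsync(hostC, c, N * sizeof(float), cudaMemcpyDeviceToHost, stream);
    cudaStreamEndCapture(stream, &capturedGraph);

    cudaGraphExec_t capturedExec;
    cudaGraphInstantiate(&capturedExec, capturedGraph, nullptr, nullptr, 0);
    cudaGraphLaunch(capturedExec, stream);
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(capturedExec);
    cudaGraphDestroy(capturedGraph);
    cudaStreamDestroy(stream);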
Basics of writing a fast CUDA Kernel

Sai Chaitanya E, Apra Labs Blog

For one of our clients, we have developed an AI-powered visual inspection system. A single device handles multiple camera feeds, each running at over 250 fps. Each frame goes through a series of detection and classification modules. The camera feed is also streamed on demand via RTSP after applying rescale, image effects (brightness, contrast, hue, saturation adjustment) and logo overlay. In short, each module has to be very, very fast. Below we show how we made the image effects module faster to meet performance goals. Let's first see these operations in action.

Image Effects

In RGB color space, brightness and contrast adjustment can be modeled as g(i,j) = α⋅f(i,j) + β. Hue and saturation adjustment are modeled similarly in HSV color space. The parameters α > 0 and β are constants.

NVIDIA Performance Primitives (NPP)

The input image was in YUV420 color space. To apply the image effects using the NPP library, we have to do the below 10 operations in sequence:

YUV420 color space (Input) → Convert to RGB color space → Contrast Adjustment → Brightness Adjustment → Convert to HSV color space → Split Channels → Hue Adjustment → Saturation Adjustment → Merge Channels → Convert to RGB color space → Convert to YUV420 color space (Output)

For contrast and saturation the input values can be float, but the NPP MulC function accepts only uint8_t values. The 10 NPP functions are called in sequence and synchronized once at the end, which internally launches 10 CUDA kernels. To put it simply, each pixel is read and written 10 times, which is not ideal. The total time taken, averaged over 10000 calls and profiled using nvprof, is 2057 microseconds.

So, the reasons for writing a custom CUDA kernel instead of using the NPP library are: 1. float multiplication, and 2. doing all the operations in a single pass.

Matching Performance

We wanted to spend a day to see if we could write a CUDA kernel that gets better performance. To learn the tricks of profiling and writing fast CUDA kernels, we started with a simple multiplication kernel, g(i,j) = α⋅f(i,j), and tried to match NPP performance.

We first started with a basic loop — each thread operates on 1 pixel. The image is 8 bit and the width is 32-byte aligned; the CUDA kernel block size is 32x32. The time taken was ~3x that of MulC. So we started following the performance guide. Performance optimization revolves around three basic strategies: maximizing parallel execution, optimizing memory usage to achieve maximum memory throughput, and optimizing instruction usage. The most useful for our scenario was the second. We changed each thread to operate on 4 pixels — this way each warp (32 threads) accesses 128 bytes (a sketch of this pattern is given at the end of this article). Now the time taken is comparable to MulC. Additionally, we use the CUDA math library wherever applicable. We compared the time taken in microseconds, averaged over 10000 calls and profiled using nvprof.

Conclusion

With the insight gained, we wrote a CUDA kernel which does all the operations in the same thread. The time taken reduced from 2057 microseconds to 447 microseconds, roughly 5x faster compared to calling the NPP functions in sequence. Also, the number of intermediate CUDA memory buffers required is 0, compared to 5 with NPP, because we do the computations in a single pass. So, to write a fast CUDA kernel, one of the most important steps is to review and optimize memory usage to achieve maximum memory throughput.
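For illustration, a minimal version of the 4-pixels-per-thread multiplication kernel described above might look like the following. It assumes the 8-bit image width is a multiple of 4 so that uchar4 loads are aligned; the names and launch configuration are illustrative, not Apra Labs' production code.

__global__ void mulC4(const uchar4* src, uchar4* dst, float alpha, int count4) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count4) return;

    // One 32-bit load brings in 4 pixels, so a full warp touches 128 bytes.
    uchar4 p = src[i];
    uchar4 q;
    q.x = (unsigned char)fminf(alpha * p.x, 255.0f);
    q.y = (unsigned char)fminf(alpha * p.y, 255.0f);
    q.z = (unsigned char)fminf(alpha * p.z, 255.0f);
    q.w = (unsigned char)fminf(alpha * p.w, 255.0f);
    dst[i] = q;
}

// launch: one thread per group of 4 pixels
// mulC4<<<(numPixels / 4 + 255) / 256, 256>>>(src, dst, alpha, numPixels / 4);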
Building fused CUDA kernels for RNNs

kevin zhang

Introduction

One day, you might decide to implement your own RNN. PyTorch offers a convenient way to do this using the torch.nn.functional module. For example, here is how we'd implement the forward pass of a GRU. I won't walk through the code for the forward pass, since this is not the purpose of the post. It suffices to say that the underlying mathematics of this implementation is the same as that of the PyTorch GRU module. So we would expect the performance of the two to be similar, right?

Not quite. Here are the average times it takes for the forward passes of each implementation to execute on a GPU: PyTorch library: 93 μs; Python: 332 μs.

Several factors contribute to the performance gap. First, the operations in our forward pass implementation don't know about each other, which means that PyTorch must execute the operations individually. Since each individual call to the kernel of an operation, which may involve the launch of a CUDA kernel, has a certain amount of overhead, this overhead can become significant across many function calls. Second, the Python interpreter itself can slow down our program.

On the other hand, the PyTorch library implements its GRU forward pass as a fused kernel. This means that multiple operations in the forward pass are placed into the same kernel, which results in fewer kernel calls. The library also implements the kernel with CUDA to take advantage of the parallelism GPUs provide. Therefore, a reliable way to optimize the speed of a custom RNN is to write a fused CUDA kernel for it. PyTorch has an official tutorial on how to do this, so I won't repeat the details. However, the tutorial assumes readers have an understanding of GPU programming and doesn't explain the underlying logic of the RNN CUDA kernels. I'd like to fill in this knowledge gap.

CUDA Optimization

We will continue using the forward pass of the GRU as our implementation example. First, note that we can rewrite lines 12–15 from the Python implementation as follows. Two important properties of the right-hand side are 1) all the constituents (i_n, i_r, i_i, h_n, h_r, h_i) are vectors of the same dimension; and 2) all the operations are pointwise. It follows that the operations for a given index are independent from the operations at other indices. This principle can be visualized in the following diagram, where the red arrow suggests that the operations on the second index happen independently. We will call such an index a pointwise index. The same principle applies when the batch size is greater than 1: the pointwise operations at a particular index for a particular batch element are independent from all other operations.

We can take advantage of this fact by parallelizing the vector operations across multiple threads on a GPU — one thread for each index computation. This is the key to why GPU programming is so effective for RNNs, since pointwise operations are prevalent in most recurrent architectures. Let's see how this is done.

In CUDA programming, the GPU is conceptually broken down into blocks of threads. Here's a visualization of 4095 blocks, with 255 threads in each block. The dimension of blocks can be more than 1; here's what having a block dimension of 2 (a grid of blocks) looks like. Now we need to assign the computation for each pointwise index to a particular thread. Let's dive into the CUDA code for the GRU forward pass to see how this is done. The code consists of two functions.
gru_cuda_forward is the entry point; it is executed on the CPU. It calls the kernel function gru_cuda_forward_kernel, which is executed on the GPU. Kernel functions have the __global__ keyword in their declarations. I've left out the implementation of the kernel function and will come back to it later.

Since gru_cuda_forward is the entry point, we will go over it first. Lines 12–19 are essentially the same as lines 4–7 from our PyTorch implementation. The implementation of the matrix multiplication operator torch::addmm is highly optimized, so we make use of it. Lines 18–19 simply reshape our results into a format that will be easier to work with. Lines 21–24 initialize vectors, including the new state, that the kernel function will populate later. Lines 26–27 partition the GPU into a grid of (m x n) blocks, each block containing 1024 threads. The horizontal grid dimension n equals the batch size, while the vertical grid dimension m is the minimum number of blocks that, concatenated together, match or surpass the state size, given that the single unit of length is the thread. This way, each pointwise index can be mapped to a unique thread.

A subtle point is that due to the way the number of blocks is calculated on line 27, the total number of threads in a column may be greater than the actual state size. This is acceptable since it has no harmful effects. However, the reverse does not hold — if a column contains fewer threads than the state size, then there are fewer threads than pointwise indices, making the mapping from threads to every pointwise index impossible.

On lines 29–38, the kernel function is called and is executed on all the threads in the grid simultaneously. Inside the kernel function, the programmer has access to the block and thread IDs of the current thread, which can be used to map the thread back to its pointwise index. All the inputs to the kernel function are represented as single-dimensional vectors, which makes indexing a bit convoluted. For example, in order to access the element gate[batch][column] inside the kernel, we would need to do gate[batch index * state_size + column]. Besides being verbose, this expression needs additional variables such as state_size to be passed into the kernel as arguments. Fortunately, ATen provides a way to index into a tensor efficiently without converting it to a single pointer, through packed accessors. On lines 31–37, the input vectors are transformed into accessors. Let's dissect the last input accessor on line 37:

new_h.packed_accessor<scalar_t,2,torch::RestrictPtrTraits,size_t>()

Here, an accessor is created for new_h, while asserting that it is a 2-dimensional tensor with scalar elements. This allows us to call new_h[batch][column] inside the kernel.

Now, let's take a look at the kernel function itself (a simplified standalone sketch of such a kernel is also given at the end of this post). It takes accessors as input arguments. On lines 11 and 13, the block and thread IDs are mapped to the pointwise index (which consists of the batch index and the column index). Hopefully, the calculation for the column index makes sense, given that blockDim.x equals the number of threads in a block. Line 15 ensures that computation only takes place if the column index is smaller than the state size. This check is necessary since the number of threads in a column may be greater than the state size. Inside the conditional, we simply compute the operations from equation 1 for the pointwise index and place the answers into the output vectors at that index.
Once all the threads complete their kernel executions, the output vectors are returned to the caller. That's it! We have essentially optimized our forward pass by 1) fusing multiple operations into the same kernel; and 2) parallelizing the kernel function across multiple threads on a GPU.

Performance Comparison

Our hope was to speed up our forward pass by building a customized fused CUDA kernel for it. Let's see if that holds true. Here's the average time it takes for the forward pass to execute on GPU for the various implementations:

PyTorch library: 93 μs
CUDA: 178 μs
Python: 332 μs

The CUDA optimization yielded approximately a 2x speedup over the Python implementation. The interesting thing to note is that the customized kernel still lags behind the PyTorch library in performance. I plan to investigate the cause of the difference in the coming weeks, and will follow up with another post if the investigation yields fruitful results. Meanwhile, I hope this article helps you understand how GPU programming can help speed up RNNs, both in principle and in practice. The code used in this tutorial can be found here.
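As promised above, here is a simplified, standalone sketch of the fused pointwise part of a GRU cell, written with plain float pointers instead of ATen packed accessors so it can be read in isolation. The gate layout (i_r, i_i, i_n from the input projection and h_r, h_i, h_n from the hidden projection) follows the naming used in this post and is an assumption about the buffer layout, not the exact tutorial code.

__device__ __forceinline__ float sigmoidf(float x) {
    return 1.0f / (1.0f + expf(-x));
}

// One thread per (batch, column) pointwise index.
// Each gates_* buffer holds 3 concatenated gates of size state_size per batch element.
__global__ void gru_pointwise_kernel(const float* gates_x,   // [batch, 3*state_size]
                                     const float* gates_h,   // [batch, 3*state_size]
                                     const float* old_h,     // [batch, state_size]
                                     float* new_h,           // [batch, state_size]
                                     int state_size) {
    const int batch  = blockIdx.y;
    const int column = blockIdx.x * blockDim.x + threadIdx.x;
    if (column >= state_size) return;

    const float* gx = gates_x + batch * 3 * state_size;
    const float* gh = gates_h + batch * 3 * state_size;

    float i_r = gx[column];                  // reset gate, input part
    float i_i = gx[column + state_size];     // update gate, input part
    float i_n = gx[column + 2 * state_size]; // new gate, input part
    float h_r = gh[column];
    float h_i = gh[column + state_size];
    float h_n = gh[column + 2 * state_size];

    float r = sigmoidf(i_r + h_r);
    float z = sigmoidf(i_i + h_i);
    float n = tanhf(i_n + r * h_n);

    float h_prev = old_h[batch * state_size + column];
    new_h[batch * state_size + column] = (1.0f - z) * n + z * h_prev;
}

// launch sketch:
// dim3 block(1024);
// dim3 grid((state_size + 1023) / 1024, batch_size);
// gru_pointwise_kernel<<<grid, block>>>(gates_x, gates_h, old_h, new_h, state_size);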
Boosting Performance with CUDA: A Deep Dive into Vector Addition on GPU vs. CPU

Dhanush

Introduction

In the world of high-performance computing, the ability to process large datasets efficiently is paramount. Graphics Processing Units (GPUs) have become a go-to solution for parallel computing tasks due to their massive parallelism, especially when compared to traditional CPUs. This article explores the performance benefits of using GPUs for vector addition, a fundamental operation, by comparing it to the CPU.

Understanding the Code

The code snippet provided demonstrates how to perform vector addition on both the CPU and the GPU. Here's a step-by-step breakdown:

1. Vector Initialization

The code begins by initializing two vectors, one and two, each containing n integers. These vectors will be added together to produce a third vector, sum_of_two, which stores the results.

2. CPU Vector Addition

Before diving into the GPU implementation, the code performs vector addition on the CPU to provide a baseline for comparison. This is done using a simple loop:

for (int i = 0; i < n; i++) {
    sum_of_two_cpu[i] = one[i] + two[i];
}

This loop adds corresponding elements of one and two and stores the result in sum_of_two_cpu.

3. Memory Allocation on GPU

To utilize the GPU, memory must be allocated on the device. The following lines of code allocate memory for the vectors on the GPU:

cudaMalloc((void**) &dev_a, n*sizeof(int));
cudaMalloc((void**) &dev_b, n*sizeof(int));
cudaMalloc((void**) &dev_c, n*sizeof(int));

Here, cudaMalloc allocates memory on the GPU for vectors a, b, and c, which correspond to one, two, and sum_of_two on the CPU.

4. Data Transfer from Host to Device

Next, the data from the host (CPU) needs to be copied to the device (GPU). This is done using cudaMemcpy:

cudaMemcpy(dev_a, one.data(), n*sizeof(int), cudaMemcpyHostToDevice);
cudaMemcpy(dev_b, two.data(), n*sizeof(int), cudaMemcpyHostToDevice);

This step is crucial because, unlike the CPU, the GPU operates on its own memory space.

5. GPU Vector Addition

The vector addition on the GPU is performed by the add_vec kernel function:

__global__ void add_vec(int* a, int* b, int* c, int n) {
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    if (tid < n) {
        c[tid] = a[tid] + b[tid];
    }
}

Each thread on the GPU is responsible for adding a pair of elements from the vectors a and b, and storing the result in c. The kernel is launched with a specific number of threads per block and blocks per grid, calculated as:

int threads_per_block = 256;
int blocks_per_grid = (n + threads_per_block - 1) / threads_per_block;

6. Timing the Operations

To compare performance, the code measures the time taken by both the CPU and the GPU to complete the vector addition. The CPU section is bracketed with std::chrono time points, and the GPU section with CUDA events (a fuller timing sketch is given at the end of this article):

auto start_cpu = high_resolution_clock::now();
// ... CPU vector addition ...
auto end_cpu = high_resolution_clock::now();

cudaEvent_t start_gpu, stop_gpu;
cudaEventCreate(&start_gpu);
cudaEventCreate(&stop_gpu);
cudaEventRecord(start_gpu);

7. Data Transfer from Device to Host

After the computation, the result from the GPU is transferred back to the host:

cudaMemcpy(sum_of_two.data(), dev_c, n*sizeof(int), cudaMemcpyDeviceToHost);

8. Memory Deallocation

Finally, the code deallocates the memory on the GPU:

cudaFree(dev_a);
cudaFree(dev_b);
cudaFree(dev_c);

Why GPUs Excel in Large-Scale Operations

GPUs are designed to handle thousands of threads simultaneously, making them ideal for tasks that can be parallelized, such as vector addition.
The key reasons for their superior performance in large operations include massive thread-level parallelism, high memory bandwidth, and the ability to hide memory latency by switching between warps of threads.

Conclusion

This code demonstrates the power of GPUs in handling large-scale operations like vector addition. By leveraging CUDA, you can achieve significant performance gains, especially as the size of the dataset increases. Understanding how to manage memory transfers between the CPU and GPU and how to efficiently utilize GPU resources is key to unlocking these performance benefits. For those interested in exploring the code further, you can find the complete implementation on GitHub: CUDA Vector Addition — CPU vs. GPU.
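For completeness, here is a fuller sketch of the timing pattern referenced above, measuring the CPU loop with std::chrono and the kernel with CUDA events. Variable names mirror the article's; the surrounding allocation and transfer code is assumed to exist as shown earlier.

#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>
using namespace std::chrono;

// ... vectors one, two, sum_of_two_cpu and device buffers dev_a, dev_b, dev_c
//     are assumed to be set up as in the walkthrough above ...

// CPU timing with std::chrono
auto start_cpu = high_resolution_clock::now();
for (int i = 0; i < n; i++) {
    sum_of_two_cpu[i] = one[i] + two[i];
}
auto end_cpu = high_resolution_clock::now();
double cpu_ms = duration_cast<microseconds>(end_cpu - start_cpu).count() / 1000.0;

// GPU timing with CUDA events
cudaEvent_t start_gpu, stop_gpu;
cudaEventCreate(&start_gpu);
cudaEventCreate(&stop_gpu);

cudaEventRecord(start_gpu);
add_vec<<<blocks_per_grid, threads_per_block>>>(dev_a, dev_b, dev_c, n);
cudaEventRecord(stop_gpu);
cudaEventSynchronize(stop_gpu);   // wait until the kernel has finished

float gpu_ms = 0.0f;
cudaEventElapsedTime(&gpu_ms, start_gpu, stop_gpu);

printf("CPU: %.3f ms, GPU kernel: %.3f ms\n", cpu_ms, gpu_ms);

cudaEventDestroy(start_gpu);
cudaEventDestroy(stop_gpu);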
How to Increase Computational Efficiency for PReLU in CUDA — OneFlow Performance Optimization

OneFlow, published in CodeX. Written by Zekang Zheng; translated by Yanjun Hu and Jiali Shen.

PReLU is an activation function that is frequently used in InsightFace. It has two operating modes: PReLU(1) and PReLU(channels). For the latter, PReLU is equivalent to a binary broadcast operation. InsightFace adopts the second mode of PReLU. We wrote about how to optimize CUDA elementwise operations before. Today, we are going to talk about optimizing broadcast operations in CUDA.

1 Naive implementation

Here is a naive implementation. First, get the index of the current element x in the loop. Second, infer the index of the corresponding alpha weight. Third, return the result depending on the sign of x: if x > 0, return x; otherwise, return alpha * x. (A simplified sketch of this kernel is given at the end of this article.)

Note: in CUDA, integer division comes with high computational cost. As you can find in the CUDA Toolkit Documentation (Chapter 5.4.1, Arithmetic Instructions): "Integer division and modulo operation are costly as they compile to up to 20 instructions." Calculating the index of alpha involves one integer division and one modulo operation, and these computations represent half of the workload of the kernel. Thus, in what follows, we are going to show how to reduce integer divisions and modulo operations while increasing read/write bandwidth using a vectorized method.

2 Optimizing by PackType vectorization

Here is a simple case: if the input tensor shape is (1, 2, 4, 4), then the operating mode will be PReLU(2). What's obvious is that the input on the H and W dimensions is contiguous. In this situation, if inner_size is divisible by the pack size, the elements in a pack will all be applied to the same alpha weight, just as the following figure shows. In this way, vectorized operations are advisable to improve read/write bandwidth utilization. And the alpha index for each pack only needs to be calculated once, which cuts down considerable computation, since we no longer need to calculate it element by element. (A packed variant is also included in the sketch at the end of this article.)

Let's compare the results before and after the optimization using Nsight Compute. After testing data of shape (96, 64, 112, 112) on an A100-40GB GPU, we got the performance results of the two kernels shown in the following figure: the blue one is optimized by vectorization, while the green one is the naive implementation. It's clear that the optimization reduces 20%-30% of the computation and boosts throughput by 30%. Besides, the optimized kernel's bandwidth reaches 1350GB/s, which gets very close to A100's theoretical limit. However, not all tensor shapes support the vectorized path: if inner_size is not divisible by the corresponding pack_size, the shape has to fall back to the naive implementation.

3 Benchmark

When conducting a benchmark test on the NVIDIA A100-40GB GPU, we compared the performance of OneFlow and PyTorch on the different tensor shapes used in the InsightFace library. We can see that OneFlow, empowered by the optimized PReLU activation function, delivers better performance than PyTorch by nearly 200% in most conditions.
As for the last testing example, the tensor shape is so special that the vectorized optimization doesn't apply, so OneFlow performs on a par with PyTorch.

Welcome to visit OneFlow on GitHub and follow us on Twitter and LinkedIn. Also, welcome to join our Discord group to discuss and ask OneFlow-related questions, and connect with OneFlow contributors and users all around the world.
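As referenced above, here is a simplified sketch of the two kernels discussed in this article: a naive elementwise PReLU(channels) for an NCHW tensor, and a packed variant that computes the alpha index once per pack of 4 elements. This is an illustration written for this post, not OneFlow's actual implementation; the names and the float4-style packing are assumptions.

// Naive: one integer division and one modulo per element to find the channel.
__global__ void prelu_naive(const float* x, const float* alpha, float* y,
                            int n, int channels, int inner_size) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int c = (i / inner_size) % channels;       // costly div + mod per element
    float v = x[i];
    y[i] = v > 0.f ? v : alpha[c] * v;
}

// Packed: requires inner_size % 4 == 0, so all 4 lanes share one alpha.
__global__ void prelu_pack4(const float4* x, const float* alpha, float4* y,
                            int n_pack, int channels, int inner_size) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pack) return;
    int c = (i * 4 / inner_size) % channels;   // one div + mod per 4 elements
    float a = alpha[c];
    float4 v = x[i];
    float4 r;
    r.x = v.x > 0.f ? v.x : a * v.x;
    r.y = v.y > 0.f ? v.y : a * v.y;
    r.z = v.z > 0.f ? v.z : a * v.z;
    r.w = v.w > 0.f ? v.w : a * v.w;
    y[i] = r;
}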
CUDA: Shared memory

Rustam

CUDA shared memory is a type of memory accessible to all threads within the same block. It resides on the GPU chip itself, making it significantly faster to access than off-chip global memory. The available size depends on the GPU architecture and configuration. For example, on a GPU with compute capability 8.6 (Ampere architecture), the shared memory capacity per SM is up to 100 KB and the maximum shared memory per thread block is 99 KB. The maximum amount of shared memory available per thread block is typically smaller than the maximum shared memory partition available per Streaming Multiprocessor (SM): 1 KB of shared memory is reserved for system use and is not made available to thread blocks.

Dynamic & Static

A static allocation looks like this:

__shared__ int sharedMem[256];

But static allocation has a restriction: you cannot allocate more than 48KB of shared memory this way; it leads to a compile-time error, "uses too much shared data". If you need multiple dynamically allocated memory regions within a block, you must use pointers to offsets within a single shared memory allocation:

// kernel
extern __shared__ int sharedMem[];
int* v1 = &sharedMem[0];
int* v2 = &sharedMem[10];
...
// specify the memory size as the 3rd parameter on kernel launch
kernel<<<BLOCKS, THREADS, 20 * sizeof(int)>>>(...);

Also, if you need more than 48KB of dynamic shared memory, you need to call cudaFuncSetAttribute before launching the kernel:

cudaFuncSetAttribute(KERNEL, cudaFuncAttributeMaxDynamicSharedMemorySize, SIZE);

Occupancy

When configuring an application, it's crucial to consider the resources available on each Streaming Multiprocessor (SM). One important resource to manage is shared memory, which is distributed among the thread blocks residing on the SM. If a thread block requires more shared memory than is available on an SM, that block won't be scheduled and will wait for the first SM with enough free shared memory to host it.

Warp & Banks

Shared memory in CUDA consists of 32 banks, organized such that successive 32-bit words map to successive banks. Each bank has a bandwidth of 32 bits per clock cycle. A warp is a group of 32 threads within the same thread block. When threads access shared memory, the access pattern across the threads in the warp can impact memory performance due to bank conflicts.

Bank Conflict

A bank conflict can occur only within the same warp. It happens when multiple threads within the same warp attempt to access different memory locations that belong to the same memory bank simultaneously. This leads to serialization of the memory accesses, reducing memory bandwidth utilization and potentially slowing down the kernel's performance. (The example in the original figure uses 8 banks, a warp size of 8, and a 2x8 data array.) To fix a conflict when accessing a 2D array by column, we can add an extra column to the matrix. While this column doesn't contain useful data for our calculations, it ensures that consecutive rows of the same column fall into different memory banks.

Nvidia Nsight

A simple example that accesses columns with a stride of 8:

sharedData[block.thread_rank() * 8] += data[globalId];

You can detect and analyze bank conflicts using Nvidia Nsight Compute. In the image below, you can observe that the source code exhibits 8-way bank conflicts, meaning the threads of a warp hit some banks 8 times. This results in additional load/store wavefronts, which are serialized and processed on different cycles, each occurring 8 times.
A wavefront is the maximum unit of work that can pass through that pipeline stage per cycle. If not all cache lines or sectors can be accessed in a single wavefront, multiple wavefronts are created and sent for processing one by one, i.e. in a serialized manner.

The fixed code block:

sharedData[block.thread_rank()] += data[globalId];

Broadcasting

When 2 or more threads in a warp access the same address in a bank, it does not result in a bank conflict. The data is broadcast with no effect on performance.

More than the word size

When dealing with data elements larger than the word size (4 bytes), such as double-precision floating-point numbers (8 bytes), the hardware still ensures proper access to the data, which may involve multiple transactions. Each transaction fetches a chunk of data, often referred to as a "cache line" or "coalesced access unit". You don't need to take care of this yourself; the hardware handles it. The same applies if the data size is less than the word size (< 32 bits).

L1/Shared Memory

Shared memory shares on-chip storage with the L1 cache. But shared memory is explicitly controlled by the programmer and used for inter-thread communication and data sharing, while the L1 cache is managed by the GPU hardware and helps improve memory access latency and bandwidth by caching data and instructions fetched from global memory. CUDA provides an API to set the carveout, i.e. the preferred shared memory capacity:

cudaFuncSetAttribute(kernel_name, cudaFuncAttributePreferredSharedMemoryCarveout, carveout);

It is considered a hint by the driver; the driver may choose a different configuration if required to execute the function or to avoid thrashing. Don't use it if you are not sure what you are doing: increasing the carveout may decrease the L1 hit rate and, consequently, result in worse performance.

Conclusion

Thanks for reading, and feel free to comment with corrections or ideas.

References

https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html
NVIDIA Ampere GPU Architecture Tuning Guide, docs.nvidia.com
https://docs.nvidia.com/nsight-compute
Run Your Python User-Defined Functions in Native CUDA Kernels with RAPIDS cuDF

Combining Python/CUDA JIT Compilation for Flexible Acceleration in RAPIDS

Jiqun Tu, published in RAPIDS AI

In this blog, we'll introduce our design and implementation of a framework within RAPIDS cuDF that enables compiling Python user-defined functions (UDFs) and inlining them into native CUDA kernels. Our framework uses the Numba Python compiler and the Jitify CUDA just-in-time (JIT) compilation library to provide cuDF users the flexibility of Python with the performance of CUDA as a compiled language. An essential part of the framework is a parser that takes a function in one of the CUDA intermediate representation stages, compiled from the Python UDF, and turns it into an equivalent CUDA device function that can be inlined into native CUDA C++ kernels. Our approach makes it possible for Python users without CUDA programming knowledge to extend optimized dataframe operations with their own Python UDFs, which enables more flexibility and generality for high-performance computations on dataframes in RAPIDS. We start by giving examples of how to use the feature, followed by the goals we intend to achieve. Finally, we explain how things work in the background to make the feature possible.

How to Use the Feature

The feature is built into the framework of RAPIDS cuDF and is easy to use. Once a dataframe is created, simply call the interfaces that support this feature with the user-defined Python function. In the following, we give examples with applymap and rolling.

The applymap example:

>>> import cudf
>>> import cudf.core
>>> from cudf.core import Series
>>> import numpy as np
>>> a = Series([9, 16, 25, 36, 49], dtype=np.float64)
>>> a.applymap(lambda x: x ** 2)
0      81.0
1     256.0
2     625.0
3    1296.0
4    2401.0
dtype: float64
>>> a.applymap(lambda x: 1 if x in [9, 44] else 2)
0    1
1    2
2    2
3    2
4    2
dtype: int64

The rolling example:

>>> def foo(A):
...     sum = 0
...     for a in A:
...         sum = sum + a
...     return sum
...
>>> a.rolling(3, 1, False).apply(foo)
0      9.0
1     25.0
2     50.0
3     77.0
4    110.0
dtype: float64

What the Feature Intends to Achieve: Flexibility and Performance

Ahead-of-Time Compilation

Traditionally, with ahead-of-time compilation, CUDA kernels are compiled into SASS machine-level code at compile time and launched at run time. In cases where operator functions need to be called by kernels, the use of function pointers or stack frames, which usually hurts performance, is avoided by inlining the operator function. Performance is achieved, however, at the price of flexibility. Often the operator function is not known at compile time. In most cases, the program does not reach the end users until run time, and it is the users who decide what operator function is needed. With ahead-of-time compilation, users cannot write their own operator function without recompiling the whole program if they want to keep maximum performance.

Just-in-Time Compilation

Just-in-time (JIT) compilation, or run-time compilation, helps. Utilizing CUDA runtime compilation (NVRTC) and the Jitify library, the code string of the operator function, written at run time, can be inlined into the code string of the kernel before the combination is compiled at run time, and then launched with the same performance as a corresponding traditional native CUDA kernel.
Flexibility and performance are both achieved, with the only overhead being the time needed to perform the run-time compilation.

Combine Python and CUDA

Combining the flexibility of Python as an interpreted language and the performance of CUDA as a compiled language gives broader coverage. A Python UDF can be written, without any knowledge or even awareness of CUDA, then compiled and inlined into carefully optimized pre-defined CUDA kernels and launched on GPUs with maximum performance, as shown in the usage examples. For more information about how Python is added to the workflow on top of the NVRTC/Jitify framework, check out my NVIDIA DevBlog on the topic.

A Performance Benchmark of applymap

We compare the performance of pandas.apply with cudf.applymap for dataframes with large numbers of rows, and the latter achieves a significant speedup over the former. The benchmark is measured on an Intel(R) Xeon(R) Gold 6128 CPU and an NVIDIA Quadro GV100 GPU. Note that these results do not include the overhead of JIT compilation. This overhead is a one-time cost paid only on the first execution of the feature with a specific UDF.

Conclusion

Utilizing the benefits of Python and CUDA, the combined Python/CUDA JIT compilation in RAPIDS cuDF allows users to apply their Python functions to dataframes on NVIDIA GPUs with great flexibility while achieving maximum performance. The feature's idea of combining Python and just-in-time compilation applies beyond the scope of dataframe extract, transform, load (ETL) and potentially has many more use cases.
CUDA Neural Network Implementation (Part 1)

Paweł Luniak

When you want to try out some neural network architecture, your method of choice will probably be to take some popular deep learning library (TensorFlow, PyTorch, etc.) and implement your network in Python. Well, that seems like the right way to do it, but have you ever wondered what happens under the hood of these libraries? Let's face the truth: the core of these libraries is not written in Python. The vast majority of the critical code is implemented in C/C++. What is more, if you want to really speed up the computations, you need to take advantage of your GPU. And here, the CUDA platform comes into play. In this post I'm going to present a very simple implementation of a feedforward network using CUDA. In the next part I will focus more on ways we can optimize this simple implementation. I assume the reader is familiar with the main neural network concepts and with C++. Complete code is available at GitHub.

Table of Contents

What is CUDA
CUDA Programming Model
Implementation Plan
Implementation
Training
Conclusion

What is CUDA

CUDA is a parallel computing platform intended for general-purpose computing on graphics processing units (GPUs). It stands for Compute Unified Device Architecture and is developed by NVIDIA. But let's start from the beginning. Parallel computing is a method of performing computations where many operations are carried out simultaneously. A GPU is an example of a device able to perform such computations. Originally, GPUs were created in order to assist in the process of image generation. Nevertheless, today they can be harnessed to perform many other computations, which is called general-purpose computing on graphics processing units (GPGPU). In 2006 NVIDIA announced the CUDA architecture along with a C language dialect dedicated to GPUs. A very similar technology called OpenCL was published in 2009 and, unlike CUDA, is implemented in hardware produced by different companies, not only by NVIDIA.

GPU vs. CPU

One may ask why we even bother about GPUs if CPUs are so efficient today. They are, but in terms of sequential processing. In many applications we need to perform a vast number of exactly the same operations, e.g. add two very big vectors with 1M elements each. It turns out that we can get our result much faster using a lot of weak computing units, where every single unit takes care of just 1000 vector elements, rather than having one powerful unit that has to compute all 1M results. "Strength in numbers", they say. While most CPUs have 4 or 8 cores, a GPU can have even 3072 cores (NVIDIA Tesla M40) optimized for massively parallel computations. In the video below you can see a really vivid explanation of the difference between CPU and GPU.

CUDA Programming Model

One of the most important concepts in CUDA is the kernel. A kernel is just a function that is executed in parallel by N different CUDA threads. When a kernel is called, we have to specify how many threads should execute our function. We distinguish between device code and host code. Device code is executed on the GPU, and host code is executed on the CPU. Let's look at an example (a sketch of what such a kernel might look like is given below). The kernel is defined with the __global__ keyword, which means that this function runs on the device and is called from the host code. We can call this function with addVectors<<<blocks_number, threads_in_block>>>(...), so in the example we run 1 block of N CUDA threads.
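A minimal vector-addition kernel of the kind described above might look like this (a sketch written for this walkthrough, not the post's original listing):

#define N 1024

// device code: runs on the GPU, one thread per vector element
__global__ void addVectors(const float* a, const float* b, float* c) {
    int i = threadIdx.x;          // index of this thread within its block
    c[i] = a[i] + b[i];
}

// host code: launch 1 block of N threads
// addVectors<<<1, N>>>(dev_a, dev_b, dev_c);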
Inside our kernel function we can obtain the index of the current thread with threadIdx.x. In this case our thread block has just one dimension of N threads, but we can also execute multidimensional thread blocks. For instance, if we need to operate on matrices it is much more convenient to run a block of NxM threads and then obtain the matrix row and column with col = threadIdx.x; row = threadIdx.y.

Thread Hierarchy

In a single block we can have up to 1024 threads. Therefore, we usually need more blocks in order to execute many more threads. Just as we can have multidimensional thread blocks, we can have a multidimensional grid of blocks as well. For convenience we can use the dim3 type to define either a grid of blocks or a block of threads. Let's refactor our kernel invocation to use a 25x25 grid of blocks, each of them with 32x32 threads. In the figure below you can find a visualisation of this whole concept of a grid of blocks and blocks of threads. As you can see, we have many blocks in a grid and every single block consists of a number of threads. We can lay out these elements in a 1D, 2D or 3D manner.

Memory Hierarchy

There is one more thing we have to cover to complete this quick CUDA programming introduction — memory. The CPU has its RAM memory at its disposal, and the GPU has its own, separate RAM memory. These memories have different address spaces: the CPU cannot easily access data that resides in GPU memory and vice versa. Therefore, we need to manage the transfer of data from CPU memory to GPU memory and the other way around. We also need to allocate host memory and device memory separately. So let's make our code complete and make sure that the vectors' elements will be accessible from within the kernel, i.e. on the device (GPU). cudaMalloc(...) allows us to allocate memory on the device, and cudaMemcpy(...) is used for data transfer between host and device. And that's it — our two vectors can now be correctly added, and we can access the result of this operation in host code.

The memory we are using here is called global memory. It has the biggest capacity (several GB) and all CUDA threads have access to it, but it is also the slowest one. If you want your CUDA kernels to be fast, memory access performance is what you should really care about. Each thread block has shared memory visible only to its threads. It is much faster than global memory, but it also has much lower capacity (several dozen kB). Every single thread is also equipped with private local memory that is the fastest, but it is visible only to a single thread. With CUDA 6, NVIDIA introduced unified memory — a pool of managed memory that can be shared between CPU and GPU. It is accessible from both the CPU and the GPU, and it really simplifies memory management when writing CUDA programs. To allocate space in unified memory we have to use the cudaMallocManaged() function. If you want to know more about it, check the Further Reading section.

Implementation Plan

We already know the fundamentals of CUDA programming. We can use that knowledge to prepare a simple feedforward neural network implementation and harness GPU power for this purpose. Let's say we want to be able to build the network shown in figure 3. It is just a linear layer followed by a ReLU activation and one more linear layer followed by a sigmoid activation. What more do we need? Ah yeah — a cost function. Let's say we would like our network to solve a binary classification problem, therefore we will use the binary cross-entropy function.
In order to implement a whole neural network we will need following classes: Implementation We will go through implementation of every class listed above, but to keep this post on track we will skip some implementation details. Complete working code along with unit tests is available at github. Let’s start with Matrix class implementation. Matrix Class We need some data structure that will keep all the numbers and parameters — a matrix. The important thing here is that we want a matrix to be available both on host memory (for CPU) and on device memory (for GPU). Most operations will be performed on device but we might need e.g. to initialize a matrix on host, just because it will be easier. Of course we could implement everything on device, but we want to keep it simple. Matrix class will manage memory allocation and make data transfer between host and device easier. On listing 4 you can see a Matrix class header. As you can see we are using smart pointers (std::shared_ptr) which will count references for us and deallocate memory when suitable (both on host and on device). In a while you will see how we will allocate device memory with smart pointer. The most important functions here are allocateMemory() (allocate memory on host and on device) and two functions for data transfer i.e. copyHostToDevice() and copyDeviceToHost(). A Shape is just a structure that keeps X and Y dimensions. allocateMemoryIfNotAllocated() function checks whether memory is already allocated and if not it will allocate memory for a matrix of a given shape. This will be useful when we won't know the matrix shape upfront. For convenience we overload subscript operator [] to have easy access to matrix values (from data_host). Let's look at functions performing memory allocation. In listings 5 and 6 you can see host memory allocation and device memory allocation using smart pointers. As you can see in the listing 5 we have an ordinary memory allocation with new operator. We are passing pointer to an allocated memory to shared_ptr (1st argument). As smart pointer by default will call delete operator we need to pass also a second argument that will tell how to perform deallocation. We can put here a pointer to a function or, as we did here, enter a lambda expression with delete[] operator. In case of device memory allocation we need to perform analogous operations, but on device, i.e. for GPU. Firstly we allocate memory using cudaMalloc(...) function, then we pass a pointer to allocated memory space to shared_ptr. Again we are passing lambda expression as the second argument but this time we are deallocating device memory, so we need to use cudaFree(...) function instead of delete[] operator. Memory allocation is the most important and usefull thing when it comes to Matrix class. allocateMemory() function just call two presented above functions. When it comes to copyHostToDevice() and copyDeviceToHost() functions they just call cudaMemcpy(...) function in suitable direction, i.e. from device to host or the other way around. Layers Classes Every class that will implement any neural network layer has to perform forward and backward propagation. What is more we want NeuralNetwork class to treat every layer the same way. We don’t want to care about implementation details of layers classes, we just want to pass some data to every of them and get some results. All right, it sounds like we need polymorphism. We need an interface for all network layers — you can see it in the listing 7. 
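Listings 4 through 7 are not reproduced in this extract, so below is a hedged sketch of the two pieces they describe: the device-side allocation with a smart pointer, and the common layer interface. Function and member names are assumptions wherever the text does not give them.

// device-side allocation with a cudaFree deleter (cf. listing 6)
void Matrix::allocateCudaMemory() {
    if (!device_allocated) {
        float* device_memory = nullptr;
        cudaMalloc(&device_memory, shape.x * shape.y * sizeof(float));
        // the lambda deleter releases GPU memory when the last reference dies
        data_device = std::shared_ptr<float>(device_memory,
                                             [](float* ptr) { cudaFree(ptr); });
        device_allocated = true;
    }
}

// the interface every layer implements (cf. listing 7); the learning-rate
// parameter of backprop is an assumption
class NNLayer {
protected:
    std::string name;
public:
    virtual ~NNLayer() = default;
    virtual Matrix& forward(Matrix& A) = 0;
    virtual Matrix& backprop(Matrix& dZ, float learning_rate) = 0;
    std::string getName() { return this->name; }
};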
To sum this interface up, every layer is required to have forward(...) and backprop(...) functions. Each layer has also some name. Sigmoid Layer Both activation layers we are implementing are very simple. Sigmoid layer should compute sigmoid function for every matrix element in the forward pass. The sigmoid function is: In the backward pass we need to use the chain rule which is foundation of backpropagation. We want to compute an error introduced by layer’s input Z. We will denote this error as dZ. According to the chain rule: Where J is a cost function and dJ/dsigma is an error introduced by sigmoid layer — we obtain it as backprop(...) function argument and we denote it as dA. Below you can see how SigmoidActivation class header looks like. Note that we want to store layer’s input Z as well as its output A. We want also to store output of backpropagation from this layer, this is denoted here by dZ (because we are calculating error introduced by layer's input Z). I am using here following convention: Z denotes output of a linear layer, A denotes output of activation layer. In general we can write operation performed by sigmoid function as A_n = sigma(Z_n), where Z_n = W_n A_(n-1) + b_n is the output from linear layer. We start by implementing CUDA kernels for our sigmoid layer. In the listing 9 you can see forward pass kernel. The implementation is rather straightforward. We calculate index for current thread that is executing the kernel, then we check if this index is within matrix bounds and compute sigmoid activation. Every CUDA thread compute a single output. What might be new here is a function with __device__ keyword. This is a function that can be called only on device, e.g. by our kernel function, and is executed on device. On the other hand __global__ functions can be called from host and are executed on device. Backward pass logic is very similar to forward pass, the only difference is that this time we are implementing another equation. As we have kernels already implemented, we can call them in forward(...) and backprop(...) functions of SigmoidActivation class. Let's see how it is done. In forward pass we store Z input, because we will need it during backpropagation step. Then we make sure that output matrix A has allocated space and a proper shape. After that we compute number of blocks we need, so that every of them contains 256 threads. After computations we return result matrix A. There is nothing extraordinary in backprop(...) function if we compare it with forward(...) function. We just make sure that dZ matrix, that is an output of this function, is correctly allocated. Next we define number of threads and blocks and call a kernel. This pattern repeats in every layer implementation, therefore I won't list these functions for further layers. The most interesting things for us are actually kernels and this is what we will analyse hereafter. If you are interested in all details, please check the github repository. ReLU Layer ReLUActivation class has almost the same header as SigmoidActivation class, so we skip it. The main difference here is equation defining ReLU function: Note that derivative of ReLU function is 1 when x > 0 and 0 otherwise. Therefore for backpropagation, using the chain rule, we obtain following formula to implement: In the listing 13 you can find implementation of the forward pass kernel. As we already know equations, implementing CUDA kernels is quite simple and in its logic very similar to sigmoid layer kernels. 
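Listings 9 and 13 are not visible in this extract; the following is a hedged sketch of the sigmoid kernels and the ReLU forward kernel they describe. Kernel and parameter names are assumptions; the structure follows the text, with one thread per matrix element and a bounds check.

__device__ float sigmoid(float x) {
    return 1.0f / (1.0f + expf(-x));
}

__global__ void sigmoidActivationForward(const float* Z, float* A, int size) {
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if (index < size) {
        A[index] = sigmoid(Z[index]);            // A = sigma(Z)
    }
}

// backward pass: dZ = dA * sigma(Z) * (1 - sigma(Z)), same indexing scheme
__global__ void sigmoidActivationBackprop(const float* Z, const float* dA,
                                          float* dZ, int size) {
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if (index < size) {
        float s = sigmoid(Z[index]);
        dZ[index] = dA[index] * s * (1.0f - s);
    }
}

__global__ void reluActivationForward(const float* Z, float* A, int size) {
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if (index < size) {
        A[index] = fmaxf(Z[index], 0.0f);        // ReLU(x) = max(x, 0)
    }
}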
Within CUDA kernels we can use number of math built-in functions, one of them is fmaxf function. More on built-in functions you can find in CUDA Math API Documentation. Now it's time for backward pass implementation. Again, we just implement equation presented above. There is one thing worth to notice in this place. We had to use if statement here in order to check whether Z input was greater or lower than 0. This cause that some threads will execute different set of instructions than other threads. This hugely affects the kernel performance and we should in general avoid if statements in our kernels as much as possible. On the other hand the first if that checks matrix bounds is not so adverse, because most threads execute the same code anyway (most threads will have index within matrix bound). This is called thread divergence and I will write more about it in the part 2 of this post. Linear Layer LinearLayer class is more interesting than classes of activation functions because it has parameters W and b, so a bit more happens here. In particular we need to implement gradient descent to update these parameters during back propagation. As for previous layers we will start with equations. Linear layer in a forward pass should implement following equation: Where W is weights matrix, b is bias vector and A is input to this layer. Note that this time we are computing Z, not A as in case of activation layers. Furthermore we need three derivatives this time. One to find out what error was introduced by input A, and this one should be passed to preceding layer during back propagation. We need also to know what error was introduced by W and b to be able to update these parameters accordingly using gradient descent. Where dZ^i is i’th column of dZ (i’th input in a batch) and m is size of a batch. Below you can see how the header of LinearLayer class looks like. As a quick overview let’s skim over functions of this class. Of course we have forward(...) and backprop(...) functions here and additionally we have some set of getters. We have two initialization methods here, i.e. for bias initialization and for weights initialization. We will initialize bias vector all with zeros and we will initialize weights matrix with values from normal distribution with 0 mean and standard deviation equal to 1. These will be additionally multiplied by weights_init_threshold value. If you are interested in mentioned initialization methods you can find details in the github repository. I guess that computeAndStore and update functions are self-explanatory. These are just helper functions that will call relevant kernel and these are used in forward(...) and backprop(...) functions. Let's go straight to the most interesting part that are CUDA kernels. This time we don’t have to compute a function value for every matrix element independently as in case of activation functions. Now we need to compute matrices product. We should avoid synchronization of threads if it is possible, bacause it slows down our kernel. In order to omit the synchronization two threads cannot write to the same localization because otherwise we will have race conditions. On the other hand our threads can read from the same location simultaneously, i.e. values from multiplied matrices, because we know that these won’t change during computations. We can do this easily by asking every thread to calculate a single element of an output matrix. You can see it in the figure 4. Every single thread will compute a single pink dot. 
Every pink dot is a dot product of row from matrix A and a column from matrix B. In the listing 16 you can find how forward propagation is implemented as a CUDA kernel. Until now, we were creating 1D threads grid but here we create 2D threads grid. It makes things a bit easier, because by Y thread’s index we can get a row of the result matrix and by X thread’s index we get a column of the result matrix. Then we simply multiply each row element of matrix W with each column element of matrix A. Finally we add a bias to our result. That’s it. We computed output of a linear layer forward pass, i.e. Z matrix. Now we need to implement a whole backpropagation step. First of all we need to compute dA that will be passed to preceding layer. This kernel is quite similar to this for forward pass. The only difference is that we need W matrix to be transposed. Instead of making separate kernel for transposition we can simply multiply each W column by dZ columns. It will be equivalent of computing transpose(W)*dZ. Next step is to update linear layer's weights accordingly to dZ which is presented in the listing 18. We apply similar trick here to pretend that A matrix is transposed. The final step in this kernel is updating weights matrix. We are using the simplest form of gradient descent here and just subtract gradient value multiplied by learning rate from current weights matrix. The last step during backpropagation in our linear layer is performing bias vector update. Bias update kernel is very simple and we simply apply gradient descent rule. What is iteresting in this kernel is usage of atomicAdd(...) function. When we implement bias update this way, multiple threads will write to the same memory location, i.e. bias vector elements. We have to make sure that we won't have race conditions here, therefore we call atomic operation that guarantee that another thread will have access to memory location when current thread complete his addition operation. Nevertheless, using atomic operations in CUDA kernels is undesirable because it harms kernel performance. Instead of quickly performing a lot of operations, some threads need to wait for their turn. Of course, sometimes we just have to use atomic operations and CUDA provides some set of such. All right. Implementation of all neural network layers is ready. We still need a cost function. Binary Cross-Entropy We have decided to use binary cross-entropy cost function, so what do we need exactly? Well, just a function that computes a cost and a function that returns gradient accordingly to network predictions and our target values. Binary cross-entropy is defined with following equation: And by calculating its derivative we compute gradient as: Where by y-hat we denote predicted values and by y the ground truth values. The header of BCECost class is straightforward. Nothing extraordinary happens in BCE cost calculation kernel. Note that we use again built-in math function, this time logf. And below is gradient computation kernel. So far we have implemented all fundamental building blocks of our neural network. We just need one more class — NeuralNetwork, that will keep them all together. NeuralNetwork Class This class is responsible for managing all network components and also to communicate layers with each other during forward and during backward passes. Let’s look at its header declaration. Our NeuralNetwork class keeps all layers in layers vector. We can add new layers using addLayer(...) function. The class holds also cost function object. 
We could probably make this more generic to allow user to put his own cost function, but let's leave it that way for now. The most important functions here are forward(...) and backprop(...). These function are just passing output from one layer to the next one (forward pass), or to the previous one in case of back propagation. In the listing 23 you can find code of the forward(...) function. As you can see, all we do here is to iterate over every layer and pass output from one layer to another. Of course, to the first layer in the vector we pass network input X. The backprop(...) function is very similar, but we have to use reversed iterator to go through the layers vector from the last layer to the first one. Back propagation function takes as its arguments predicted and target values, computes gradient of BCE cost and then passes errors between all network layers calling their backprop(...) functions. A new thing here is cudaDeviceSynchronize() call. This function waits untill all CUDA threads end its job. It is similar to join() function when we are working with traditional threads. We need this function because we want to get cost value from host code during training time. We have to be certain that all computations are over, otherwise we can obtain some pseudorandom values. Training We have implemented everything we need to build neural network based on linear layers, ReLU and sigmoid activations and binary cross-entropy cost. Let’s create a neural network that was described at the beginning and train it to predict some values. I created CoordinatesDataset class to check whether our network is able to learn something. It just draws random coordinates in 2D space. We want to predict whether a point lies within 1st or 3rd quadrant or whether it lies within 2nd or 4th quadrant. It is time to add some layers to our network and try to train it so it will predict to which class a given point in 2D space belongs. At the beginning we create a dataset with 21 batches, every one containing 100 data points. We will use 20 batches for training and the last one as our test set to check the final accuracy score. Then we populate our neural network with some layers. First layer takes batch of points, so we have 2 inputs here and we would like to have 30 neurons in the hidden layer. Then we add ReLU activation and define new linear layer with 30 neurons that will output just a single value that will be activated with sigmoid function. The next part is a training. We are going to train the network over 1000 epochs and print the cost after every 100 epochs. After training we can check the accuracy score of our neural network. computeAccuracy(...) function just counts number of correctly predicted values and divides it by the size of output vector. You can find the output in listing 25. As you can see the neural network converges over number of epochs. The convergence is a little bit slow which is probably a result of using simple gradient descent. After 1000 epochs the network scores 93% on accuracy. With such simple dataset we can easily make it 100% with more epochs. Finally it seems that our CUDA implementation of neural network is working correctly. Conclusion In this a little bit lenghty post we have implemented simple neural network using CUDA technology and performed the crucial operations on GPU processor. GPU is great for massively parallel operations such as matrix operations. 
However, GPUs were not originally designed for general-purpose computations such as neural network implementations. This is one of the reasons why hardware companies keep working on processor architectures that are better suited to such applications and can deliver even higher performance, though that's a topic for another post. An important takeaway is that writing CUDA kernels is not as easy as it might seem at first glance. We have to be mindful of various conditions and restrictions, both to keep the computations correct and to avoid harming kernel performance. What you have seen in this post is a very simple implementation, and it can be improved; it is definitely not the implementation you would find in the repositories of popular deep learning libraries. Nevertheless, I hope it shows that there is quite a lot going on under the hood of Python deep learning libraries. I am going to write a second part of this post, where I will focus more on the performance issues and possible improvements of this implementation.

Further Reading
GitHub Repository — complete code for the project presented in this post, along with unit tests.
An Even Easier Introduction to CUDA — an excellent introduction to CUDA; it uses unified memory instead of manual data transfer between host and device.
Unified Memory in CUDA 6 — more information on the unified memory introduced in CUDA 6.
CUDA Thread Execution Model — extensive information on the CUDA thread execution model, including thread synchronization, scheduling and divergence.
CUDA Toolkit Documentation — everything you might need when working with CUDA.

Originally published at luniak.io.
Sign up Sign in Sign up Sign in How to squeeze every ounce of performance out of the robot 🤖? Mayur H Follow -- Listen Share My Journey with OpenACC and CUDA in Robotics . Hey there! If you’re into robotics like me, you’ve probably heard of OpenACC and CUDA. If not then buckle up because If you’re serious about robotics, I can’t stress enough how important it is to get comfortable with hardware optimization tools. They’ve become the daily needs in my toolkit for squeezing every ounce of performance out of the robots I work on and I’m sure they can do the same for you. Remember, every millisecond counts in robotics. Whether you’re making a simple robot arm or explore Mars, these tools can be the difference between a good robot and a great one. So let me explain why and how far you can push your robotic performance! Why I Care About Hardware Optimization? In the world of robotics, getting the most out of your hardware isn’t just nice to have — it’s essential. It’s the difference between a robot that sluggishly goes through the motions and one that zips through tasks with precision. I’ve seen firsthand how optimized robots work faster, last longer on a single charge, and handle more complex jobs without breaking a sweat (or overheating, in robot terms). Whether you’re tinkering with a vacuum bot or programming industrial arms, trust me, hardware optimization is an absolute necessity. It’s not just about performance; it’s about cost-effectiveness too. Who doesn’t want their robots to work better and cost less, right? Why I Started Using OpenACC and CUDA When I first started building robots, I hit a wall with performance. My algorithms were solid, but my robots were sluggish. That’s when I discovered the world of GPU acceleration and parallel computing. OpenACC and CUDA are both tools for parallelizing code, but they work differently: OpenACC: I started with OpenACC because it was easier to grasp. Here is a simple example of OpenACC to give you an intuition:: This is an OpenACC directive for parallelizing a loop. This simple directive parallelized vector addition, giving a significant speed boost in sensor data processing. OpenACC is a high-level, directive-based approach to parallel computing that can be used with C, C++, and Fortran. The #pragma acc parallel loop directive tells the compiler to parallelize the following loop, distributing the work across available computing resources (typically GPU cores). CUDA: When I Needed More Power As my projects got more complex, I moved to CUDA. It’s more challenging but offers finer control: Here’s a basic CUDA kernel used for the same vector addition: This is a CUDA kernel function and its invocation. CUDA is NVIDIA’s parallel computing platform and programming model for their GPUs. The __global__ keyword indicates that this is a kernel function that runs on the GPU and can be called from the CPU. The kernel function vectorAdd performs element-wise addition of two vectors a and b, storing the result in result. The if (i < n) check ensures that the thread doesn't access memory beyond the array bounds. The last line shows how the kernel is launched from the main function, using CUDA’s triple angle bracket syntax to specify the number of blocks and threads per block. This level of control allows to optimize a computer vision algorithm for real-time object detection on a onboard compute. Both snippets are implementing the same operation: parallel vector addition. 
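The two snippets being compared are not visible in this extract; below is a hedged reconstruction based on the surrounding description. The kernel name vectorAdd, the variables a, b, result, n, and the bounds check come from the text; everything else (types, launch parameters) is an assumption.

// OpenACC: a directive-based parallel loop
#pragma acc parallel loop
for (int i = 0; i < n; i++) {
    result[i] = a[i] + b[i];
}

// CUDA: an explicit kernel plus its launch
__global__ void vectorAdd(const float* a, const float* b, float* result, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {                       // guard against out-of-bounds threads
        result[i] = a[i] + b[i];
    }
}
// launch: vectorAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_result, n);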
The OpenACC version is more abstract and relies on the compiler to handle the parallelization details, while the CUDA version gives the programmer (you) explicit control over the parallelization strategy.

Technical Challenges I Faced (and you probably will too!): 1. Memory management, 2. Optimization techniques, 3. Debugging.

Real-World Applications
In my projects, I've used OpenACC for simpler tasks like basic sensor data processing. It's great for those "set it and forget it" kinds of jobs where you don't need to micromanage every computation. But when I've worked on more demanding tasks like real-time SLAM or complex path planning, that's where CUDA really shines. I remember spending days optimizing a vision system for a high-speed robot, and CUDA's fine-grained control made all the difference in reaching the performance we needed.

Choosing Between OpenACC and CUDA
Choosing between OpenACC and CUDA often feels like deciding between an automatic and a manual car. OpenACC is easier to drive, but CUDA gives you that extra control when you need it. I've found that for quick projects, or when working with a team that isn't deep into GPU programming, OpenACC is a lifesaver. But for those critical, performance-intensive applications, rolling up my sleeves and diving into CUDA has always paid off.

Why Learn Both?
Knowing both is like being able to drive both a manual and an automatic car. They exercise different skills and are useful in different scenarios. OpenACC gets you up and running quickly, while CUDA lets you fine-tune performance when every millisecond counts. In my experience, understanding both has made me a more versatile roboticist: I can quickly prototype with OpenACC and then optimize critical parts with CUDA when needed.

Tips for Beginners

🚀 Bonus
Below are some of the interview questions for robotics and computer vision roles that I have been asked in internship and full-time job interviews. These questions test not only your knowledge of OpenACC and CUDA but also your ability to apply it to complex robotics and computer vision problems. They require a solid understanding of parallel computing, algorithm design, and hardware optimization in the context of real-world robotic applications. Happy Learning 🙌🏽
Sign up Sign in Sign up Sign in Analytics Vidhya Home Newsletter About Analytics Vidhya is a community of Generative AI and Data Science professionals. We are building… Sparse Matrix-Vector Multiplication with CUDA Georgii Evtushenko Follow Analytics Vidhya -- 1 Listen Share Introduction Standard methods of differential equations discretization usually lead to systems of linear equations. General feature of produced systems is that the number of entries in each equation depends on local topological features of the discretization. Thus, the matrices generated by these systems contain a lot of zeroes (fig. 1). It’s possible to take advantage of knowledge about position of zeroes by storing matrices in special data structures. The abstract data type for these structures is called sparse matrix. While I was reading about yet another matrix format, I decided to actualize the comparison of performances of different matrix formats. This post provides an review of efficiency for basic sparse matrix data structures in the context of sparse matrix-vector multiplication (SpMV) on GPU. Data Structures for Sparse Matrices In general, SpMV performance is limited by memory bandwidth. The storage formats, which are used for the sparse matrices define SpMV algorithms. Each of these algorithms has its own granularity, which impacts performance. The primary distinction among sparse matrix representations is the sparsity pattern, or the structure of the non-zero entries, for which they are best suited. However, I’ll start with general sparse matrix formats. To access the efficiency of SpMV on different sparse matrix formats, I’ve collected performance data on general matrices from Florida Sparse Matrix Collection. All of the experiments were run on a system with NVIDIA RTX 2080 GPU paired with an Intel Core i7–7700k CPU. Each of the measurements is an average (arithmetic mean) over 30 trials. Before measuring performance, both CPU and GPU frequency were fixed.The speedup was computed by dividing single thread CSR SpMV execution time by GPU one. CSR The Compressed Sparse Row (CSR) format is a general sparse matrix format. CSR format consists of three arrays: row_ptr, columns of non-zeroes, and matrix values (fig. 2). The non-zero values of the row are stored consequentially in an one-dimensional values array. The row_ptr array is used to divide values array into separate rows. Its size is equal to n_rows + 1. The last entry in row_ptr stores a number of non-zeroes (nnz) in the matrix. That allows fast querying of non-zeroes number in a particular row (row_ptr[row + 1] − row_ptr[row]). For each non-zero value column index is stored in columns array. Let’s assume for simplicity that there are four threads in each CUDA thread block. General CSR SpMV implementation works at the granularity of threads per row (fig. 3). Hence, the matrix in figure 2 is processed by three thread blocks. This implementation is usually referenced as CSR-Scalar (list. 1). Presented implementation of CSR SpMV algorithm on GPU is usually considered very inefficient. The reasons of inefficiency are load balancing, thread divergence, and memory access pattern. As shown in figure3, only half of the block threads has non-zeroes to process. Thus, a single dense row can arbitrarily delay the execution while all the other cores are idle. Moreover, as shown in figure 3, adjacent threads access matrixvalues in a strided way. 
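Listing 1 is not reproduced in this extract; the following is a hedged sketch of the CSR-Scalar kernel it describes, with one thread per row. The array names follow the format description above; the kernel and parameter names are assumptions.

__global__ void csr_scalar_spmv(int n_rows, const int* row_ptr, const int* columns,
                                const float* values, const float* x, float* y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per matrix row
    if (row < n_rows) {
        float sum = 0.0f;
        // the non-zeroes of this row live in [row_ptr[row], row_ptr[row + 1])
        for (int i = row_ptr[row]; i < row_ptr[row + 1]; i++) {
            sum += values[i] * x[columns[i]];
        }
        y[row] = sum;
    }
}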
When concurrent threads simultaneously access memory addresses that are far apart in physical memory, then there is no chance for the hardware to combine the accesses. Performance resultsfor naive CSR-Scalar implementation are presented in table 1. The speedup distribution is shown in figures below. To answer the question how naive described implementation really is I’ve compared it with the NVIDIA CUDA Sparse Matrix library (cuSPARSE) CSR implementation (tab. 2), which has a better average speedup. These results show that there is a room for optimization of CSR SpMV. The first possible optimization is to assign warp per row instead of thread. This algorithm (list. 3) is called CSR-Vector. The vector kernel accesses indices and data contiguously (fig. 4), and therefore overcomes the principal deficiency ofthe scalar approach. Unlike the previous CSR implementation, which uses one thread per matrix row, this optimization requires coordination among threads within the same warp. In the case of CSR-Vector reduction might be implemented using warp-level primitives (list. 2). In that case, the data exchange is performed between registers and more efficient than going through shared memory, which requires a load, a store, and an extra register to hold the address. CSR-Vector has better speedup (tab. 4) and speedup distribution than CSR-Scalar (for both float and double matrices) and cuSPARSE implementation (for float matrices). However, CSR-Scalar outperforms CSR-Vector on about 33% of float matrices with 10000 nnz lower limit and on 40% of float matrices with 100000 nnz lower limit. On that matrices, CSR shows average speedup equal to 8.57 while CSR-Vector only 4.80. To discover further improvements of CSR SpMV implementation, we need to consider the first matrix part from figure 2. In the first four rows of the matrix, there is only one non-zero value per row. In that case all threads of warp except first are idle. In this case, it’s possible for naive CSR SpMV implementation to outperform vector implementation. There is an SpMV algorithm for the CSR matrix format that doesn’t depend on nnz/row ratio. The CSR-Adaptive changes it’s behavior depending on the nnz in each row (list. 4). After selecting non-zeroes per block value, additional array (row blocks) for storing rows of block is constructed. If some rows contain small nnz, they’ll be gathered into one block. Then CUDA threads block is assigned to each block of rows. The case of multiple rows in one block of rows is called CSR-Stream. If there is only one row in block of rows, the CSR-Vector will be called. If this row exceeds nnz_per_wg than CSR-VectorL variant will be used. The main difference between CSR-Vector and CSR-VectorL is that CSR-VectorL allows executing multiple CSR-VectorL on one row and then reducing the results by using atomic operations. The CSR-Vector and CSR-VectorL parts are quite similar, so I won’t include listing here. Figure 5 illustrates memory access pattern of the CSR-Stream part. It stores partial sums in shared memory of GPU and then reduces them. The partial results in cache in figure 5 are calculated with x filled with 1. Thesource code of CSR-Stream is presented in listing 5. On the discussed set of matrices, where CSR outperformed CSR-Vector, CSR-Adaptive shows better speedup. CSR-Adaptive outperforms CSR-Scalar on those 291 matrices. Although CSR-Adaptive might be outperformed by CSR-Vector on some long-row matrices, it has better speedup in average (tab. 4). 
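Listings 2 and 3 are likewise not visible here. Below is a hedged sketch of the CSR-Vector idea described above: one 32-thread warp per row, with the partial sums reduced through warp-level shuffle intrinsics so the reduction stays in registers. Names and the exact reduction code are assumptions.

__global__ void csr_vector_spmv(int n_rows, const int* row_ptr, const int* columns,
                                const float* values, const float* x, float* y) {
    int warp_id = (blockIdx.x * blockDim.x + threadIdx.x) / 32;   // one warp per row
    int lane    = threadIdx.x % 32;
    if (warp_id < n_rows) {
        float sum = 0.0f;
        // lanes stride through the row, so adjacent threads read adjacent values
        for (int i = row_ptr[warp_id] + lane; i < row_ptr[warp_id + 1]; i += 32) {
            sum += values[i] * x[columns[i]];
        }
        // warp-level reduction of the 32 partial sums, registers only
        for (int offset = 16; offset > 0; offset /= 2) {
            sum += __shfl_down_sync(0xffffffff, sum, offset);
        }
        if (lane == 0) y[warp_id] = sum;
    }
}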
The main advantage of CSR-Adaptive is that you won’t need to change the code that generates a matrix if your code already uses CSR. The matrix formats presented below don’t have this quality. ELL The problem of noncoalesced memory accesses of CSR can be addressed by applying data padding and transposition on the sparse matrix data (fig. 6). The Ellpack-Itpack (ELL) sparse matrix format assumes that each row contains at most elements in rows elements and elements in rows is small. All rows are zero-padded to that value. Unlike CSR, the rows pointers array is of no need. ELL is most efficient when the maximum number of nonzeros per row does not substantially differ from the average. Kernel for ELL matrix format is presented in the listing 6. With element padding of the ELL format, it’s easy to get the next row’s element position by simply adding the number of rows in the matrix. The padding also fixes the number of iteration for each thread, so there is no control flow divergence in warps. Elimination of control flow divergence and enabling of memory coalescing allow ELL SpMV kernel to outperform CSR-Scalar implementation on many matrices (tab. 5). The obvious disadvantage of ELL format consists of padding itself. In the case of a matrix with a few long rows, ELL format will result in an excessive number of padded elements. There are a lot of matrices in Florida Collection, that couldn’t fit into 8GB of my GPU because of ELL’s padding. In some cases, it leads to a situation where CSR-Scalar outperforms ELL implementation. To eliminate this issue, it’s possible to remove long rows’ extra nnz from ELL matrix into the different matrix. It is important to note that extracted matrix would have an unordered scheme. Many rows will likely be missing from that scheme, so CSR using would be inefficient. One of the formats that could handle that case is COO. COO The coordinate (COO) matrix format is the simplest one. For each NZ it stores it’s column and row indices. Therefore, COO doesn’t map elements in rows. That leads us to the necessity of atomic operations in COO kernel (list 7). COO SpMV implementation works at the granularity of threads per element (7). Atomic updates to the result vector reduce performance. The wider rows in COO format, the more serialized SpMV is. This fact can be noticed in figure 7. To improve the performance of this format, it’s possible to slice the matrix info chunks with the rows count that fits into shared memory. Matrix format that uses shared memory to improve atomic operations performance in COO SpMV is called Sliced COO (SCOO). To reduce shared memory bank conflicts, SCOO allows multiple lanes in the shared memory for updating the intermediate results of a single row. Reducing slice size increases the size of lane, and thus more shared memory lanes are available. Hybrid It’s possible to use ELL matrix format on the regular part of the matrix and COO on the elements removed from extra-long rows. This scheme significantly reduces the number of padded elements in ELL format. Thisapproach is often called as hybrid. There are different options for combining the results of ELL and COO SpMV. In this post I use atomic case (list. 8). Althought the average performance results (tab. 8) are quite close to CSR-Adaptive SpMV, Hybrid format requires extra actions for the splitting matrix, which might require rewriting of a matrix calculation code base. Conclusion To conclude this post, I would like to show you some misleading results. I’ve selected some matrices (tab. 
9) to show the obvious fact that there is no universal matrix format. The leader changes even with a change of data type. In my next post, I'm going to focus on block matrix formats generated by real applications. The source code and a PDF version of this post are available on github. (Sketches of the ELL and COO kernels discussed above follow below.)
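For reference, here are hedged sketches of the ELL and COO kernels (listings 6 and 7) from the earlier sections. The layouts follow the text: ELL is zero-padded and laid out so that the next element of a row is one n_rows stride away, and COO stores a (row, column, value) triple per non-zero. Names are assumptions.

__global__ void ell_spmv(int n_rows, int elements_in_rows, const int* columns,
                         const float* values, const float* x, float* y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n_rows) {
        float sum = 0.0f;
        for (int e = 0; e < elements_in_rows; e++) {
            int idx = e * n_rows + row;            // fixed trip count, coalesced reads
            sum += values[idx] * x[columns[idx]];  // padded entries hold 0 (assumption)
        }
        y[row] = sum;
    }
}

__global__ void coo_spmv(int nnz, const int* rows, const int* columns,
                         const float* values, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per non-zero
    if (i < nnz) {
        // different threads may hit the same row, hence the atomic update
        atomicAdd(&y[rows[i]], values[i] * x[columns[i]]);
    }
}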
Sign up Sign in Sign up Sign in What I wish I knew when I started programming CUDA (Part 2) Robby Follow -- Listen Share This is the second part of my CUDA series. In the first post I showed a few recourses that are helpful, narrowing down the vast field of CUDA trainings and teaching videos to the essential ones. I also tried to decipher the concept of streaming multiprocessors, blocks and threads, and how they relate to each other. Finally, I talked about warps and why they are useful. In this post, firstly I want to talk a bit on kernel launch configurations. Then I want to expand on warps and how to (not) use them appropriately, and what to look out for in data structures. Finally, I want to give a hint on which high level libraries help you design code faster. Kernel launch configurations and limits: It is known that when launching kernels, the number of blocks and threads have to be chosen. Those are called the kernel launch configurations. While the kernel launch configurations are individual to each problem, they should however remain a multitude of 32, the size of a warp. Moreover, they underlie certain limits. At this point, according to the thread hierarchy chapter of the CUDA programming guide, it is 1024 threads per block on current GPUs. There is furthermore a limit on the blocks that can be evoked, which is subject to the compute capability of the device. The exact specs can be taken from the specs of the programming guide. Currently, in the x-direction, they are limited to 2³¹-1, in the y- and z-direction they are limited by 65535 blocks. Flattened arrays and memory accesses: In the previous post I have explained how CUDA uses warps to simultaneously fetch data from memory and tries to process it simultaneously. This has a few implications on how to structure the data in use. Technically, CUDA provides the user with means to design multidimensional algorithms by embedding three-dimensional data structures in its syntax, such as the dim3-structure to specify grids and blocks, as shown in the example below. This code simply calls 32 x 32 blocks, and each one of them would have 128 x 128 threads. Indexing within this multidimensional setting goes via the next code snippet: It is easy to see that multidimensional arrays could be indexed very easily like this. They can be created just like C-style multidimensional arrays, realized via pointers to pointers. The catch however comes when warps get into the equation. Imagine for example when you want to index a 3x3 sized block of pixels of an image, like depicted in the image to the left. In the block to the left, the indexed pixels are the orange ones, whereas the blue ones are the normal surrounding ones. The code could be looking as in the example below. In that case, each time the GPU would fetch not only fetch the 9 elements of the indexed pixels, but instead fetches the data in batches, and then only uses one single element per batch to copy it into a single variable. To make matters worse, although accessing a two-dimensional pointer, the GPU memory is still more of a linear array, introducing strided memory accesses that Mark Harris is talking about in this post. An initial remedy would be to flatten out the image. Rather than having a pointer to a pointer, the data could be stored in a flat array. Knowing the dimensions of the image would be enough to determine row and column, and the unnecessary memory fetches at the edge of the two-dimensional solution would be a matter of the past. 
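A minimal sketch of the flattened indexing just described, assuming a row-major image of size width x height stored in a single array (kernel and variable names are assumptions):

__global__ void processImage(const float* img, float* out, int width, int height) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col < width && row < height) {
        int idx = row * width + col;   // one flat index instead of pointer-to-pointer
        out[idx] = img[idx];           // neighbours in a row are contiguous in memory
    }
}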
In order to alleviate the strided memory accesses, different solutions are possible depending on the problem. What could help here is the use of shared memory, a technique Mark Harris shows in order to transpose a matrix efficiently. However, it is up to the engineer how to solve this. The key takeaway is that memory accesses should be contiguous within flat structures, and that strided memory accesses should be avoided in order to make best use of warps.

Data structures: Expanding on the idea of flat structures, data structures such as structs or classes should be handled with extra care. Although a bit of a superficial example, the following code has two kernels implementing the same purpose, namely computing, for each point of an array of points, the sum of its x, y and z coordinates. The expectation is that the second kernel, which uses plain arrays, should be faster than the first kernel; an explanation follows below (a hedged sketch of these two kernels appears at the end of this section). As can be seen, the first kernel, sumPoint(), takes in a point data structure, with the components interleaved. The second kernel, however, receives three arrays, each carrying one individual component. Thus they are laid out differently in memory, and memory is fetched differently, as shown below. Because the second kernel, sumPointArray(), parallelizes the fetching of the individual components, it simply makes better use of the way a warp and SIMT work.

Thrust and supported libraries: Writing CUDA code can be an arduous process, because the standard interface offers C-style memory management. To alleviate this and reduce problems arising from forgotten pointers, C++ has supported the concept of smart pointers since the C++11 standard. The concept is simple: create a wrapper class for the pointer, and when the object goes out of scope, free the memory from the destructor. Thrust is a high-level library supporting that behavior on CUDA. The interface is written in the style of the C++ standard library (STL) vector, allocating memory on the GPU and automatically transferring the data to it. It furthermore supports operations such as resize(), push_back() and normal indexing, as well as some high-level algorithms such as sorting. This library is very good for fast prototyping as well as more legible code, and I personally have seen little to no performance drop. The documentation can be found here. Furthermore, CUDA supports other libraries, which could prove useful in areas such as signal processing or image processing. They can be found here.

This concludes my two-post series about CUDA up to now. I may add one or another chapter later, but at the moment I will keep it as it is. If you find mistakes or have questions, please do not hesitate to ask me. I hope you enjoyed my articles and learned something from them, and best of luck on your projects involving GPU programming.
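Here is the hedged reconstruction of the two kernels compared in the data-structures section above; the original listing is not visible in this extract, so the Point layout and parameter types are assumptions, while the kernel names come from the text.

struct Point { float x, y, z; };

__global__ void sumPoint(const Point* points, float* sums, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // neighbouring threads read interleaved x/y/z fields -> strided accesses
        sums[i] = points[i].x + points[i].y + points[i].z;
    }
}

__global__ void sumPointArray(const float* x, const float* y, const float* z,
                              float* sums, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // neighbouring threads read neighbouring elements of each array -> coalesced
        sums[i] = x[i] + y[i] + z[i];
    }
}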
Sign up Sign in Sign up Sign in Update Reduce Operation Lawliet Follow -- Listen Share In my last post, I talked little about the Reduce Operation (actually sum operation in my case). I will compare different strategies of Reduce Operation. First, recall the raw result after my cuda kernel finished as shown in following picture: Of course there are lots of libraries (e.g. cudpp, cub, etc.) that can do Reduce Operation (e.g. sum, minus, multiply, min, max, etc..). However, lack of them can perform in batch. Also, for each of these libraries, we need to manage several abstract pointers such as handles, plans, configs and so on. When programming with multi-GPUs, managing all these stuffs will become very trivial. Not to mention these libraries are not necessarily faster than cublas. Therefore, as cublas is indispensable in DL, why not just use cublas to finish this Reduce Op. Actually, all the algorithms I will compare here are based on matrix-matrix or matrix-vector multiplication. Since all the data in cuda is stored in 1-D layout, it’s very convenient to reorganize data depending on situations. All you need is a pointer to memory and how to interpret data pointed. For example, from the memory aspect, the raw γ in the Fig 1 is actually stored in this way: each γ_i,j here is a vector. We take one row out and reorganize it as a matrix: In cublas’s eyes, it regards all data in column major, so one row of raw γ is actually looks like right layout in Fig 2. But we want to use the left matrix in Fig 2 times with a 1s vector to get one row of final γ: Fortunately, cublas can do matrix transposition before multiplication, so we don’t need to worry about that. On the other side, cublas doesn’t have API being able to do batch matrix-vector multiplication. Only batch matrix-matrix multiplication is supported. So we have to regard the 1s vector as a one column matrix. The code is as follow: cublasSgemmBatched(cublasHandle, CUBLAS_OP_T, // the raw γ needs to be // transposited as Fig. 4 CUBLAS_OP_N, // the 1s matrix doesn't needs to // be transposited channelCount, // the number of rows of raw γ // after tranposition 1, // the number of columns of 1s imgSize, // the number of columns of raw γ // after transposition &alpha, // alpha = 1 rawGamma, // pointer to raw γ (Fig. 2) imgSize, // the number of rows of raw γ // before transposition onesMatrix, // pointer to 1s matrix imgSize, // the number of row of 1s &beta, // beta = 0 gradientGamma, // final result of γ channelCount, // the number of raw of γ channelCount) // batch size The above function will repeatedly do the matrix-matrix multiplication in Fig. 4 channelCount times. 2. For-loop Algo. — Don’t use batch At beginning, I thought the batch API must be efficient, but the speed is not as I expected. In my neural network, I have totally 6 convolution layers, 6 GDN layers and 6 down/up -sample layers. By using 1st Algo., it takes more than 2 seconds for 4 images in one iteration. I’m so impatient that can’t help trying other methods. Then I decide to use for-loop instead of batch. It’s actually similar to the first algorithm, but changes batch to for loop. oneBatchSize = imageSize * channelCount;for(int j=0; j<channelCount; j++) { cublasSgemv(cublasHandle, CUBLAS_OP_T, // the raw γ needs to be // transposited as Fig 4 imageSize, // the number of rows of raw γ // before tranposition channelCount, // the number of columns of raw γ // before transposition &alpha, // alpha = 1 rawGamma+ j*oneBatchSize, //choose one column of // raw γ (Fig 4.) 
imageSize, // leading dimension of raw γ (its number of rows before transposition)
onesVector, // pointer to the 1s vector
1, // stride of the 1s vector
&beta, // beta = 0
gradientGamma + j*channelCount, // pointer to one column of the final result γ
1)); // stride of the result
}

The advantage of this method is that we don't need matrix-matrix multiplication, only matrix-vector. If you're interested in how CUDA matrix-matrix multiplication works, you can look the algorithm up in NVIDIA's official documentation. I can only say that the matrix-matrix multiplication algorithm is less efficient here than matrix-vector multiplication, because matrix-matrix multiplication is sophisticated and involves a lot of blocking and shared-memory machinery, whereas matrix-vector multiplication is relatively simple. Although our task is matrix-vector multiplication, as mentioned in the last section, cuBLAS doesn't have a batched matrix-vector API, so in the first method we had to convert the problem to matrix-matrix. The experiment now shows that the for-loop matrix-vector multiplication is faster than the batched matrix-matrix multiplication: for the same input size and the same network configuration, the computation time drops to 800 ms, which is much faster.

3. Super fast — Just one matrix-vector multiplication
Someone as smart as you must have noticed this method already if you understood the 2nd one. If you stare at the raw γ matrix for some time and then close your eyes, the idea will come to mind. Like I said, the pointer is one of the greatest and loveliest features in C++ because of its flexibility. Without any extra manipulation of the data, all you need is to change the angle from which you look at the memory and start imagining. Therefore, all you need to do is a single matrix-vector multiplication. I think the improvement is self-explanatory: the computation time is now 200 ms. Maybe you don't know what 200 ms means here. Once, I measured the time spent on all convolution layers because I wanted to locate the bottleneck of my net; it was 100 ms in total. That means my GDN implementation is as fast as the convolution layers implemented by NVIDIA's cuDNN, which is extremely optimized. That's a huge jump. I know there must still be some room for improvement, but I cannot figure out how so far. Now I can train on 1000 images in just one day, which is much faster than I expected.

For all these improvements, there is no magic or fancy theory. However, by implementing all these different algorithms, a lot of non-intuitive experience is gained. Therefore, when analyzing and optimizing your own CUDA kernels, don't just rely on your past experience with CPU programming. Doing these experiments is very necessary; often simple changes make the difference.
Sign up Sign in Sign up Sign in Mayin Dev Follow -- Listen Share I implemented a kernel that performs a single-pass scan (proposed in the book Programming Massively Parallel Processors): Harnessing the Power of Parallel Processing with CUDA and GPUs Parallel processing is revolutionizing the way we approach computationally intensive tasks. By breaking down problems into smaller, independent parts that can be solved simultaneously, we can achieve significant speedups compared to traditional sequential processing. CUDA (Compute Unified Device Architecture), developed by NVIDIA, provides a powerful framework for leveraging the parallel processing capabilities of GPUs (Graphics Processing Units) to accelerate a wide range of applications, from scientific simulations to machine learning algorithms. This article delves into the intricacies of CUDA programming and explores its advantages in the realm of parallel processing. Understanding the Fundamentals of CUDA Programming CUDA allows programmers to write code that runs on the GPU, taking advantage of its many cores. The GPU is organized into a hierarchy of blocks and threads, each performing a portion of the computation. Programmers write kernel functions, which are executed by these threads in parallel. Effective CUDA programming requires careful consideration of memory management, data transfer between the CPU and GPU, and efficient thread organization to maximize performance. Understanding the nuances of thread synchronization and data sharing is crucial for avoiding bottlenecks and achieving optimal speedups. Properly utilizing shared memory, a faster memory space within each multiprocessor, can significantly improve performance. CUDA’s Thread Hierarchy: Blocks and Grids CUDA’s parallel execution model is based on a hierarchical structure. Kernels are launched as grids of blocks, where each block contains a number of threads. This structure allows for a high degree of parallelism, enabling the execution of thousands or even millions of threads simultaneously. Understanding how to effectively organize threads into blocks and grids is essential for optimal performance, as it directly impacts memory access patterns and overall efficiency. Improper organization can lead to significant performance limitations. Memory Management in CUDA Efficient memory management is critical for high-performance CUDA programs. Data needs to be transferred between the CPU’s main memory and the GPU’s memory, a process that can be time-consuming. Strategies for minimizing data transfers and effectively utilizing different memory spaces on the GPU, such as global, shared, and constant memory, are essential for optimizing performance. Understanding the memory access patterns of your threads is also crucial to avoid memory conflicts and bottlenecks. The choice of memory type greatly influences the speed of access and the overall efficiency of your CUDA program. Comparing CPU and GPU Parallel Processing While CPUs excel at complex tasks and sequential processing, GPUs are optimized for massively parallel computations. This difference makes them ideal for different types of problems. CPUs generally have fewer, more powerful cores, while GPUs have many smaller, simpler cores. This architectural difference leads to significant performance variations across different applications. 
Feature           | CPU                        | GPU
Number of Cores   | Few, powerful cores        | Many, smaller cores
Clock Speed       | Higher                     | Lower
Memory Bandwidth  | Generally lower            | Generally higher
Best Suited For   | Complex, sequential tasks  | Massively parallel computations

Real-World Applications of CUDA
CUDA's versatility extends across various domains. From accelerating scientific simulations and image processing to powering machine learning algorithms and enabling real-time rendering in video games, CUDA's impact is undeniable. The ability to offload computationally intensive tasks to the GPU frees up the CPU for other operations, resulting in substantial performance improvements. Many libraries and frameworks are built upon CUDA, simplifying the development process and providing access to optimized algorithms.

Advanced CUDA Techniques
Beyond the basics, mastering advanced techniques unlocks the full potential of CUDA. Optimizing memory access patterns, understanding and utilizing shared memory efficiently, and employing techniques like cooperative groups and warp-level programming can significantly enhance performance. Careful consideration of data structures and algorithms is critical to achieving optimal results. For instance, understanding how to effectively manage data dependencies within parallel kernels is crucial for preventing performance issues, and this often comes down to choosing the right thread synchronization mechanisms. "Effective CUDA programming requires a deep understanding of both hardware architecture and parallel programming paradigms." For a deeper dive into optimizing kernel performance, consider this resource: Extending a Single-Pass Scan Kernel for Independent Row-wise Scan in CUDA. This blog post provides valuable insights into advanced techniques.

Conclusion
CUDA empowers developers to harness the immense parallel processing power of GPUs, leading to significant speedups in computationally intensive applications. By understanding the fundamentals of CUDA programming, memory management, and thread organization, developers can unlock the full potential of this framework, and mastering advanced techniques further enhances performance, enabling high-performance applications across a wide range of domains. Further exploration of CUDA resources and best practices is encouraged. To learn more about parallel programming and GPU computing, consider resources like the NVIDIA CUDA Zone and the NVIDIA GPU-accelerated computing pages, as well as tutorials and courses on platforms like Coursera and Udemy.
Sign up Sign in Sign up Sign in CUDA and GPU: The Dynamic Duo for Model Training Ebad Sayed Follow -- Listen Share Using .to("cuda") to transfer data to the GPU is a common practice for accelerating mathematical operations. CPUs are quicker for single operations, but the strength of GPUs lies in their ability to perform parallel processing. CPUs are designed to execute a sequence of operations (threads) quicker but can execute only a few operations. On the other hand GPUs are built to handle millions of operations simultaneously, even though this comes at the cost of individual thread performance. This capability allows us to perform operations in parallel. For instance, when adding elements of a million-sized tensor, instead of sequentially adding each element within a loop, a GPU can add all elements simultaneously. To achieve this, we use CUDA, a platform developed by NVIDIA that enables developers to incorporate GPU support into their software applications. History of GPUs Early Years: GPUs for Graphics 1980s-1990s: GPUs were initially developed to accelerate graphics rendering, primarily for video games and visual simulations. Companies like NVIDIA and Array Technologies Incorporated (ATI now part of Advanced Micro Devices AMD) led the way in developing powerful graphics cards. Birth of CUDA: The Turning Point 2006: NVIDIA introduced CUDA (Compute Unified Device Architecture), a parallel computing platform and programming model. CUDA allowed developers to harness the parallel processing power of GPUs for general-purpose computing, marking the beginning of GPU acceleration beyond graphics. GPUs Enter Deep Learning 2010s: Researchers began to explore GPUs for deep learning. The parallel architecture of GPUs proved to be highly efficient for the matrix and tensor computations essential in neural network training.2012: AlexNet, a convolutional neural network (CNN) that won the ImageNet Large Scale Visual Recognition Challenge, was trained using two NVIDIA GTX 580 GPUs. This success demonstrated the potential of GPUs in deep learning, sparking widespread interest.2014: Google introduced its TensorFlow framework, which included GPU support, making it easier for researchers and developers to leverage GPUs for deep learning tasks. GPUs in the Era of Large Language Models (LLMs) 2018-Present: The development of large language models like BERT, GPT-2, and GPT-3 highlighted the necessity for massive computational power.2019: NVIDIA’s Turing architecture, with the release of the RTX 20 series and later the A100 based on the Ampere architecture, provided further enhancements in deep learning performance.2020: GPT-3, developed by OpenAI, demonstrated the scalability of deep learning models, requiring thousands of GPUs to train. But how do deep learning algorithms take advantage of GPUs computation performance in practice? Let’s find out! Deep learning models use mathematical operations like matrix multiplication, vector addition, etc. If we optimize these operations, we can improve the performance of these models.For example, addition of two arrays C = A + B. We can use CPU multithreading in order to run this computation in parallel. But it can only handle a few such threads simultaneously. Using GPUs we can run millions of threads simultaneously, which improves the performance of mathematical operations on huge vectors and matrices. GPU vs CPU Imagine you need to send several packages across a city. 
GPU vs CPU
Imagine you need to send several packages across a city. A motorcycle (CPU) can deliver a single package quickly, but if you have many packages to send, a delivery truck (GPU) can carry them all at once. Although the truck is slower than the motorcycle for a single package, it becomes far more efficient when handling many packages at the same time. CPUs are designed for fast sequential processing, making them ideal for tasks that need quick single-threaded performance; GPUs, on the other hand, excel at parallel processing, handling many tasks simultaneously.

Comparison video: https://www.youtube.com/watch?v=-P28LKWTzrI

Understanding CPU and GPU Design Differences
CPU (Central Processing Unit) design:
GPU (Graphics Processing Unit) design:
Practical Implications
Single-Threaded Tasks (CPU)
Parallel Tasks (GPU)

Introduction to CUDA
When you run a deep learning model, you'll often reach for a widely used Python library such as PyTorch or TensorFlow. These libraries execute C/C++ code at their core, and to speed things up further they can harness the power of GPUs. This is where CUDA comes into the picture. CUDA, short for Compute Unified Device Architecture, is a platform designed by NVIDIA for general-purpose computing on its GPUs. It lets developers use NVIDIA GPUs' computational capabilities in their software applications, going beyond graphics rendering. To achieve this, CUDA offers a straightforward interface based on C/C++ (CUDA C/C++) that exposes the GPU's virtual instruction set and specific operations such as data transfer between the CPU and GPU.

Let's compare CPU and GPU via coding
The MultiplyVectorsOnCPU function performs element-wise multiplication of two input arrays a and b, storing the result in array c; it runs sequentially on the CPU (a rough sketch of such a function is shown after the GPU code below). In CUDA, you would typically write a kernel that performs the same element-wise multiplication but executes on the GPU. Each thread in a CUDA kernel is responsible for one element of the output array, and the number of threads and how they are organized into blocks and grids is specified by the programmer, depending on the problem being solved and the hardware constraints. To port MultiplyVectorsOnCPU to the GPU, you need to write a CUDA kernel and handle memory allocation and data transfer between the CPU and GPU. So let's write the CUDA code for this operation:

Code for Single-Dimensional Threading

#include <cstdio>

__global__ void MultiplyVectorsOnGPU(float *a, float *b, float *c, int N) {
    // each thread handles one element of the output vector
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        c[i] = a[i] * b[i];
    }
}

int main() {
    int N = 64000000; // 64 million elements
    size_t size = N * sizeof(float);

    float *A, *B, *C;
    cudaMallocManaged(&A, size);
    cudaMallocManaged(&B, size);
    cudaMallocManaged(&C, size);

    // Initialize A and B
    for (int i = 0; i < N; ++i) {
        A[i] = 1.0f;
        B[i] = 1.0f;
    }

    int blockSize = 256;
    int numBlocks = (N + blockSize - 1) / blockSize;

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    MultiplyVectorsOnGPU<<<numBlocks, blockSize>>>(A, B, C, N);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float milliseconds = 0;
    cudaEventElapsedTime(&milliseconds, start, stop);
    printf("Time taken for the execution of the task on GPU is %f milliseconds\n", milliseconds);

    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}
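For comparison, the sequential MultiplyVectorsOnCPU function described above is not reproduced here, so the following is only a rough sketch of what it might look like; the exact signature is an assumption.

// Hedged sketch of the sequential CPU version described above;
// the exact signature of MultiplyVectorsOnCPU is an assumption.
void MultiplyVectorsOnCPU(const float *a, const float *b, float *c, int N) {
    // one iteration per element, executed one after another on a single core
    for (int i = 0; i < N; ++i) {
        c[i] = a[i] * b[i];
    }
}

On the GPU the loop disappears: the loop body becomes the kernel, and the loop index becomes the thread index computed from blockIdx and threadIdx.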
Now let's walk through the GPU code. MultiplyVectorsOnGPU() is a CUDA kernel function; the __global__ qualifier indicates that it is called from the host (CPU) but executed on the device (GPU).

int i = blockIdx.x * blockDim.x + threadIdx.x;

Each thread in the CUDA kernel is responsible for one element of the output vector. This line calculates the index i of the element processed by the current thread from the block index blockIdx.x, the thread index within the block threadIdx.x, and the block size blockDim.x. We then check that the calculated index i is within the bounds N of the vectors before performing the element-wise multiplication of a[i] and b[i] and storing the result in c[i].

size_t size = N * sizeof(float);

This calculates the total size in bytes needed to store each of the vectors (A, B, and C), with every element stored as a float.

float *A, *B, *C;
cudaMallocManaged(&A, size);
cudaMallocManaged(&B, size);
cudaMallocManaged(&C, size);

These lines allocate memory for the input and output vectors using cudaMallocManaged, which provides unified memory accessible from both the host and the device.

int blockSize = 256;
int numBlocks = (N + blockSize - 1) / blockSize;

These lines determine the number of blocks needed to execute the CUDA kernel. The block size is set to 256 threads, and the number of blocks is rounded up so that every element of the vectors is covered. We launch numBlocks blocks of blockSize threads each, so the total number of threads launched is numBlocks * blockSize, with each thread responsible for one element of the output vector C.

To control how many threads run per block, you specify the block size when launching the kernel with the <<<...>>> syntax; the maximum number of threads per block is determined by the compute capability of your GPU (a small device-query sketch follows at the end of this section). With blockSize set to 256, every launched block contains 256 threads; if the total number of elements is smaller than the number of threads launched, the extra threads simply fail the bounds check inside the kernel and do no work.

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

These lines create the CUDA events start and stop used to measure execution time.

cudaEventRecord(start);
MultiplyVectorsOnGPU<<<numBlocks, blockSize>>>(A, B, C, N);
cudaEventRecord(stop);
cudaEventSynchronize(stop);

These lines record the start and stop events around the kernel launch, then synchronize on the stop event to make sure the kernel has finished before the elapsed time is read.

float milliseconds = 0;
cudaEventElapsedTime(&milliseconds, start, stop);

This computes the elapsed time between the start and stop events in milliseconds.

cudaFree(A);
cudaFree(B);
cudaFree(C);

This frees the unified memory allocated for the vectors.
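The per-block thread limit mentioned above does not have to be guessed; it can be queried at runtime. Below is a minimal sketch using cudaGetDeviceProperties, where querying device 0 is an assumption (multi-GPU systems may expose several devices).

#include <cstdio>

int main() {
    // Query the properties of device 0 to find the per-block thread limit.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Max block dimensions:  %d x %d x %d\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    return 0;
}

On most recent NVIDIA GPUs this reports a limit of 1024 threads per block, which is why block sizes such as 256 or 32 x 32 are common choices.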
Code for Multi-Dimensional Threading

#include <cstdio>

__global__ void MultiplyVectorsOnGPU(float *a, float *b, float *c, int N) {
    // each thread handles one element of the N x N matrix
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int idy = blockIdx.y * blockDim.y + threadIdx.y;
    int offset = idy * N + idx; // flatten the 2D index into the 1D array
    if (idx < N && idy < N) {
        c[offset] = a[offset] * b[offset];
    }
}

int main() {
    int N = 4096; // 4096 x 4096 matrix
    size_t size = N * N * sizeof(float);

    float *A, *B, *C;
    cudaMallocManaged(&A, size);
    cudaMallocManaged(&B, size);
    cudaMallocManaged(&C, size);

    // Initialize A and B
    for (int i = 0; i < N * N; ++i) {
        A[i] = 1.0f;
        B[i] = 1.0f;
    }

    dim3 blockSize(32, 32);
    dim3 numBlocks((N + blockSize.x - 1) / blockSize.x,
                   (N + blockSize.y - 1) / blockSize.y);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    MultiplyVectorsOnGPU<<<numBlocks, blockSize>>>(A, B, C, N);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float milliseconds = 0;
    cudaEventElapsedTime(&milliseconds, start, stop);
    printf("Time taken for the execution of the task on GPU is %f milliseconds\n", milliseconds);

    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}

The main difference from the single-dimensional version is the use of multi-dimensional thread blocks and grids: each thread computes a 2D index (idx, idy) and flattens it into an offset, which is a more natural and efficient way to process a 2D matrix on the GPU than treating it as a 1D vector.

Conclusion
In this article we covered fundamental concepts of GPU acceleration and CUDA for improving the performance of deep learning models. This is just the beginning, and there is much more to explore, but we now have a clearer picture of what happens behind the scenes when we call .to("cuda") to run deep learning models on GPUs.

Further Readings
PyTorch from Scratch
CUDA Programming Guide
CUDANN Implementation
LLMs Training with CUDA in C++