{"content": "How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog\n\nDecember 2022\n\nIn this post, I’ll iteratively optimize an implementation of matrix multiplication written in CUDA.\nMy goal is not to build a cuBLAS replacement, but to deeply understand the most important performance characteristics of the GPUs that are used for modern deep learning.\nThis includes coalescing global memory accesses, shared memory caching and occupancy optimizations, among others.You can download the code for all kernels from Github. Also checkout wangzyon’s repo from which I copied the benchmarking setup. This post is less polished than my normal uploads, and includes many more sidenotes. I used it as notepad for ideas and scribbles while writing the kernels. That’s why I called it a worklog :)\n\nMatrix multiplication on GPUs may currently be the most important algorithm that exists, considering it makes up almost all the FLOPs during the training and inference of large deep-learning models.\nSo how much work is it to write a performant CUDA SGEMMSGEMM performs C=αAB+βC at single (=32b) precision. from scratch?\nI’ll start with a naive kernel and step-by-step apply optimizations until we get within 95% (on a good day) of the performance of cuBLAS (NVIDIA’s official matrix library):cuBLAS at FP32 that is. In my setting, doing the matmul using TF32 or BF16 precision allows cuBLAS to use the tensor cores, which increases FLOPS by 2.5x or 3.5x. I may look into tensor cores / warp matrix functions in a future post.\n\n–\n\nCome work on kernels at Anthropic!\n\nWe’re always hiring for capable performance & kernel engineers to optimize our models on TPUs, GPUs & Trainium. Apply here!\n\n–\n\nKernel 1: Naive Implementation\n\nIn the CUDA programming model, computation is ordered in a three-level hierarchy. \nEach invocation of a CUDA kernel creates a new grid, which consists of multiple blocks. 
\nEach block consists of up to 1024 individual threads.These constants can be looked up in the CUDA Programming guide.\nThreads that are in the same block have access to the same shared memory region (SMEM).\n\nThe number of threads in a block can be configured using a variable normally called blockDim, which is a vector consisting of three ints.\nThe entries of that vector specify the sizes of blockDim.x, blockDim.y and blockDim.z, as visualized below:\n\nSimilarly, the number of blocks in a grid is configurable using the gridDim variable.\nWhen we launch a new kernel from the hostIn accelerator lingo, host refers to the CPU and device is the accelerator, here the GPU., it creates a single grid, containing the blocks and threads as specified.From here on I’ll only be talking about 2D grids and blocks, partly because the 3D-structure is seldom used and because drawing in 3D is too hard.\nIt’s important to keep in mind that the thread hierarchy we just talked about mostly concerns program correctness.\nFor program performance, as we’ll see later, it’s not a good idea to treat all threads in the same block as equals.\n\nFor our first kernel, we’ll use the grid, block and thread hierarchy to assign each thread a unique entry in the result matrix C.\nThen that thread will compute the dot product of the corresponding row of A and column of B, and write the result to C.\nBecause each location of C is written to by only one thread, we don’t need any synchronization.\nWe’ll launch the kernel like so:\n\n// create as many blocks as necessary to map all of C\ndim3 gridDim(CEIL_DIV(M, 32), CEIL_DIV(N, 32), 1);\n// 32 * 32 = 1024 threads per block\ndim3 blockDim(32, 32, 1);\n// launch the asynchronous execution of the kernel on the device\n// The function call returns immediately on the host\nsgemm_naive<<<gridDim, blockDim>>>(M, N, K, alpha, A, B, beta, C);\n\nCUDA code is written from a single-thread perspective.\nIn the code of the kernel, we access the blockIdx and threadIdx built-in variables.\nThese will return different values based on the thread that’s accessing them.In our example, threadIdx.x and threadIdx.y will vary from 0 to 31 based on the position of the thread within its block. Same for blockIdx.x and blockIdx.y, which will vary from 0 to CEIL_DIV(N, 32) or CEIL_DIV(M, 32) based on the position of the thread’s block in the grid. We’ll do a lot of indexing into strided in-memory representations of matrices. Edward Yang’s post on PyTorch Internals contains a good explanation of strided tensors.\n\n__global__ void sgemm_naive(int M, int N, int K, float alpha, const float *A,\n const float *B, float beta, float *C) {\n // compute position in C that this thread is responsible for\n const uint x = blockIdx.x * blockDim.x + threadIdx.x;\n const uint y = blockIdx.y * blockDim.y + threadIdx.y;\n\n // `if` condition is necessary for when M or N aren't multiples of 32.\n if (x < M && y < N) {\n float tmp = 0.0;\n for (int i = 0; i < K; ++i) {\n tmp += A[x * K + i] * B[i * N + y];\n }\n // C = α*(A@B)+β*C\n C[x * N + y] = alpha * tmp + beta * C[x * N + y];\n }\n}\n\nTo visualize this simple kernel:If the size of the matrix is not divisible by the size of the block, we’ll have to launch extra blocks to process the remainder. For example, in the picture below, we’ll create 9 blocks of equal threadsize, but only 4 of those fully utilize their 1024 threads. 
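\n\nAs an aside, the CEIL_DIV helper used in the launch code above is not defined anywhere in this excerpt. It is presumably just an integer division that rounds up, which is exactly what produces these extra, partially-filled blocks. A minimal sketch (my definition, not necessarily the repo’s exact macro) could be:\n\n// integer division that rounds up, so the grid covers the whole matrix\n// even when M or N are not multiples of the block size\n#define CEIL_DIV(x, y) (((x) + (y) - 1) / (y))\n\nFor example, CEIL_DIV(4092, 32) = 128, and the last block row/column is only partially used, since 4092 = 127 * 32 + 28.\n\n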
This artifact is called tile quantization, and appears whenever we try to map a fixed-sized volume across a variable-sized input.\n\nThis kernel takes about 0.5s to process three 4092² fp32 matrices on my A6000 GPU.\nLet’s do some non-implementation-specific calculations:\n\nLower Bounding the Fastest Possible Runtime\n\nFor a matrix multiplication of two 4092² matrices, followed by an addition of a 4092² matrix (to make the GEMM):\n\nTotal FLOPs: 2*4092³ + 4092² ≈ 137 GFLOP\nMinimum data to read: 3 * 4092² * 4B ≈ 201MB (A, B and C each have to be read at least once)\nMinimum data to store: 4092² * 4B ≈ 67MB\n\nSo 268MB is the absolute minimum of memory that any implementation would have to transfer from/to global GPU memory,Global memory is the GPU’s main memory region. If Nvidia sells you a GPU advertised with 80GB of memory and 1TB/s of bandwidth, they’re talking about the capacity and bandwidth of global memory. Later we’ll talk about other memory regions on the GPU, like the shared memory, which is physically distinct and has very different performance characteristics. assuming it has a big enough cache.The cuBLAS kernel loads a total of 500MB of GMEM during the whole calculation. We’ll see later how increasing arithmetic intensity allows us to achieve an access volume that low.\nLet’s calculate some upper bounds on kernel performance.\nThe GPU is advertised with 30TFLOPs/s of fp32 compute throughput and 768GB/s of global memory bandwidth.\nIf we achieved those numbers,Reminder that peak FLOPs is a reductionist metric, since it depends on the instruction mix. There’s no way you’d reach those 30TFLOPs/s if your FLOP of choice is DIV. However, since matmul uses mainly FMA instructions, which tend to be the fastest FLOPs, we have a good chance of actually getting close to that peak FLOP value. Similar story for the bandwidth: Peak bandwidth can only be reached if the access pattern suits the hardware. we’d need 4.5ms for the calculation and 0.34ms for the memory transfers.\nSo in our napkin math, the calculation takes ~10x more time than the memory accesses.\nThis means our final optimized kernel will be compute-bound, as long as we end up having to transfer <10x the absolute minimum memory volume of 268MB.The A6000 is advertised with 309TFLOPs/s of tensor core performance. If we could use tensor cores for our fp32 matmul, the calculation would only take 0.44ms, and an optimized kernel doing 4092^2 matrix mul would almost surely still be memory bound. 
This puts into perspective just how fast the tensor cores are.\n\nNow that we’ve calculated some lower bounds for our fp32 GEMM calculation, let’s get back to the kernel on hand, to figure out why it’s so much slower than it could be.\n\nMemory Access Pattern of the Naive Kernel\n\nIn our kernel, two threads in the same block with ThreadIds (0, 0) and (0, 1) will load the same column of B but different rows of A.\nIf we assume the worst case of zero caching, then each thread has to load 2*4092+1 floats from global memory.\nAs we have 4092² threads total, this would result in 548GB of memory traffic.\n\nBelow is a visualization of the memory access pattern of our naive kernel, taking two threads A (red) and B (green) as an example:\n\nSo to recap, when I run this kernel on an A6000 GPU it achieves ~300GFLOPs when multiplying two 4092x4092 float32 matrices.\nPretty bad, considering that the A6000 is advertised as being able to achieve almost 30 TFLOPs.Just for comparison, 300 GFLOPs is also roughly the performance achieved by the optimized BLAS library on the 2015 Haswell CPU that I used in my earlier post on CPU matmul.\nSo how can we start to make this faster?\nOne way is to optimize the memory access pattern of our kernel such that global memory accesses can be coalesced (=combined) into fewer accesses.\n\nKernel 2: Global Memory Coalescing\n\nBefore we get into global memory coalescing, we need to learn about the concept of a warp.\nFor execution, the threads of a block are grouped into so-called warps, consisting of 32 threads.\nA warp is then assigned to a warp scheduler, which is the physical core that executes the instructions.Before the Volta architecture, it used to be the case that all threads of a warp were fed from the same instruction stream. On a branch, the threads that didn’t take the branch were inactived using the so-called active mask. However, since Volta, it’s no longer a good idea to rely on this ‘warp-synchronous’ behaviour, as instructions from different branches may be interleaved even for the same threads within a warp.\nThere are four warp schedulers per multiprocessor.\nThe grouping into warps happens based on a consecutive threadId.\nIf we set the blockDim to be multi-dimension, then the threadId is calculated like so:\n\nthreadId = threadIdx.x+blockDim.x*(threadIdx.y+blockDim.y*threadIdx.z)\n\nThen, threads with neighbouring threadId become part of the same warp.\nBelow I tried to illustrate this, using a smaller “warpsize” of 8 threads (real warps always contain 32 threads):I like to think of the three dimensions x,y,z of threadId as being “column-major”, due to the first dimension x being the one that’s continuous in “warpspace”. 
I don’t know if others use that term, but it makes the concept more clear to me.\n\nThe concept of a warp is relevant for this second kernel, as sequential memory accesses by threads that are part of the same warp can be grouped and executed as one.\nThis is referred to as global memory coalescing.\nIt’s the most important thing to keep in mind when optimizing a kernel’s GMEM memory accesses toward achieving the peak bandwidth.\n\nBelow is an example, where consecutive memory accesses by threads in the same warp are grouped, allowing each warp to execute 8 memory accesses using only 2 32B loads:\n\nIn reality, the GPU supports 32B, 64B and 128B memory accesses.\nSo, if each thread is loading a 32bit float from global memory, the warp scheduler (probably the MIO) can coalesce this 32*4B=128B load into a single transaction.\nThis is only possible if the floats loaded are consecutive in memory, and if access is aligned.In that way, optimizing for global memory coalescing on GPU has a lot of similarities to optimizing for cache line utilization on CPU. Interestingly, to allow coalescing the threads within a warp have to access consecutive addresses, but the accesses don’t have to be consecutive within-warp. Illustrated below: \nIf they aren’t, or if access cannot be coalesced for some other reason, then the GPU will execute as many 32B loads as necessary to fetch all floats, leading to a lot of wasted bandwidth.\nProfiling our naive kernel, we can observe the detrimental effect of non-coalesced access as we achieve only 15GB/s of GMEM throughput.\n\nLooking back at the previous kernel, we assigned threads their entry of C like so:\n\nconst uint x = blockIdx.x * blockDim.x + threadIdx.x;\nconst uint y = blockIdx.y * blockDim.y + threadIdx.y;\n\nHence, threads of the same warp (those with consecutive threadIdx.x) were loading the rows of A non-consecutively from memory.\nThe naive kernel’s pattern of accessing the memory of A looked more like so:\n\nTo enable coalescing, we can change how we assign positions of the result matrix C to threads.\nThis change in the global memory access pattern is illustrated below:\n\nTo implement this, we only need to change the first two lines:\n\nconst int x = blockIdx.x * BLOCKSIZE + (threadIdx.x / BLOCKSIZE);\nconst int y = blockIdx.y * BLOCKSIZE + (threadIdx.x % BLOCKSIZE);\n\nif (x < M && y < N) {\n float tmp = 0.0;\n for (int i = 0; i < K; ++i) {\n tmp += A[x * K + i] * B[i * N + y];\n }\n C[x * N + y] = alpha * tmp + beta * C[x * N + y];\n}\n\nAnd we call it like so:This wasn’t immediately obvious to me, but enabling GMEM coalescing changes nothing in the assembly, see the SASS output on Godbolt. Access coalescing is done at kernel runtime by the hardware. This makes sense since coalescing requires aligned access, which cannot be guaranteed at compile time as we pass the matrix pointers as function arguments. Also: the assembly features partial unrolling of our inner loop even though the loop count K is not known at compile time. 
Exciting!\n\n// gridDim stays the same\ndim3 gridDim(CEIL_DIV(M, 32), CEIL_DIV(N, 32));\n// make blockDim 1-dimensional, but don't change number of threads\ndim3 blockDim(32 * 32);\nsgemm_coalescing<<<gridDim, blockDim>>>(M, N, K, alpha, A, B, beta, C);\n\nGlobal memory coalescing increases memory throughput from 15GB/s to 110GB/s.\nPerformance reaches 2000 GFLOPS, a big improvement compared to the 300 GFLOPS of the first, naive kernel.\nFor the next kernel, we’ll use the GPU’s fast on-chip memory, called shared memory, to cache data that will be re-used.\n\nKernel 3: Shared Memory Cache-Blocking\n\nNext to the large global memory, a GPU has a much smaller region of memory that is physically located on the chip, called shared memory (SMEM).\nPhysically, there’s one shared memory per SM.Here’s a helpful illustration of the memory hierarchy on an A100 GPU (source):\nLogically, this shared memory is partitioned among the blocks.\nThis means that a thread can communicate with the other threads in its block via the shared memory chunk.\nOn my A6000 GPU, each block has access to a maximum of 48KB of shared memory.The amount of SMEM is configurable, by trading off a larger shared memory for a smaller L1 cache. For specifics, see the compute capability documentation. Also, it’s possible to use more than 48KB of SMEM per block by utilizing dynamic shared memory.\n\nAs the shared memory is located on-chip, it has a much lower latency and higher bandwidth than global memory.\nI couldn’t find good benchmark results for the Ampere architecture but for Volta (released in 2017) the benchmarks performed in this paper report 750GiB/s of global memory bandwidth, and 12,080GiB/s of shared memory bandwidth.It doesn’t look like these numbers have changed much since Volta. Nvidia reports ~750GB/s of max GMEM bandwidth for my A6000 (Ampere).\n\nSo for this next kernel, we’ll load a chunk of A and a chunk of B from global memory into shared memory.\nThen we’ll perform as much work as possible on the two chunks, with each thread still being assigned one entry of C.\nWe’ll move the chunks along the columns of A and the rows of B, performing partial sums on C until the result is computed.\n\nThis is illustrated below:\n\nThe important parts of the code are below, with variable names corresponding to the plot above:In general, I didn’t write the code to work for arbitrary sizes of M, N and K, as the condition checking introduces a lot of clutter and isn’t very interesting. 
To make sure the kernel works correctly, I test it with random data and a few different matrix sizes by comparing to cuBLAS.\n\n// advance pointers to the starting positions\nA += cRow * BLOCKSIZE * K; // row=cRow, col=0\nB += cCol * BLOCKSIZE; // row=0, col=cCol\nC += cRow * BLOCKSIZE * N + cCol * BLOCKSIZE; // row=cRow, col=cCol\n\nfloat tmp = 0.0;\n// the outer loop advances A along the columns and B along\n// the rows until we have fully calculated the result in C.\nfor (int bkIdx = 0; bkIdx < K; bkIdx += BLOCKSIZE) {\n // Have each thread load one of the elements in A & B from\n // global memory into shared memory.\n // Make the threadCol (=threadIdx.x) the consecutive index\n // to allow global memory access coalescing\n As[threadRow * BLOCKSIZE + threadCol] = A[threadRow * K + threadCol];\n Bs[threadRow * BLOCKSIZE + threadCol] = B[threadRow * N + threadCol];\n\n // block threads in this block until cache is fully populated\n __syncthreads();\n\n // advance pointers onto next chunk\n A += BLOCKSIZE;\n B += BLOCKSIZE * N;\n\n // execute the dotproduct on the currently cached block\n for (int dotIdx = 0; dotIdx < BLOCKSIZE; ++dotIdx) {\n tmp += As[threadRow * BLOCKSIZE + dotIdx] *\n Bs[dotIdx * BLOCKSIZE + threadCol];\n }\n // need to sync again at the end, to avoid faster threads\n // fetching the next block into the cache before slower threads are done\n __syncthreads();\n}\nC[threadRow * N + threadCol] =\n alpha * tmp + beta * C[threadRow * N + threadCol];\n\nThis kernel achieves ~2200 GFLOPS, a 50% improvement over the previous version.There’s only a 50% improvement partly because our previous kernel already had pretty good L1 cache hit rates.\nWe’re still far away from hitting the ~30 TFLOPs that the GPU can provide.\nThis is obvious from the roofline plot below:Notice how we’re achieving a higher memory bandwidth than cuBLAS. But because we’re doing much less work per byte loaded from memory (=lower arithmetic intensity), overall performance is worse.\n\nAt a CHUNKSIZE of 32, this uses 2*32*32*4B=8KB of shared memory space.This info can also be obtained by compiling with --ptxas-options=-v, which outputs: Used 37 registers, 8192 bytes smem, 400 bytes cmem[0].\nMy A6000 GPU has a maximum of 48KB of shared memory space available for each block, so we’re far away from hitting that limit.\nThis is not necessarily a problem, as there are downsides to increasing per-block shared-memory usage.\nEach multiprocessor (SM) has a maximum of 100KB of SMEM available.\nThis means that if we’d modify our kernel to use the full 48KB of SMEM available, each SM could only keep two blocks loaded at the same time. \nIn CUDA parlance, increasing per-block SMEM utilization can decrease occupancy.\nOccupancy is defined as the ratio between the number of active warps per SM and the maximum possible number of active warps per SM.\n\nHigh occupancy is useful because it allows us to hide the high latency of our operations, by having a bigger pool of issue-able instructions available.On GPUs, math operations like FMA have a latency of 4 cycles which is equal to 2.6ns at a 1.5GHz clock. 
Compare this to a recent x86 CPU, where FMA has a 6 cycle latency or 1.8ns at a 3.5GHz clock.\nThere are three main limits to keeping more active blocks loaded on an SM: register count, warp count and SMEM capacity.\nLet’s do an example calculation for our current kernel.\n\nOccupancy Calculation for Kernel 3\n\nHere are the relevant hardware stats for my GPU, obtained from the cudaGetDeviceProperties API (Multiprocessors are the SMs we talked about earlier):The amount of shared memory is configurable by using a feature called SharedMemoryCarveout. The so-called unified data cache is partitioned into L1 cache and shared memory, so we can trade-off less shared-memory for more L1 cache.\n\nAnd here are the resource demands for our kernel:\n\nWork is scheduled onto the SMs on a block granularity.\nEach SM will load more blocks, as long as it has enough resources to accommodate them.\nCalculation:I found lots of official and unofficial occupancy calculators, but no official formulae as how to calculate the occupancy. The results are correct (I checked using NVIDIA’s official tools), but there may be small errors eg in the application of rounding.\n\nSo this kernel is limited by the number of threads per block, and the number of registers per thread.\nWe cannot load more than one block per SM, giving us a final occupancy of 32 active warps / 48 max active warps = 66%.\n\nA 66% occupancy is not too bad, so this doesn’t explain why our kernel runs so slow.We know that it’s possible to optimize our kernel towards high arithmetic intensity (AI) by observing that cuBLAS achieves ~245 FLOPs/Byte. Both at very high and very low AI, high occupancy is not needed to achieve peak throughput. For more details on this, see V. Volkov’s PhD thesis and its coverage of “cusp behaviour”: \nLooking at the profiler gives us some hints. First, if we look at the mix of executed instructions, most of them are memory loads:LDS are shared memory loads. FMA is our fused multiply add. IADD3 is a “3 input integer addition”, which we need for moving the pointers along the K dimension.\n\nOur inner loop looks like this in PTX (Godbolt link):\n\nld.shared.f32 %f91, [%r8+3456];\nld.shared.f32 %f92, [%r7+108];\nfma.rn.f32 %f93, %f92, %f91, %f90;\n\nThat’s not good, given that a memory load is bound to have a higher latency than a simple FMA, and given that we know our kernel should be compute bound.\nWe see this effect when looking at the profiler’s sampling of warp states.\nThis quantifies how many cycles were spent in each state per executed instruction:Stall Not Selected means that the warp was eligible to be scheduled, but the scheduler selected another eligible warp instead. This adds evidence to our earlier hypothesis that occupancy is currently not a problem.\n\nThe meaning of the states is documented in the Kernel Profiling Guide.\nFor Stall MIO Throttle it reads:\n\nWarp was stalled waiting for the MIO (memory input/output) instruction queue to be not full. 
This stall reason is high in cases of extreme utilization of the MIO pipelines, which include special math instructions, dynamic branches, as well as shared memory instructions\n\nWe’re not using special math instructions, nor dynamic branches, so it’s clear that we’re stalling waiting for our SMEM accesses to return.\nSo how do we make our kernel issue less SMEM instructions?\nOne way is to have each thread compute more than one output element, which allows us to perform more of the work in registers and relying less on SMEM.\n\nKernel 4: 1D Blocktiling for Calculating Multiple Results per Thread\n\nSo this next kernel works like our last kernel, but adds a new inner loop, for calculating multiple C entries per thread.\nWe now use a SMEM cache size of BM*BK + BN*BK = 64*8 + 64*8 = 1024 floats, for a total of 4KB per block.\nBelow a visualization. \nI have highlighted two of the threads and the values they access in the inner loop in orange and red.\n\nAll of the important changes for this kernel happen in the inner loop.\nThe loading for GMEM to SMEM stays largely the same as before.\nLet’s have a look:Godbolt link.\n\n// allocate thread-local cache for results in registerfile\nfloat threadResults[TM] = {0.0};\n\n// outer loop over block tiles\nfor (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {\n // populate the SMEM caches (same as before)\n As[innerRowA * BK + innerColA] = A[innerRowA * K + innerColA];\n Bs[innerRowB * BN + innerColB] = B[innerRowB * N + innerColB];\n __syncthreads();\n\n // advance blocktile for outer loop\n A += BK;\n B += BK * N;\n\n // calculate per-thread results\n for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {\n // we make the dotproduct loop the outside loop, which facilitates\n // reuse of the Bs entry, which we can cache in a tmp var.\n float Btmp = Bs[dotIdx * BN + threadCol];\n for (uint resIdx = 0; resIdx < TM; ++resIdx) {\n threadResults[resIdx] +=\n As[(threadRow * TM + resIdx) * BK + dotIdx] * Btmp;\n }\n }\n __syncthreads();\n}\n\nThis kernel achieves ~8600 GFLOPs, 2.2x faster than our previous kernel. 
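\n\nFor completeness: the post only shows the inner loop above, so here is a rough sketch of how the surrounding index setup for this 1D-blocktiling kernel could look. It is my reconstruction, written to be consistent with the loads and loops shown above (assuming BM=BN=64, BK=8, TM=8 and a launch with BM*BN/TM = 512 threads per block), not necessarily the author’s exact code:\n\ntemplate <const int BM, const int BN, const int BK, const int TM>\n__global__ void sgemm_blocktiling_1d(int M, int N, int K, float alpha,\n const float *A, const float *B,\n float beta, float *C) {\n // each block computes one BM x BN tile of C\n const uint cRow = blockIdx.y;\n const uint cCol = blockIdx.x;\n\n // each thread computes TM consecutive results along the M dimension\n const uint threadCol = threadIdx.x % BN;\n const uint threadRow = threadIdx.x / BN;\n\n __shared__ float As[BM * BK];\n __shared__ float Bs[BK * BN];\n\n // advance pointers to this block's tile\n A += cRow * BM * K;\n B += cCol * BN;\n C += cRow * BM * N + cCol * BN;\n\n // with 512 threads, each thread loads exactly one element of the\n // BM*BK (=512) As tile and one element of the BK*BN (=512) Bs tile\n const uint innerColA = threadIdx.x % BK;\n const uint innerRowA = threadIdx.x / BK;\n const uint innerColB = threadIdx.x % BN;\n const uint innerRowB = threadIdx.x / BN;\n\n // ... outer bkIdx loop and per-thread inner loops exactly as shown above,\n // followed by writing alpha * threadResults[i] + beta * C[...] back to C\n}\n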
\nLet’s calculate how many memory accesses each thread performed in our previous kernel, where each thread calculated one result:\n\nAnd for our new kernel, where each thread calculates eight results:\n\nAs expected, we now spend much fewer cycles per instruction stalling due to memory pressure:Careful: The axis has changed compared to the previous plot.\n\nSidenote on Compiler Optimizations\n\nAbove we explicitly cached the entry of B into Btmp and reordered the two inner loops for efficiency.\nIf we don’t do that, then the code looks like this:\n\nfor (uint resIdx = 0; resIdx < TM; ++resIdx) {\n for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {\n threadResults[resIdx] +=\n As[(threadRow * TM + resIdx) * BK + dotIdx] * Bs[dotIdx * BN + threadCol];\n }\n}\n\nInterestingly, this has no adverse effect on performance.\nThis is surprising since our inner two loops now incur BK (=8) * TM (=8) * 2 = 128 SMEM accesses, instead of the previous 72.\nLooking at the assembly (Godbolt link) has the answer:\n\n// first inner-most loop\nld.shared.f32 %f45, [%r9];\nld.shared.f32 %f46, [%r8];\nfma.rn.f32 %f47, %f46, %f45, %f212;\nld.shared.f32 %f48, [%r9+256];\nld.shared.f32 %f49, [%r8+4];\nfma.rn.f32 %f50, %f49, %f48, %f47;\nld.shared.f32 %f51, [%r9+512];\nld.shared.f32 %f52, [%r8+8];\nfma.rn.f32 %f53, %f52, %f51, %f50;\nld.shared.f32 %f54, [%r9+768];\nld.shared.f32 %f55, [%r8+12];\nfma.rn.f32 %f56, %f55, %f54, %f53;\nld.shared.f32 %f57, [%r9+1024];\nld.shared.f32 %f58, [%r8+16];\nfma.rn.f32 %f59, %f58, %f57, %f56;\nld.shared.f32 %f60, [%r9+1280];\nld.shared.f32 %f61, [%r8+20];\nfma.rn.f32 %f62, %f61, %f60, %f59;\nld.shared.f32 %f63, [%r9+1536];\nld.shared.f32 %f64, [%r8+24];\nfma.rn.f32 %f65, %f64, %f63, %f62;\nld.shared.f32 %f66, [%r9+1792];\nld.shared.f32 %f67, [%r8+28];\nfma.rn.f32 %f212, %f67, %f66, %f65;\n// second inner-most loop\nld.shared.f32 %f68, [%r8+32];\nfma.rn.f32 %f69, %f68, %f45, %f211;\nld.shared.f32 %f70, [%r8+36];\nfma.rn.f32 %f71, %f70, %f48, %f69;\nld.shared.f32 %f72, [%r8+40];\nfma.rn.f32 %f73, %f72, %f51, %f71;\nld.shared.f32 %f74, [%r8+44];\nfma.rn.f32 %f75, %f74, %f54, %f73;\nld.shared.f32 %f76, [%r8+48];\nfma.rn.f32 %f77, %f76, %f57, %f75;\nld.shared.f32 %f78, [%r8+52];\nfma.rn.f32 %f79, %f78, %f60, %f77;\nld.shared.f32 %f80, [%r8+56];\nfma.rn.f32 %f81, %f80, %f63, %f79;\nld.shared.f32 %f82, [%r8+60];\nfma.rn.f32 %f211, %f82, %f66, %f81;\n// ... continues like this for inner-loops 3-8 ...\n\nThe compiler unrolls both loopsThe compiler can unroll them since the loop count is known at compile time. 
and then eliminates the repeated SMEM loads of the Bs entries, so we end up with the same amount of SMEM accesses as our optimized CUDA code.\n\nWhen the PTX is compiled to SASS, the SMEM loads from Bs are vectorized:This already hints at an optimization we’ll perform later: Transposing As such that we can also vectorize those loads.\n\nLDS R26, [R35.X4+0x800] // a 32b load from As\nLDS.128 R8, [R2] // a 128b load from Bs\nLDS.128 R12, [R2+0x20] \nLDS R24, [R35.X4+0x900] \nLDS.128 R20, [R2+0x60] \nLDS R36, [R35.X4+0xb00] \nLDS.128 R16, [R2+0x40] \nLDS.128 R4, [R2+0x80] \nLDS R38, [R35.X4+0xd00]\n\nAreas of Improvement: Arithmetic Intensity\n\nOur current kernel still suffers from the same stalling-for-memory problem as kernel 3, just to a lesser extent.\nSo we’ll just apply the same optimization again: computing even more results per thread.\nThe main reason this makes our kernel run faster is that it increases arithmetic intensity.Defined as the number of FLOPs executed per byte transferred (load + store!) between GMEM and SMEM.\nBelow I tried to make it more immediately obvious why calculating more results per thread raises arithmetic intensity:It’s more efficient to calculate a square of results per thread than a column of results because we can share more of the inputs:\n\nIn conclusion, all our kernels perform the same number of FLOPs, but we can reduce the number of GMEM accesses by calculating more results per thread.\nWe’ll continue optimizing arithmetic intensity for as long as we’re still memory bound.\n\nKernel 5: Increasing Arithmetic Intensity via 2D Blocktiling\n\nThe basic idea for kernel 5 will be to compute a grid of 8*8 elements of C per thread.\nThe first stage of the kernel is for all threads to work together to populate the SMEM cache.\nWe’ll have each thread load multiple elements.\nThis code looks like so:Here’s a graphical representation of the GMEM loading:\n\nfor (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {\n As[(innerRowA + loadOffset) * BK + innerColA] =\n A[(innerRowA + loadOffset) * K + innerColA];\n}\nfor (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {\n Bs[(innerRowB + loadOffset) * BN + innerColB] =\n B[(innerRowB + loadOffset) * N + innerColB];\n}\n__syncthreads();\n\nNow that the SMEM cache is populated, we have each thread multiply its relevant SMEM entries and accumulate the result into local registers.\nBelow I illustrated the (unchanged) outer loop along the input matrices, and the three inner loops for the dot product and the TN and TM dimension:\n\nThe interesting parts of the code look like this:Godbolt link\n\n// allocate thread-local cache for results in registerfile\nfloat threadResults[TM * TN] = {0.0};\n// register caches for As and Bs\nfloat regM[TM] = {0.0};\nfloat regN[TN] = {0.0};\n\n// outer-most loop over block tiles\nfor (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {\n // populate the SMEM caches\n for (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {\n As[(innerRowA + loadOffset) * BK + innerColA] =\n A[(innerRowA + loadOffset) * K + innerColA];\n }\n for (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {\n Bs[(innerRowB + loadOffset) * BN + innerColB] =\n B[(innerRowB + loadOffset) * N + innerColB];\n }\n __syncthreads();\n\n // advance blocktile\n A += BK; // move BK columns to right\n B += BK * N; // move BK rows down\n\n // calculate per-thread results\n for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {\n // load relevant As & Bs entries into registers\n for (uint i = 0; i < 
TM; ++i) {\n regM[i] = As[(threadRow * TM + i) * BK + dotIdx];\n }\n for (uint i = 0; i < TN; ++i) {\n regN[i] = Bs[dotIdx * BN + threadCol * TN + i];\n }\n // perform outer product on register cache, accumulate\n // into threadResults\n for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {\n for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {\n threadResults[resIdxM * TN + resIdxN] +=\n regM[resIdxM] * regN[resIdxN];\n }\n }\n }\n __syncthreads();\n}\n\nIn the inner loop, we can reduce the number of SMEM accesses by making dotIdx the outer loop, and explicitly loading the values we need for the two inner loops into registers.\nBelow is a drawing of the dotIdx loop across time, to visualize which SMEM entries get loaded into thread-local registers at each step:I had to reduce some dimensions to make it easier to draw. In the kernel: BK=TM=TN=8.\n\nResulting performance: 16TFLOPs, another 2x improvement.\nLet’s repeat the memory access calculation.\nWe’re now calculating TM*TN = 8*8 = 64 results per thread.\n\nSlowly, performance is reaching acceptable levels; however, warp stalls due to memory pipeline congestion are still too frequent.\nFor kernel 6 we’ll take two measures to try to improve that: Transposing As to enable auto-vectorization of SMEM loads, and promising the compiler alignment on the GMEM accesses.\n\nKernel 6: Vectorize SMEM and GMEM Accesses\n\nThe first optimization that I already hinted at earlier is to transpose As.\nThis will allow us to load from As using vectorized SMEM loads (LDS.128 in SASS).\nBelow is the same visualization of the three inner loops as for kernel 5, but now with As transposed in memory:\n\nLooking at the assemblyGodbolt link we see that loading As into the registers, which used to be a 32b LDS load, is now also a 128b LDS.128 load, just like it had already been for Bs.\nThis gives us a 500GFLOPs speedup, or ~3%.\n\nNext, we’ll vectorize all loads and stores from/to GMEM using vector datatypes, namely float4.\n\nThe code looks like this:Godbolt link for the full kernel\n\nfloat4 tmp =\n reinterpret_cast<float4 *>(&A[innerRowA * K + innerColA * 4])[0];\n// transpose A during the GMEM to SMEM transfer\nAs[(innerColA * 4 + 0) * BM + innerRowA] = tmp.x;\nAs[(innerColA * 4 + 1) * BM + innerRowA] = tmp.y;\nAs[(innerColA * 4 + 2) * BM + innerRowA] = tmp.z;\nAs[(innerColA * 4 + 3) * BM + innerRowA] = tmp.w;\n\nreinterpret_cast<float4 *>(&Bs[innerRowB * BN + innerColB * 4])[0] =\n reinterpret_cast<float4 *>(&B[innerRowB * N + innerColB * 4])[0];\n__syncthreads();\n\nThis leads to the 32b GMEM load instructions (LDG.E and STG.E) being replaced with 128b counterparts (LDG.E.128 and STG.E.128).\nInitially, I was confused as to why running this:\n\nreinterpret_cast<float4 *>(&Bs[innerRowB * BN + innerColB * 4])[0] =\n reinterpret_cast<float4 *>(&B[innerRowB * N + innerColB * 4])[0];\n\nwould be any faster than just manually unrolling the access (or using pragma unroll):\n\nBs[innerRowB * BN + innerColB * 4 + 0] = B[innerRowB * N + innerColB * 4 + 0];\nBs[innerRowB * BN + innerColB * 4 + 1] = B[innerRowB * N + innerColB * 4 + 1];\nBs[innerRowB * BN + innerColB * 4 + 2] = B[innerRowB * N + innerColB * 4 + 2];\nBs[innerRowB * BN + innerColB * 4 + 3] = B[innerRowB * N + innerColB * 4 + 3];\n\nShouldn’t the compiler just be able to coalesce the 2nd version and also generate 128b loads?\nI think the reason is that the compiler has no way to verify that the float* B pointer that is passed to the kernel is 128b aligned, which would be a requirement for using LDG.E.128.\nSo the reinterpret_cast’s only purpose is to promise the 
compiler that the float* B pointer will be aligned.Compare this to SMEM loads, where the compiler automatically generates vectorized loads because that memory is not user-managed.\n\nKernel 6 achieves 19TFLOPs.\nThe profiler still shows a bunch of problem areas and optimization opportunities: We’re running into shared-memory bank conflicts (which cuBLAS avoids), our occupancy is higher than necessary, and we haven’t implemented any double buffering (which the CUTLASS docs seem to suggest is pretty useful).\n\nBut before we get to those, let’s cover some more low-hanging fruit: Autotuning the kernel’s parameters.\n\nKernel 9: AutotuningI skipped kernels 7 and 8, which I wrote while figuring out how to best eliminate shared memory bank conflicts. They eliminate the conflicts but were overall still slower, so I won’t cover them here.\n\nWe’ve accumulated a total of five template parameters:\n\nFor kernel 6, these were set to BM=BN=128 and BK=TM=TN=8.\nI wrote a bash script that searches through all sensible combinations and benchmarks their runtime.\nThis required me to make sure that:\n\nThe necessary modifications to the code ended up taking quite some time to implement.\n\nIt turns out that the optimal parameters vary quite a bit depending on the GPU model.I guess that’s why compilers like Triton provide routines for autotuning. I wonder how this works for cuBLAS, they probably store a precomputed mapping from {GPU type, matrix size, dtype, …} to the optimal GEMM implementation inside the cuBLAS binary.\nOn my A6000, BM=BN=128 BK=16 TM=TN=8 increased performance by 5%, from 19 to 20 TFLOPs.\nOn an A100 SMX4 40GB, that same configuration reached 12 TFLOPs, 6% worse than the optimal setting found by the autotuner (BM=BN=64 BK=16 TM=TN=4), which reached 12.6 TFLOPs.The A100 has worse fp32 performance than the A6000, which is why the FLOPs numbers are lower (cuBLAS reaches 14.7 TFLOPs on the A100). Nvidia rates the A100 at 19.5 TFLOPs and the A6000 at 38.7 TFLOPs.\n\nI can’t explain why these specific parameters end up producing the optimal performance.\nAutotuning works, every high-performance library uses it, but it also feels very unsatisfying.I’m sure that with enough time, enough access to low-level performance counters and some facetime with Nvidia engineers, I’d eventually figure it out. 
It’s good to have a strong belief that computers can be understood.\n\nKernel 10: Warptiling\n\nCurrently, our loop structure looks like this:\n\nWe’ll now add another hierarchy of tiling, in between our blocktiling and threadtiling loops: warptiling.\nWarptiling is somewhat confusing initially since unlike blocks and threads, warps don’t show up anywhere in the CUDA code explicitly.\nThey are a hardware feature that has no direct analog in the scalar CUDA-software world.\nWe can calculate a given thread’s warpId as warpId=threadIdx.x / warpSize, where warpSize is a built-in variable that is equal to 32 on any CUDA GPU I’ve ever worked with.\n\nWarps are relevant for performance since (among other reasons):\n\nWarps are the unit of scheduling that is mapped onto the warp schedulers.\nGlobal memory coalescing and shared-memory bank conflicts happen at per-warp granularity.\n\nWarptiling is elegant since we now make explicit all levels of parallelism:\n\nBlocktiling: different blocks can execute in parallel on different SMs.\nWarptiling: different warps can execute in parallel on different warp schedulers, and concurrently on the same warp scheduler.\nThreadtiling: (a limited amount of) instructions can execute in parallel on the same CUDA cores (= instruction-level parallelism).\n\nThe warptiling looks like this in the CUDA code:Godbolt link.\n\n// dotIdx loops over contents of SMEM\nfor (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {\n // populate registers for this thread's part of the warptile\n for (uint wSubRowIdx = 0; wSubRowIdx < WMITER; ++wSubRowIdx) {\n for (uint i = 0; i < TM; ++i) {\n regM[wSubRowIdx * TM + i] =\n As[(dotIdx * BM) + warpRow * WM + wSubRowIdx * WSUBM +\n threadRowInWarp * TM + i];\n }\n }\n for (uint wSubColIdx = 0; wSubColIdx < WNITER; ++wSubColIdx) {\n for (uint i = 0; i < TN; ++i) {\n regN[wSubColIdx * TN + i] =\n Bs[(dotIdx * BN) + warpCol * WN + wSubColIdx * WSUBN +\n threadColInWarp * TN + i];\n }\n }\n\n // execute warptile matmul. Later this will map well to\n // warp-wide matrix instructions, executed on tensor cores.\n for (uint wSubRowIdx = 0; wSubRowIdx < WMITER; ++wSubRowIdx) {\n for (uint wSubColIdx = 0; wSubColIdx < WNITER; ++wSubColIdx) {\n // calculate per-thread results with register-cache locality\n for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {\n for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {\n threadResults[(wSubRowIdx * TM + resIdxM) * (WNITER * TN) +\n (wSubColIdx * TN) + resIdxN] +=\n regM[wSubRowIdx * TM + resIdxM] *\n regN[wSubColIdx * TN + resIdxN];\n }\n }\n }\n }\n}\n\nI tried my best to visualize all three levels of tiling below, although the structure is getting quite complex.The CUTLASS docs about efficient GEMMs go even more in-depth into warptiling, and their visualizations are illuminating.\nEach warp will compute a chunk of size (WSUBN * WNITER) x (WSUBM * WMITER).\nEach thread computes WNITER * WMITER many chunks of size TM*TN.\n\nAfter autotuning the parameters, performance improves from 19.7 TFLOPs to 21.7 TFLOPs on an A6000.\n\nHere’s a plot that compares our warptiling kernel against cuBLAS across increasing matrix sizes: I generated this plot on an A100, which is why the absolute FLOPs numbers are different.\n\nAt dimensions 2048 and 4096, our measured FLOPs are only a few percentage points slower than cuBLAS.\nHowever, for smaller matrices, we’re doing poorly in comparison to Nvidia’s library!\nThis happens because cuBLAS contains not one single implementation of SGEMM, but hundreds of them. I guess there’s a reason why the library is 500MB of compiled code. To print all the kernels: cuobjdump --list-text .\nAt runtime, based on the dimensions, cuBLAS will pick which kernel to run.I launched matmuls for square matrices on all dimensions up to 4096 and found 16 different SGEMM kernels. 
Here’s a script for finding the kernel that was launched by cuBLAS (h/t Horace He).\nI traced the cuBLAS call and these are the kernels it’s calling at each size:I used the Nsight Systems CLI for this.\n\nAt dimension 256 it calls two kernels: a matmul kernel followed by a reduction kernel.Split-K refers to partitioning the K-dimension across multiple threadblocks. This means that each block will only compute part of the chunk of C, and cuBLAS follows up with a reduce kernel to accumulate the final result. This requires some extra memory space to store the intermediate results before the reduction. I imagine this looks like so (but I’m uncertain here):\nSo if we were trying to write a high-performance library that works for all shapes and sizes we would have specializations for different shapes, and at runtime dispatch to the one that’s the best fit.\n\nI also want to report a negative results: For this kernel, I additionally implemented an optimization called thread swizzling.\nThis technique assumes that threadblocks are launched in order of increasing blockIdx, and optimizes the mapping of blockIdx to C chunks in a way that should increase L2 locality.Remember that L2 is a cache for global memory that exists once for the whole GPU.\nThis Nvidia post has more info and visualizations.\nIt didn’t increase performance, presumably because L2 hit rate is already fairly high at 80%, so I ended up removing the swizzling code.The commit is here if anyone is interested.\n\nIt makes sense to move the loop over BK towards the outside, since it follows our maxim of “load some data, then do as much work on that data as possible”.\nIt further means that all computation that happens inside the BK loop will be independent and can be parallelized (for example using ILP).\n\nWe can now also start prefetching the data necessary for the next loop iteration already, a technique called double buffering.\n\nWork in Progress: Kernel 11\n\nIf I get back to working on this post, here’s what I’ll look at next:\n\nConclusion\n\nWriting this post was a similar experience to my previous post on optimizing SGEMM on CPU: Optimizing SGEMM iteratively is one of the best ways to deeply understand the performance characteristics of the hardware.\nFor writing the CUDA programs I was surprised by how easy it was to implement the code once I had made a good visualization of how I wanted the kernel to work.\n\nAlso: Powerlaws are everywhere.\nIt took me two weekends to write the first 6 kernels which reach 80% of peak FLOPs, and then 4 more weekends to do autotuning and warptiling to get to 94%.\nHow much I’m learning while writing this code has also seen diminishing results, hence I’m putting off hunting the last 6% until some future time.\n\nAll my code is available on Github.\n\nLastly, a big thanks to the creators of Godbolt.org (for looking at PTX and SASS assembly) and Excalidraw (for drawing the kernels)!\nBoth of these tools are a joy to use and have helped me learn much faster.\n\nIf you enjoy kernel work like this you’re likely a good fit for the Performance team at Anthropic. Come work with me! The team is headed by Tristan Hume who is the most capable & thoughtful manager I’ve ever had. We optimize Anthropic’s model for GPUs, TPUs and AWS Trainium. 
Feel free to reach out!\n\nFurther Resources and References\n\n"} {"content": "Fundamental Optimizations in CUDA\nPeng Wang, Developer Technology, NVIDIA\nOptimization Overview\nGPU architecture\nKernel optimization\n— Memory optimization\n— Latency optimization\n— Instruction optimization\nCPU-GPU interaction optimization\n— Overlapped execution using streams\nOptimization Overview\nGPU architecture\nKernel optimization\n— Memory optimization\n— Execution configuration\n— Instruction optimization\nCPU-GPU interaction optimization\n— Overlapped execution using streams\nGPU High Level View\nStreaming Multiprocessor\nGlobal memory\nFermi Multiprocessor\n2 Warp Scheduler\n— In-order dual-issue\n— Up to 1536 concurrent threads\n32 CUDA Cores\n— Full IEEE 754-2008 FP32 and FP64\n— 32 FP32 ops/clock, 16 FP64 ops/clock\nConfigurable 16/48 KB shared memory\nConfigurable 16/48 KB L1 cache \n4 SFUs\n32K 32-bit registers\nUniform Cache\n64K Configurable\nCache / Shared Mem\nLoad/Store Units x 16\nCore\nSpecial Func Units x 4\nInterconnect Network\nInstruction Cache\nScheduler\nScheduler\nDispatch\nDispatch\nRegister File\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nCore\nGPU and Programming Model\nWarp and SIMT\nBlock\n32 Threads\n32 Threads\n32 Threads\n...\nWarps\n=\n• Blocks divide into groups of 32 \nthreads called warps\n• Warps are basic scheduling units\n• Context switching is free\n• A lot of warps can hide memory \nlatency\n• Warps always perform the same \ninstruction (SIMT)\n• Each thread CAN execute its own \ncode path\nFermi Memory Hierarchy\nRegister\n— Spills to local memory\nCaches\n— Shared memory\n— L1 cache\n— L2 cache\n— Constant cache\n— Texture cache\nGlobal memory\nFermi Memory Hierarchy Review\nL2\nGlobal Memory\nRegisters\nL1\nSM-N\nSMEM\nRegisters\nL1\nSM-0\nSMEM\nRegisters\nL1\nSM-1\nSMEM\nGeneral Optimization Strategies: \nMeasurement\nFind out the limiting factor in kernel performance\n— Memory bandwidth bound (memory optimization)\n— Instruction throughput bound (instruction optimization)\n— Latency bound (configuration optimization)\nMeasure effective memory/instruction throughput\nOptimize for peak memory/instruction throughput\n— Finding out the bottleneck\n— Typically an iterative process\nOptimization Overview\nGPU architecture\nKernel optimization\n— Memory optimization\n— Latency optimization\n— Instruction optimization\nCPU-GPU interaction optimization\n— Overlapped execution using streams\nMemory Optimization\nIf the code is memory-bound and effective memory \nthroughput is much lower than the peak\nPurpose: access only data that are absolutely necessary\nMajor techniques\n— Improve access pattern to reduce wasted transactions: coalescing\n— Reduce redundant access: shared memory\nCoalescing\nGlobal memory latency: 400-800 cycles\n— The single most important performance consideration!\nCoalescing: global memory access from a warp can be \ncoalesced into a single transaction\nCriterion: requests from a warp falling in a L1 cache line, one \ntransaction\n# transaction = # L1 line accessed\nCaching or Non-caching?\nOn Fermi, by default all global memory access are cached in \nL1. 
\n— L1 can be by-passed by passing ―-Xptxas –dlcm=cg‖ to nvcc: cache \nonly in L2\nIf non-cached: same coalescing criterion\n— But transaction size can be reduced to 32B segment\nCaching or Non-caching?\nCaching\n— Help on some non-coalesced access, e.g. misaligned\n— May lead to lower performance for some uncoalesced access due to \nmore wasted bandwidth\nNon-caching\n— Reduce wasted bandwidth\n— Leave more space for register spilling\nCaching Load\nWarp requests 32 aligned, consecutive 4-byte words\nAddresses fall within 1 cache-line\n— Warp needs 128 bytes\n— 128 bytes move across the bus on a miss\n— Bus utilization: 100%\n...\naddresses from a warp\n96\n192\n128\n160\n224\n288\n256\n32\n64\n352\n320\n384\n448\n416\nMemory addresses\n0\nCaching Load\n...\n96\n192\n128\n160\n224\n288\n256\n32\n64\n352\n320\n384\n448\n416\nMemory addresses\naddresses from a warp\n0\nWarp requests 32 aligned, permuted 4-byte words\nAddresses fall within 1 cache-line\n— Warp needs 128 bytes\n— 128 bytes move across the bus on a miss\n— Bus utilization: 100%\nCaching Load\n96\n192\n128\n160\n224\n288\n256\n...\naddresses from a warp\n32\n64\n0\n352\n320\n384\n448\n416\nMemory addresses\nWarp requests 32 misaligned, consecutive 4-byte words\nAddresses fall within 2 cache-lines\n— Warp needs 128 bytes\n— 256 bytes move across the bus on misses\n— Bus utilization: 50%\nNon-caching Load\n96\n192\n128\n160\n224\n288\n256\n...\naddresses from a warp\n32\n64\n0\n352\n320\n384\n448\n416\nMemory addresses\nWarp requests 32 misaligned, consecutive 4-byte words\nAddresses fall within at most 5 segments\n— Warp needs 128 bytes\n— At most 160 bytes move across the bus\n— Bus utilization: at least 80%\nSome misaligned patterns will fall within 4 segments, so 100% utilization\nCaching Load\n...\naddresses from a warp\n96\n192\n128\n160\n224\n288\n256\n32\n64\n352\n320\n384\n448\n416\nMemory addresses\n0\nAll threads in a warp request the same 4-byte word\nAddresses fall within a single cache-line\n— Warp needs 4 bytes\n— 128 bytes move across the bus on a miss\n— Bus utilization: 3.125%\nNon-caching Load\n...\naddresses from a warp\n96\n192\n128\n160\n224\n288\n256\n32\n64\n352\n320\n384\n448\n416\nMemory addresses\n0\nAll threads in a warp request the same 4-byte word\nAddresses fall within a single segment\n— Warp needs 4 bytes\n— 32 bytes move across the bus on a miss\n— Bus utilization: 12.5%\nCaching Load\n...\naddresses from a warp\n96\n192\n128\n160\n224\n288\n256\n32\n64\n352\n320\n384\n448\n416\nMemory addresses\n0\nWarp requests 32 scattered 4-byte words\nAddresses fall within N cache-lines\n— Warp needs 128 bytes\n— N*128 bytes move across the bus on a miss\n— Bus utilization: 128 / (N*128)\nNon-caching Load\n...\naddresses from a warp\n96\n192\n128\n160\n224\n288\n256\n32\n64\n352\n320\n384\n448\n416\nMemory addresses\n0\nWarp requests 32 scattered 4-byte words\nAddresses fall within N segments\n— Warp needs 128 bytes\n— N*32 bytes move across the bus on a miss\n— Bus utilization: 128 / (N*32)\nShared Memory\nLow latency: a few cycles\nHigh throughput: 73.6 GB/s per SM (1.03 TB/s per GPU)\nMain use\n— Inter-block communication\n— User-managed cache to reduce redundant global memory accesses\n— Avoid non-coalesced access\nShared Memory Example: Matrix \nMultiplication\nA\nB\nC\nC=AxB\nEvery thread corresponds to one entry in C.\nNaive Kernel\n__global__ void simpleMultiply(float* a,\nfloat* b,\nfloat* c, \nint N)\n{\nint row = threadIdx.x + blockIdx.x*blockDim.x;\nint col = threadIdx.y + 
blockIdx.y*blockDim.y;\nfloat sum = 0.0f;\nfor (int i = 0; i < N; i++) {\nsum += a[row*N+i] * b[i*N+col];\n}\nc[row*N+col] = sum;\n}\nEvery thread corresponds to one entry in C.\nBlocked Matrix Multiplication\nA\nB\nC\nC=AxB\nData reuse in the blocked version\nBlocked and cached kernel\n__global__ void coalescedMultiply(double*a, \ndouble* b, \ndouble*c,\nint N)\n{\n__shared__ float aTile[TILE_DIM][TILE_DIM];\n__shared__ double bTile[TILE_DIM][TILE_DIM];\nint row = blockIdx.y * blockDim.y + threadIdx.y;\nint col = blockIdx.x * blockDim.x + threadIdx.x;\nfloat sum = 0.0f;\nfor (int k = 0; k < N; k += TILE_DIM) {\naTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];\nbTile[threadIdx.y][threadIdx.x] = b[threadIdx.y*N+col];\n__syncthreads();\nfor (int i = k; i < k+TILE_DIM; i++) \nsum += aTile[threadIdx.y][i]* bTile[i][threadIdx.x];\n}\nc[row*N+col] = sum;\n}\nPerformance Results\nM=N=K=512\nBank Conflicts\nShared memory is divided into banks\n— Successive 32-bit words assigned to successive banks\n— Number of banks = 32 (Fermi)\nBank conflict: two R/W fall in the same \nbank, the access will be serialized.\nSpecial cases\n— If all threads in a warp access the same word, \none broadcast. Fermi can also do multi-broadcast.\n— If reading continuous byte/double, no conflict on Fermi\nBank 31\nBank 7\nBank 6\nBank 5\nBank 4\nBank 3\nBank 2\nBank 1\nBank 0\nShared memory\nBank Access Examples\nBank Access Examples\nOptimizing Bank Conflict\nMeasure whether it matters\n\nChange SMEM reads to the same value to see the impact\nAvoiding bank conflict\n— Change address patterns\n— Padding\nUse array[N_BANK][N_BANK+1]\nMemory Optimizations\nStrive for perfect coalescing\n— Transpose the data structure, e.g. AOS to SOA\n— Padding\n— Change parallelization scheme: 1-thread-per-task to 1-warp-per-task?\nUse shared memory to reduce global memory access, avoid \nnon-coalesced access\nBound to texture cache for unpredictable uncoalesced access\nUse constant cache if all threads in a warp will access the \nsame constant data\nGlobal Memory Throughput Metric\nMeasuring effective memory throughput:\n— From the app point of view (―useful‖ bytes): number of bytes \nneeded by the algorithm divided by kernel time\n— Compare to the theoretical bandwidth\n70-80% is very good\nFinding out bottleneck\n— Start with global memory operations, achieve good throughput\n— Add arithmetic, shared memory, etc, measuring perf as you go\nOptimization Overview\nGPU architecture\nKernel optimization\n— Memory optimization\n— Latency optimization\n— Instruction optimization\nCPU-GPU interaction optimization\n— Overlapped execution using streams\nLatency Optimization\nWhen the code is latency bound\n— Both the memory and instruction throughputs are far from the peak\nLatency hiding:\n— Instructions are issued in order\n— A thread blocks when one of the operands isn’t ready\n— Latency is hidden by switching threads\nGMEM: 400-800 cycles\nArithmetic: 18-22 cycles\nPurpose: have enough concurrency to hide latency\nMajor techniques: increase concurrency\n— Adjust resource usage to increase active warps (TLP)\nGrid/Block Size Heuristics\n# of blocks >> # of SM > 100 to scale well to future device\nBlock size should be a multiple of 32 (warp size)\nMinimum: 64. I generally use 128 or 256. 
But use whatever \nis best for your app.\nDepends on the problem, do experiments!\nOccupancy\nOccupancy: ratio of active warps per SM to the maximum \nnumber of allowed warps\n— Maximum number: 48 in Fermi\nWe need the occupancy to be high enough to hide latency\nOccupancy is limited by resource usage\nDynamical Partitioning of SM Resources\nShared memory is partitioned among blocks\nRegisters are partitioned among threads: <= 63\nThread block slots: <= 8\nThread slots: <= 1536\nAny of those can be the limiting factor on how many threads \ncan be launched at the same time on a SM\nIf adding a single instruction leads to significant perf drop, \noccupancy is the primary suspect\nLatency Hiding Occupancy Calculation\nAssume global memory takes 400 cycles, we need 400/2 = \n200 arithmetic instructions to hide the latency. \nAssume the code has 8 independent arithmetic instructions \nfor every one global memory access. Thus 200/8~26 warps \nwould be enough (54% occupancy).\nLessons:\n— Required occupancy depends on BOTH architecture and application\n— In this example , beyond 54%, higher occupancy won’t lead to \nfurther performance increase.\nOccupancy Optimizations\nKnow the current occupancy\n— Visual profiler\n— --ptxas-options=-v: output resource usage info; input to Occupancy \nCalculator\nAdjust resource usage to increase occupancy\n— Change block size\n— Limit register usage\nCompiler option –maxrregcount=n: per file\n__launch_bounds__: per kernel\nUse template to reduce register usage\n— Dynamical allocating shared memory\nOccupancy Calculator\nhttp://developer.download.nvidia.com/compute/cuda/CUDA_Occupancy_calculator.xls\nIncrease ILP of Each Thread\nLoad by itself doesn’t stall execution\nIncrement a 64M element array\n— Two accesses per thread (load then store, but they are dependent)\nThus, each warp (32 threads) has one outstanding transaction at a time\nSeveral independent \nsmaller accesses have the \nsame effect as one larger \none.\nFor example:\nFour 32-bit ~= one 128-bit\nOptimization Overview\nGPU architecture\nKernel optimization\n— Memory optimization\n— Latency optimization\n— Instruction optimization\nCPU-GPU interaction optimization\n— Overlapped execution using streams\nInstruction Optimization\nIf you find out the code is instruction bound\n— Compute-intensive algorithm can easily become memory-bound if \nnot careful enough\n— Typically, worry about instruction optimization after memory and \nexecution configuration optimizations\nPurpose: reduce instruction count\n— Use less instructions to get the same job done\nMajor techniques\n— Use high throughput instructions\n— Reduce wasted instructions: branch divergence, bank conflict, etc.\nFermi Arithmetic Instruction \nThroughputs\nThroughputs of common instructions\n— Int & fp32: 2 cycles\n— fp64: 2 cycles\n— Fp32 transendental: 8 cycles\n— Int divide and modulo are expensive\nDivide by 2^n, use ―>> n‖\nModulo 2^n, use ―& (2^n – 1)‖\nReduce Instruction Count\nAvoid automatic conversion of double to float\n— Adding ―f‖ to floating literals (e.g. 1.0f) because the default is \ndouble\nFermi default: -ftz=false, -prec-div=true, -prec-sqrt=true \nfor IEEE compliance\nFast math functions\n— Two types of runtime math library functions\nfunc(): slower but higher accuracy (5 ulp or less)\n__func(): fast but lower accuracy (see prog. 
guide for full details)\n— -use_fast_math: forces every func() to __func () \nControl Flow\nDivergent branches:\n— Threads within a single warp take different paths\n— Example with divergence: \n\nif (threadIdx.x > 2) {...} else {...}\n\nBranch granularity < warp size\n— Divergence inside a warp is processed by turning off the inactive threads\n\nDifferent if-else branches are both executes: serialized\nDifferent warps can execute different code with no impact on performance\nAvoid diverging within a warp\n— Example without divergence:\n\nif (threadIdx.x / WARP_SIZE > 2) {...} else {...}\n\nBranch granularity is a whole multiple of warp size\nKernel Optimization Workflow\nFind Limiter\nCompare to peak \nGB/s\nMemory \noptimization\nCompare to peak \nGinst/s\nInstruction \noptimization\nConfiguration \noptimization\nMemory bound\nInstruction \nbound\nLatency bound\nDone!\n<<\n<<\n~\n~\nOptimization Overview\nGPU architecture\nKernel optimization\n— Memory optimization\n— Latency optimization\n— Instruction optimization\nCPU-GPU interaction optimization\n— Overlapped execution using streams\nMinimizing CPU-GPU data transfer\nHost<->device data transfer has much lower bandwidth than \nglobal memory access.\n— 8 GB/s (PCIe x16 Gen2) vs 156 GB/s & 515 Ginst/s (C2050)\nMinimize transfer\n— Intermediate data directly on GPU\n— Recompute\n— Move CPU codes to GPU that do not have performance gains if it \ncan reduce data transfer\nGroup transfer\n— One large transfer much better than many small ones: 10 microsec\nlatency, 8 GB/s => latency dominated if data size < 80 KB\nStreams and Async API\nDefault API:\n— Kernel launches are asynchronous with CPU\n— Memcopies (D2H, H2D) block CPU thread\n— CUDA calls are serialized by the driver\nStreams and async functions provide:\n— Memcopies (D2H, H2D) asynchronous with CPU\n— Ability to concurrently execute a kernel and a memcopy\n— Concurrent kernel in Fermi\nStream = sequence of operations that execute in issue-order on GPU\n— Operations from different streams can be interleaved\n— A kernel and memcopy from different streams can be overlapped\nPinned (non-pageable) memory\nPinned memory enables:\n— memcopies asynchronous with CPU & GPU\nUsage\n— cudaHostAlloc / cudaFreeHost\ninstead of malloc / free\n— Additional flags if pinned region is to be shared between lightweight \nCPU threads\nNote:\n— pinned memory is essentially removed from virtual memory\n— cudaHostAlloc is typically very expensive\nOverlap kernel and memory copy\nRequirements:\n— D2H or H2D memcopy from pinned memory\n— Device with compute capability ≥ 1.1 (G84 and later)\n— Kernel and memcopy in different, non-0 streams\nCode:\ncudaStream_t\nstream1, stream2;\ncudaStreamCreate(&stream1);\ncudaStreamCreate(&stream2);\ncudaMemcpyAsync( dst, src, size, dir, stream1 );\nkernel<<>>(…);\npotentially\noverlapped\nSummary\nOptimization needs an understanding of GPU architecture\nMemory optimization: coalescing, shared memory\nExecution configuration: latency hiding\nInstruction throughput: use high throughput inst, reduce \nwasted cycles\nDo measurements!\n— Use the Profiler, simple code modifications\n— Compare to theoretical peaks"} {"content": "CUDA C++ Best Practices Guide\n\nThe programming guide to using the CUDA Toolkit to obtain the best performance from NVIDIA GPUs.\n\n1. Preface\n\nThis Best Practices Guide is a manual to help developers obtain the best performance from NVIDIA® CUDA® GPUs. 
It presents established parallelization and optimization techniques and explains coding metaphors and idioms that can greatly simplify programming for CUDA-capable GPU architectures.\n\nWhile the contents can be used as a reference manual, you should be aware that some topics are revisited in different contexts as various programming and configuration topics are explored. As a result, it is recommended that first-time readers proceed through the guide sequentially. This approach will greatly improve your understanding of effective programming practices and enable you to better use the guide for reference later.\n\n1.1. Who Should Read This Guide?\n\nThe discussions in this guide all use the C++ programming language, so you should be comfortable reading C++ code.\n\nThis guide refers to and relies on several other documents that you should have at your disposal for reference, all of which are available at no cost from the CUDA website https://docs.nvidia.com/cuda/. The following documents are especially important resources:\n\nCUDA Installation Guide\n\nCUDA C++ Programming Guide\n\nCUDA Toolkit Reference Manual\n\nIn particular, the optimization section of this guide assumes that you have already successfully downloaded and installed the CUDA Toolkit (if not, please refer to the relevant CUDA Installation Guide for your platform) and that you have a basic familiarity with the CUDA C++ programming language and environment (if not, please refer to the CUDA C++ Programming Guide).\n\n1.2. Assess, Parallelize, Optimize, Deploy\n\nThis guide introduces the Assess, Parallelize, Optimize, Deploy(APOD) design cycle for applications with the goal of helping application developers to rapidly identify the portions of their code that would most readily benefit from GPU acceleration, rapidly realize that benefit, and begin leveraging the resulting speedups in production as early as possible.\n\nAPOD is a cyclical process: initial speedups can be achieved, tested, and deployed with only minimal initial investment of time, at which point the cycle can begin again by identifying further optimization opportunities, seeing additional speedups, and then deploying the even faster versions of the application into production.\n\n1.2.1. Assess\n\nFor an existing project, the first step is to assess the application to locate the parts of the code that are responsible for the bulk of the execution time. Armed with this knowledge, the developer can evaluate these bottlenecks for parallelization and start to investigate GPU acceleration.\n\nBy understanding the end-user’s requirements and constraints and by applying Amdahl’s and Gustafson’s laws, the developer can determine the upper bound of performance improvement from acceleration of the identified portions of the application.\n\n1.2.2. Parallelize\n\nHaving identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler.\n\nOn the other hand, some applications’ designs will require some amount of refactoring to expose their inherent parallelism. 
As even CPU architectures will require exposing parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput.\n\n1.2.3. Optimize\n\nAfter each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible. However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned.\n\nOptimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer’s optimization efforts and provide references into the relevant portions of the optimization section of this guide.\n\n1.2.4. Deploy\n\nHaving completed the GPU acceleration of one or more components of the application it is possible to compare the outcome with the original expectation. Recall that the initial assess step allowed the developer to determine an upper bound for the potential speedup attainable by accelerating given hotspots.\n\nBefore tackling other hotspots to improve the total speedup, the developer should consider taking the partially parallelized implementation and carry it through to production. This is important for a number of reasons; for example, it allows the user to profit from their investment as early as possible (the speedup may be partial but is still valuable), and it minimizes risk for the developer and the user by providing an evolutionary rather than revolutionary set of changes to the application.\n\n1.3. Recommendations and Best Practices\n\nThroughout this guide, specific recommendations are made regarding the design and implementation of CUDA C++ code. These recommendations are categorized by priority, which is a blend of the effect of the recommendation and its scope. Actions that present substantial improvements for most CUDA applications have the highest priority, while small optimizations that affect only very specific situations are given a lower priority.\n\nBefore implementing lower priority recommendations, it is good practice to make sure all higher priority recommendations that are relevant have already been applied. This approach will tend to provide the best results for the time invested and will avoid the trap of premature optimization.\n\nThe criteria of benefit and scope for establishing priority will vary depending on the nature of the program. In this guide, they represent a typical case. Your code might reflect different priority factors. 
Regardless of this possibility, it is good practice to verify that no higher-priority recommendations have been overlooked before undertaking lower-priority items.\n\nNote\n\nCode samples throughout the guide omit error checking for conciseness. Production code should, however, systematically check the error code returned by each API call and check for failures in kernel launches by calling cudaGetLastError().\n\n1.4. Assessing Your Application\n\nFrom supercomputers to mobile phones, modern processors increasingly rely on parallelism to provide performance. The core computational unit, which includes control, arithmetic, registers and typically some cache, is replicated some number of times and connected to memory via a network. As a result, all modern processors require parallel code in order to achieve good utilization of their computational power.\n\nWhile processors are evolving to expose more fine-grained parallelism to the programmer, many existing applications have evolved either as serial codes or as coarse-grained parallel codes (for example, where the data is decomposed into regions processed in parallel, with sub-regions shared using MPI). In order to profit from any modern processor architecture, GPUs included, the first steps are to assess the application to identify the hotspots, determine whether they can be parallelized, and understand the relevant workloads both now and in the future.\n\n2. Heterogeneous Computing\n\nCUDA programming involves running code on two different platforms concurrently: a host system with one or more CPUs and one or more CUDA-enabled NVIDIA GPU devices.\n\nWhile NVIDIA GPUs are frequently associated with graphics, they are also powerful arithmetic engines capable of running thousands of lightweight threads in parallel. This capability makes them well suited to computations that can leverage parallel execution.\n\nHowever, the device is based on a distinctly different design from the host system, and it’s important to understand those differences and how they determine the performance of CUDA applications in order to use CUDA effectively.\n\n2.1. Differences between Host and Device\n\nThe primary differences are in threading model and in separate physical memories:\n\nExecution pipelines on host systems can support a limited number of concurrent threads. For example, servers that have two 32 core processors can run only 64 threads concurrently (or small multiple of that if the CPUs support simultaneous multithreading). By comparison, the smallest executable unit of parallelism on a CUDA device comprises 32 threads (termed a warp of threads). Modern NVIDIA GPUs can support up to 2048 active threads concurrently per multiprocessor (see Features and Specifications of the CUDA C++ Programming Guide) On GPUs with 80 multiprocessors, this leads to more than 160,000 concurrently active threads.\n\nThreads on a CPU are generally heavyweight entities. The operating system must swap threads on and off CPU execution channels to provide multithreading capability. Context switches (when two threads are swapped) are therefore slow and expensive. By comparison, threads on GPUs are extremely lightweight. In a typical system, thousands of threads are queued up for work (in warps of 32 threads each). If the GPU must wait on one warp of threads, it simply begins executing work on another. Because separate registers are allocated to all active threads, no swapping of registers or other state need occur when switching among GPU threads. 
Resources stay allocated to each thread until it completes its execution. In short, CPU cores are designed to minimize latency for a small number of threads at a time each, whereas GPUs are designed to handle a large number of concurrent, lightweight threads in order to maximize throughput.\n\nThe host system and the device each have their own distinct attached physical memories 1. As the host and device memories are separated, items in the host memory must occasionally be communicated between device memory and host memory as described in What Runs on a CUDA-Enabled Device?.\n\nThese are the primary hardware differences between CPU hosts and GPU devices with respect to parallel programming. Other differences are discussed as they arise elsewhere in this document. Applications composed with these differences in mind can treat the host and device together as a cohesive heterogeneous system wherein each processing unit is leveraged to do the kind of work it does best: sequential work on the host and parallel work on the device.\n\n2.2. What Runs on a CUDA-Enabled Device?\n\nThe following issues should be considered when determining what parts of an application to run on the device:\n\nThe device is ideally suited for computations that can be run on numerous data elements simultaneously in parallel. This typically involves arithmetic on large data sets (such as matrices) where the same operation can be performed across thousands, if not millions, of elements at the same time. This is a requirement for good performance on CUDA: the software must use a large number (generally thousands or tens of thousands) of concurrent threads. The support for running numerous threads in parallel derives from CUDA’s use of a lightweight threading model described above.\n\nTo use CUDA, data values must be transferred from the host to the device. These transfers are costly in terms of performance and should be minimized. (See Data Transfer Between Host and Device.) This cost has several ramifications:\n\nThe complexity of operations should justify the cost of moving data to and from the device. Code that transfers data for brief use by a small number of threads will see little or no performance benefit. The ideal scenario is one in which many threads perform a substantial amount of work.\n\nFor example, transferring two matrices to the device to perform a matrix addition and then transferring the results back to the host will not realize much performance benefit. The issue here is the number of operations performed per data element transferred. For the preceding procedure, assuming matrices of size NxN, there are N2 operations (additions) and 3N2 elements transferred, so the ratio of operations to elements transferred is 1:3 or O(1). Performance benefits can be more readily achieved when this ratio is higher. For example, a matrix multiplication of the same matrices requires N3 operations (multiply-add), so the ratio of operations to elements transferred is O(N), in which case the larger the matrix the greater the performance benefit. The types of operations are an additional factor, as additions have different complexity profiles than, for example, trigonometric functions. It is important to include the overhead of transferring data to and from the device in determining whether operations should be performed on the host or on the device.\n\nData should be kept on the device as long as possible. 
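As a rough sketch of this pattern, a sequence of kernels can operate on the same device buffer with a single transfer in each direction; the scale and offsetBy kernels and the scaleThenOffset wrapper below are made up purely for illustration:

#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float a) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) x[i] *= a;
}

__global__ void offsetBy(float *x, int n, float b) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) x[i] += b;
}

void scaleThenOffset(float *h_x, int n) {
  float *d_x;
  cudaMalloc(&d_x, n * sizeof(float));
  cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);  // single copy in

  int threads = 256;
  int blocks = (n + threads - 1) / threads;
  scale<<<blocks, threads>>>(d_x, n, 2.0f);     // intermediate result stays in d_x
  offsetBy<<<blocks, threads>>>(d_x, n, 1.0f);  // second kernel consumes it on the device

  cudaMemcpy(h_x, d_x, n * sizeof(float), cudaMemcpyDeviceToHost);  // single copy back
  cudaFree(d_x);
}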
Because transfers should be minimized, programs that run multiple kernels on the same data should favor leaving the data on the device between kernel calls, rather than transferring intermediate results to the host and then sending them back to the device for subsequent calculations. So, in the previous example, had the two matrices to be added already been on the device as a result of some previous calculation, or if the results of the addition would be used in some subsequent calculation, the matrix addition should be performed locally on the device. This approach should be used even if one of the steps in a sequence of calculations could be performed faster on the host. Even a relatively slow kernel may be advantageous if it avoids one or more transfers between host and device memory. Data Transfer Between Host and Device provides further details, including the measurements of bandwidth between the host and the device versus within the device proper.\n\nFor best performance, there should be some coherence in memory access by adjacent threads running on the device. Certain memory access patterns enable the hardware to coalesce groups of reads or writes of multiple data items into one operation. Data that cannot be laid out so as to enable coalescing, or that doesn’t have enough locality to use the L1 or texture caches effectively, will tend to see lesser speedups when used in computations on GPUs. A noteworthy exception to this are completely random memory access patterns. In general, they should be avoided, because compared to peak capabilities any architecture processes these memory access patterns at a low efficiency. However, compared to cache based architectures, like CPUs, latency hiding architectures, like GPUs, tend to cope better with completely random memory access patterns.\n\nOn Systems on a Chip with integrated GPUs, such as NVIDIA® Tegra®, host and device memory are physically the same, but there is still a logical distinction between host and device memory. See the Application Note on CUDA for Tegra for details.\n\n3. Application Profiling\n\n3.1. Profile\n\nMany codes accomplish a significant portion of the work with a relatively small amount of code. Using a profiler, the developer can identify such hotspots and start to compile a list of candidates for parallelization.\n\n3.1.1. Creating the Profile\n\nThere are many possible approaches to profiling the code, but in all cases the objective is the same: to identify the function or functions in which the application is spending most of its execution time.\n\nNote\n\nHigh Priority: To maximize developer productivity, profile the application to determine hotspots and bottlenecks.\n\nThe most important consideration with any profiling activity is to ensure that the workload is realistic - i.e., that information gained from the test and decisions based upon that information are relevant to real data. Using unrealistic workloads can lead to sub-optimal results and wasted effort both by causing developers to optimize for unrealistic problem sizes and by causing developers to concentrate on the wrong functions.\n\nThere are a number of tools that can be used to generate the profile. 
The following example is based on gprof, which is an open-source profiler for Linux platforms from the GNU Binutils collection.\n\n$ gcc -O2 -g -pg myprog.c\n$ gprof ./a.out > profile.txt\nEach sample counts as 0.01 seconds.\n % cumulative self self total\n time seconds seconds calls ms/call ms/call name\n 33.34 0.02 0.02 7208 0.00 0.00 genTimeStep\n 16.67 0.03 0.01 240 0.04 0.12 calcStats\n 16.67 0.04 0.01 8 1.25 1.25 calcSummaryData\n 16.67 0.05 0.01 7 1.43 1.43 write\n 16.67 0.06 0.01 mcount\n 0.00 0.06 0.00 236 0.00 0.00 tzset\n 0.00 0.06 0.00 192 0.00 0.00 tolower\n 0.00 0.06 0.00 47 0.00 0.00 strlen\n 0.00 0.06 0.00 45 0.00 0.00 strchr\n 0.00 0.06 0.00 1 0.00 50.00 main\n 0.00 0.06 0.00 1 0.00 0.00 memcpy\n 0.00 0.06 0.00 1 0.00 10.11 print\n 0.00 0.06 0.00 1 0.00 0.00 profil\n 0.00 0.06 0.00 1 0.00 50.00 report\n\n3.1.2. Identifying Hotspots\n\nIn the example above, we can clearly see that the function genTimeStep() takes one-third of the total running time of the application. This should be our first candidate function for parallelization. Understanding Scaling discusses the potential benefit we might expect from such parallelization.\n\nIt is worth noting that several of the other functions in the above example also take up a significant portion of the overall running time, such as calcStats() and calcSummaryData(). Parallelizing these functions as well should increase our speedup potential. However, since APOD is a cyclical process, we might opt to parallelize these functions in a subsequent APOD pass, thereby limiting the scope of our work in any given pass to a smaller set of incremental changes.\n\n3.1.3. Understanding Scaling\n\nThe amount of performance benefit an application will realize by running on CUDA depends entirely on the extent to which it can be parallelized. Code that cannot be sufficiently parallelized should run on the host, unless doing so would result in excessive transfers between the host and the device.\n\nNote\n\nHigh Priority: To get the maximum benefit from CUDA, focus first on finding ways to parallelize sequential code.\n\nBy understanding how applications can scale it is possible to set expectations and plan an incremental parallelization strategy. Strong Scaling and Amdahl’s Law describes strong scaling, which allows us to set an upper bound for the speedup with a fixed problem size. Weak Scaling and Gustafson’s Law describes weak scaling, where the speedup is attained by growing the problem size. In many applications, a combination of strong and weak scaling is desirable.\n\n3.1.3.1. Strong Scaling and Amdahl’s Law\n\nStrong scaling is a measure of how, for a fixed overall problem size, the time to solution decreases as more processors are added to a system. An application that exhibits linear strong scaling has a speedup equal to the number of processors used.\n\nStrong scaling is usually equated with Amdahl’s Law, which specifies the maximum speedup that can be expected by parallelizing portions of a serial program. Essentially, it states that the maximum speedup S of a program is:\n\n\\(S = \\frac{1}{(1 - P) + \\frac{P}{N}}\\)\n\nHere P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.\n\nThe larger N is(that is, the greater the number of processors), the smaller the P/N fraction. It can be simpler to view N as a very large number, which essentially transforms the equation into \\(S = 1/(1 - P)\\). 
Now, if 3/4 of the running time of a sequential program is parallelized, the maximum speedup over serial code is 1 / (1 - 3/4) = 4.\n\nIn reality, most applications do not exhibit perfectly linear strong scaling, even if they do exhibit some degree of strong scaling. For most purposes, the key point is that the larger the parallelizable portion P is, the greater the potential speedup. Conversely, if P is a small number (meaning that the application is not substantially parallelizable), increasing the number of processors N does little to improve performance. Therefore, to get the largest speedup for a fixed problem size, it is worthwhile to spend effort on increasing P, maximizing the amount of code that can be parallelized.\n\n3.1.3.2. Weak Scaling and Gustafson’s Law\n\nWeak scaling is a measure of how the time to solution changes as more processors are added to a system with a fixed problem size per processor; i.e., where the overall problem size increases as the number of processors is increased.\n\nWeak scaling is often equated with Gustafson’s Law, which states that in practice, the problem size scales with the number of processors. Because of this, the maximum speedup S of a program is:\n\n\\(S = N + (1 - P)(1 - N)\\)\n\nHere P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.\n\nAnother way of looking at Gustafson’s Law is that it is not the problem size that remains constant as we scale up the system but rather the execution time. Note that Gustafson’s Law assumes that the ratio of serial to parallel execution remains constant, reflecting additional cost in setting up and handling the larger problem.\n\n3.1.3.3. Applying Strong and Weak Scaling\n\nUnderstanding which type of scaling is most applicable to an application is an important part of estimating speedup. For some applications the problem size will remain constant and hence only strong scaling is applicable. An example would be modeling how two molecules interact with each other, where the molecule sizes are fixed.\n\nFor other applications, the problem size will grow to fill the available processors. Examples include modeling fluids or structures as meshes or grids and some Monte Carlo simulations, where increasing the problem size provides increased accuracy.\n\nHaving understood the application profile, the developer should understand how the problem size would change if the computational performance changes and then apply either Amdahl’s or Gustafson’s Law to determine an upper bound for the speedup.\n\n4. Parallelizing Your Application\n\nHaving identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler.\n\nOn the other hand, some applications’ designs will require some amount of refactoring to expose their inherent parallelism. As even CPU architectures require exposing this parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) 
aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput.\n\n5. Getting Started\n\nThere are several key strategies for parallelizing sequential code. While the details of how to apply these strategies to a particular application is a complex and problem-specific topic, the general themes listed here apply regardless of whether we are parallelizing code to run on for multicore CPUs or for use on CUDA GPUs.\n\n5.1. Parallel Libraries\n\nThe most straightforward approach to parallelizing an application is to leverage existing libraries that take advantage of parallel architectures on our behalf. The CUDA Toolkit includes a number of such libraries that have been fine-tuned for NVIDIA CUDA GPUs, such as cuBLAS, cuFFT, and so on.\n\nThe key here is that libraries are most useful when they match well with the needs of the application. Applications already using other BLAS libraries can often quite easily switch to cuBLAS, for example, whereas applications that do little to no linear algebra will have little use for cuBLAS. The same goes for other CUDA Toolkit libraries: cuFFT has an interface similar to that of FFTW, etc.\n\nAlso of note is the Thrust library, which is a parallel C++ template library similar to the C++ Standard Template Library. Thrust provides a rich collection of data parallel primitives such as scan, sort, and reduce, which can be composed together to implement complex algorithms with concise, readable source code. By describing your computation in terms of these high-level abstractions you provide Thrust with the freedom to select the most efficient implementation automatically. As a result, Thrust can be utilized in rapid prototyping of CUDA applications, where programmer productivity matters most, as well as in production, where robustness and absolute performance are crucial.\n\n5.2. Parallelizing Compilers\n\nAnother common approach to parallelization of sequential codes is to make use of parallelizing compilers. Often this means the use of directives-based approaches, where the programmer uses a pragma or other similar notation to provide hints to the compiler about where parallelism can be found without needing to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow the compiler to do the detailed work of mapping the computation onto the parallel architecture.\n\nThe OpenACC standard provides a set of compiler directives to specify loops and regions of code in standard C, C++ and Fortran that should be offloaded from a host CPU to an attached accelerator such as a CUDA GPU. The details of managing the accelerator device are handled implicitly by an OpenACC-enabled compiler and runtime.\n\nSee http://www.openacc.org/ for details.\n\n5.3. Coding to Expose Parallelism\n\nFor applications that need additional functionality or performance beyond what existing parallel libraries or parallelizing compilers can provide, parallel programming languages such as CUDA C++ that integrate seamlessly with existing sequential code are essential.\n\nOnce we have located a hotspot in our application’s profile assessment and determined that custom code is the best approach, we can use CUDA C++ to expose the parallelism in that portion of our code as a CUDA kernel. 
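For example, a hotspot as simple as an element-wise update loop might be expressed as a kernel along the following lines (a minimal sketch; the saxpy name and signature are illustrative and not taken from this guide):

// Hypothetical hotspot: y[i] = a*x[i] + y[i] over a large array, one element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = a * x[i] + y[i];
}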
We can then launch this kernel onto the GPU and retrieve the results without requiring major rewrites to the rest of our application.\n\nThis approach is most straightforward when the majority of the total running time of our application is spent in a few relatively isolated portions of the code. More difficult to parallelize are applications with a very flat profile - i.e., applications where the time spent is spread out relatively evenly across a wide portion of the code base. For the latter variety of application, some degree of code refactoring to expose the inherent parallelism in the application might be necessary, but keep in mind that this refactoring work will tend to benefit all future architectures, CPU and GPU alike, so it is well worth the effort should it become necessary.\n\n6. Getting the Right Answer\n\nObtaining the right answer is clearly the principal goal of all computation. On parallel systems, it is possible to run into difficulties not typically found in traditional serial-oriented programming. These include threading issues, unexpected values due to the way floating-point values are computed, and challenges arising from differences in the way CPU and GPU processors operate. This chapter examines issues that can affect the correctness of returned data and points to appropriate solutions.\n\n6.1. Verification\n\n6.1.1. Reference Comparison\n\nA key aspect of correctness verification for modifications to any existing program is to establish some mechanism whereby previous known-good reference outputs from representative inputs can be compared to new results. After each change is made, ensure that the results match using whatever criteria apply to the particular algorithm. Some will expect bitwise identical results, which is not always possible, especially where floating-point arithmetic is concerned; see Numerical Accuracy and Precision regarding numerical accuracy. For other algorithms, implementations may be considered correct if they match the reference within some small epsilon.\n\nNote that the process used for validating numerical results can easily be extended to validate performance results as well. We want to ensure that each change we make is correct and that it improves performance (and by how much). Checking these things frequently as an integral part of our cyclical APOD process will help ensure that we achieve the desired results as rapidly as possible.\n\n6.1.2. Unit Testing\n\nA useful counterpart to the reference comparisons described above is to structure the code itself in such a way that is readily verifiable at the unit level. For example, we can write our CUDA kernels as a collection of many short __device__ functions rather than one large monolithic __global__ function; each device function can be tested independently before hooking them all together.\n\nFor example, many kernels have complex addressing logic for accessing memory in addition to their actual computation. If we validate our addressing logic separately prior to introducing the bulk of the computation, then this will simplify any later debugging efforts. 
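One possible shape for such a test, assuming a hypothetical row-major addressing helper (the rowMajorIndex and writeIndices names are illustrative only):

// Addressing helper that can be unit-tested on the host before any kernel is written.
__host__ __device__ inline int rowMajorIndex(int row, int col, int width) {
  return row * width + col;
}

// Test kernel: stores the computed index to global memory so the addressing
// logic has an observable result that can be copied back and checked on the host.
__global__ void writeIndices(int *out, int width, int height) {
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  if (row < height && col < width) {
    out[rowMajorIndex(row, col, width)] = rowMajorIndex(row, col, width);
  }
}

Host-side assertions can exercise rowMajorIndex() directly, while a small launch of writeIndices() checks the same logic on the device.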
(Note that the CUDA compiler considers any device code that does not contribute to a write to global memory as dead code subject to elimination, so we must at least write something out to global memory as a result of our addressing logic in order to successfully apply this strategy.)\n\nGoing a step further, if most functions are defined as __host__ __device__ rather than just __device__ functions, then these functions can be tested on both the CPU and the GPU, thereby increasing our confidence that the function is correct and that there will not be any unexpected differences in the results. If there are differences, then those differences will be seen early and can be understood in the context of a simple function.\n\nAs a useful side effect, this strategy will allow us a means to reduce code duplication should we wish to include both CPU and GPU execution paths in our application: if the bulk of the work of our CUDA kernels is done in __host__ __device__ functions, we can easily call those functions from both the host code and the device code without duplication.\n\n6.2. Debugging\n\nCUDA-GDB is a port of the GNU Debugger that runs on Linux and Mac; see: https://developer.nvidia.com/cuda-gdb.\n\nThe NVIDIA Nsight Visual Studio Edition is available as a free plugin for Microsoft Visual Studio; see: https://developer.nvidia.com/nsight-visual-studio-edition.\n\nSeveral third-party debuggers support CUDA debugging as well; see: https://developer.nvidia.com/debugging-solutions for more details.\n\n6.3. Numerical Accuracy and Precision\n\nIncorrect or unexpected results arise principally from issues of floating-point accuracy due to the way floating-point values are computed and stored. The following sections explain the principal items of interest. Other peculiarities of floating-point arithmetic are presented in Features and Technical Specifications of the CUDA C++ Programming Guide as well as in a whitepaper and accompanying webinar on floating-point precision and performance available from https://developer.nvidia.com/content/precision-performance-floating-point-and-ieee-754-compliance-nvidia-gpus.\n\n6.3.1. Single vs. Double Precision\n\nDevices of CUDA Compute Capability 1.3 and higher provide native support for double-precision floating-point values (that is, values 64 bits wide). Results obtained using double-precision arithmetic will frequently differ from the same operation performed via single-precision arithmetic due to the greater precision of the former and due to rounding issues. Therefore, it is important to be sure to compare values of like precision and to express the results within a certain tolerance rather than expecting them to be exact.\n\n6.3.2. Floating Point Math Is Not Associative\n\nEach floating-point arithmetic operation involves a certain amount of rounding. Consequently, the order in which arithmetic operations are performed is important. If A, B, and C are floating-point values, (A+B)+C is not guaranteed to equal A+(B+C) as it is in symbolic math. When you parallelize computations, you potentially change the order of operations and therefore the parallel results might not match sequential results. This limitation is not specific to CUDA, but an inherent part of parallel computation on floating-point values.\n\n6.3.3. IEEE 754 Compliance\n\nAll CUDA compute devices follow the IEEE 754 standard for binary floating-point representation, with some small exceptions. 
These exceptions, which are detailed in Features and Technical Specifications of the CUDA C++ Programming Guide, can lead to results that differ from IEEE 754 values computed on the host system.\n\nOne of the key differences is the fused multiply-add (FMA) instruction, which combines multiply-add operations into a single instruction execution. Its result will often differ slightly from results obtained by doing the two operations separately.\n\n6.3.4. x86 80-bit Computations\n\nx86 processors can use an 80-bit double extended precision math when performing floating-point calculations. The results of these calculations can frequently differ from pure 64-bit operations performed on the CUDA device. To get a closer match between values, set the x86 host processor to use regular double or single precision (64 bits and 32 bits, respectively). This is done with the FLDCW x86 assembly instruction or the equivalent operating system API.\n\n7. Optimizing CUDA Applications\n\nAfter each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible. However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned.\n\nOptimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer’s optimization efforts and provide references into the relevant portions of the optimization section of this guide.\n\n8. Performance Metrics\n\nWhen attempting to optimize CUDA code, it pays to know how to measure performance accurately and to understand the role that bandwidth plays in performance measurement. This chapter discusses how to correctly measure performance using CPU timers and CUDA events. It then explores how bandwidth affects performance metrics and how to mitigate some of the challenges it poses.\n\n8.1. Timing\n\nCUDA calls and kernel executions can be timed using either CPU or GPU timers. This section examines the functionality, advantages, and pitfalls of both approaches.\n\n8.1.1. Using CPU Timers\n\nAny CPU timer can be used to measure the elapsed time of a CUDA call or kernel execution. The details of various CPU timing approaches are outside the scope of this document, but developers should always be aware of the resolution their timing calls provide.\n\nWhen using CPU timers, it is critical to remember that many CUDA API functions are asynchronous; that is, they return control back to the calling CPU thread prior to completing their work. All kernel launches are asynchronous, as are memory-copy functions with the Async suffix on their names. 
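In practice this means a host-side wall-clock measurement has to bracket the work with explicit synchronization; one possible shape, assuming a hypothetical myKernel and using std::chrono (a sketch only):

#include <chrono>
#include <cuda_runtime.h>

__global__ void myKernel(float *data, int n);  // hypothetical kernel, defined elsewhere

float timeKernelMs(dim3 grid, dim3 block, float *d_data, int n) {
  cudaDeviceSynchronize();                       // drain previously issued work first
  auto start = std::chrono::steady_clock::now();

  myKernel<<<grid, block>>>(d_data, n);          // launch returns to the host immediately

  cudaDeviceSynchronize();                       // wait until the kernel has finished
  auto stop = std::chrono::steady_clock::now();
  return std::chrono::duration<float, std::milli>(stop - start).count();
}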
Therefore, to accurately measure the elapsed time for a particular call or sequence of CUDA calls, it is necessary to synchronize the CPU thread with the GPU by calling cudaDeviceSynchronize() immediately before starting and stopping the CPU timer. cudaDeviceSynchronize() blocks the calling CPU thread until all CUDA calls previously issued by the thread are completed.

Although it is also possible to synchronize the CPU thread with a particular stream or event on the GPU, these synchronization functions are not suitable for timing code in streams other than the default stream. cudaStreamSynchronize() blocks the CPU thread until all CUDA calls previously issued into the given stream have completed. cudaEventSynchronize() blocks until a given event in a particular stream has been recorded by the GPU. Because the driver may interleave execution of CUDA calls from other non-default streams, calls in other streams may be included in the timing.

Because the default stream, stream 0, exhibits serializing behavior for work on the device (an operation in the default stream can begin only after all preceding calls in any stream have completed; and no subsequent operation in any stream can begin until it finishes), these functions can be used reliably for timing in the default stream.

Be aware that CPU-to-GPU synchronization points such as those mentioned in this section imply a stall in the GPU’s processing pipeline and should thus be used sparingly to minimize their performance impact.

8.1.2. Using CUDA GPU Timers

The CUDA event API provides calls that create and destroy events, record events (including a timestamp), and convert timestamp differences into a floating-point value in milliseconds. How to time code using CUDA events illustrates their use.

How to time code using CUDA events

cudaEvent_t start, stop;
float time;

cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, 0);
kernel<<<grid, threads>>>(d_odata, d_idata, size_x, size_y, NUM_REPS);
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);

cudaEventElapsedTime(&time, start, stop);
cudaEventDestroy(start);
cudaEventDestroy(stop);

Here cudaEventRecord() is used to place the start and stop events into the default stream, stream 0. The device will record a timestamp for the event when it reaches that event in the stream. The cudaEventElapsedTime() function returns the time elapsed between the recording of the start and stop events. This value is expressed in milliseconds and has a resolution of approximately half a microsecond. As with the other calls in this listing, its specific operation, parameters, and return values are described in the CUDA Toolkit Reference Manual. Note that the timings are measured on the GPU clock, so the timing resolution is operating-system-independent.

8.2. Bandwidth

Bandwidth - the rate at which data can be transferred - is one of the most important gating factors for performance. Almost all changes to code should be made in the context of how they affect bandwidth. As described in Memory Optimizations of this guide, bandwidth can be dramatically affected by the choice of memory in which data is stored, how the data is laid out and the order in which it is accessed, as well as other factors.

To measure performance accurately, it is useful to calculate theoretical and effective bandwidth.
When the latter is much lower than the former, design or implementation details are likely to reduce bandwidth, and it should be the primary goal of subsequent optimization efforts to increase it.\n\nNote\n\nHigh Priority: Use the effective bandwidth of your computation as a metric when measuring performance and optimization benefits.\n\n8.2.1. Theoretical Bandwidth Calculation\n\nTheoretical bandwidth can be calculated using hardware specifications available in the product literature. For example, the NVIDIA Tesla V100 uses HBM2 (double data rate) RAM with a memory clock rate of 877 MHz and a 4096-bit-wide memory interface.\n\nUsing these data items, the peak theoretical memory bandwidth of the NVIDIA Tesla V100 is 898 GB/s:\n\n\\(\\left. \\left( 0.877 \\times 10^{9} \\right. \\times (4096/8) \\times 2 \\right) \\div 10^{9} = 898\\text{GB/s}\\)\n\nIn this calculation, the memory clock rate is converted in to Hz, multiplied by the interface width (divided by 8, to convert bits to bytes) and multiplied by 2 due to the double data rate. Finally, this product is divided by 109 to convert the result to GB/s.\n\nNote\n\nSome calculations use 10243 instead of 109 for the final calculation. In such a case, the bandwidth would be 836.4 GiB/s. It is important to use the same divisor when calculating theoretical and effective bandwidth so that the comparison is valid.\n\nNote\n\nOn GPUs with GDDR memory with ECC enabled the available DRAM is reduced by 6.25% to allow for the storage of ECC bits. Fetching ECC bits for each memory transaction also reduced the effective bandwidth by approximately 20% compared to the same GPU with ECC disabled, though the exact impact of ECC on bandwidth can be higher and depends on the memory access pattern. HBM2 memories, on the other hand, provide dedicated ECC resources, allowing overhead-free ECC protection.2\n\n8.2.2. Effective Bandwidth Calculation\n\nEffective bandwidth is calculated by timing specific program activities and by knowing how data is accessed by the program. To do so, use this equation:\n\n\\(\\text{Effective\\ bandwidth} = \\left( {\\left( B_{r} + B_{w} \\right) \\div 10^{9}} \\right) \\div \\text{time}\\)\n\nHere, the effective bandwidth is in units of GB/s, Br is the number of bytes read per kernel, Bw is the number of bytes written per kernel, and time is given in seconds.\n\nFor example, to compute the effective bandwidth of a 2048 x 2048 matrix copy, the following formula could be used:\n\n\\(\\text{Effective\\ bandwidth} = \\left( {\\left( 2048^{2} \\times 4 \\times 2 \\right) \\div 10^{9}} \\right) \\div \\text{time}\\)\n\nThe number of elements is multiplied by the size of each element (4 bytes for a float), multiplied by 2 (because of the read and write), divided by 109 (or 1,0243) to obtain GB of memory transferred. This number is divided by the time in seconds to obtain GB/s.\n\n8.2.3. Throughput Reported by Visual Profiler\n\nFor devices with compute capability of 2.0 or greater, the Visual Profiler can be used to collect several different memory throughput measures. 
The following throughput metrics can be displayed in the Details or Detail Graphs view:\n\nRequested Global Load Throughput\n\nRequested Global Store Throughput\n\nGlobal Load Throughput\n\nGlobal Store Throughput\n\nDRAM Read Throughput\n\nDRAM Write Throughput\n\nThe Requested Global Load Throughput and Requested Global Store Throughput values indicate the global memory throughput requested by the kernel and therefore correspond to the effective bandwidth obtained by the calculation shown under Effective Bandwidth Calculation.\n\nBecause the minimum memory transaction size is larger than most word sizes, the actual memory throughput required for a kernel can include the transfer of data not used by the kernel. For global memory accesses, this actual throughput is reported by the Global Load Throughput and Global Store Throughput values.\n\nIt’s important to note that both numbers are useful. The actual memory throughput shows how close the code is to the hardware limit, and a comparison of the effective or requested bandwidth to the actual bandwidth presents a good estimate of how much bandwidth is wasted by suboptimal coalescing of memory accesses (see Coalesced Access to Global Memory). For global memory accesses, this comparison of requested memory bandwidth to actual memory bandwidth is reported by the Global Memory Load Efficiency and Global Memory Store Efficiency metrics.\n\nAs an exception, scattered writes to HBM2 see some overhead from ECC but much less than the overhead with similar access patterns on ECC-protected GDDR5 memory.\n\n9. Memory Optimizations\n\nMemory optimizations are the most important area for performance. The goal is to maximize the use of the hardware by maximizing bandwidth. Bandwidth is best served by using as much fast memory and as little slow-access memory as possible. This chapter discusses the various kinds of memory on the host and device and how best to set up data items to use the memory effectively.\n\n9.1. Data Transfer Between Host and Device\n\nThe peak theoretical bandwidth between the device memory and the GPU is much higher (898 GB/s on the NVIDIA Tesla V100, for example) than the peak theoretical bandwidth between host memory and device memory (16 GB/s on the PCIe x16 Gen3). Hence, for best overall application performance, it is important to minimize data transfer between the host and the device, even if that means running kernels on the GPU that do not demonstrate any speedup compared with running them on the host CPU.\n\nNote\n\nHigh Priority: Minimize data transfer between the host and the device, even if it means running some kernels on the device that do not show performance gains when compared with running them on the host CPU.\n\nIntermediate data structures should be created in device memory, operated on by the device, and destroyed without ever being mapped by the host or copied to host memory.\n\nAlso, because of the overhead associated with each transfer, batching many small transfers into one larger transfer performs significantly better than making each transfer separately, even if doing so requires packing non-contiguous regions of memory into a contiguous buffer and then unpacking after the transfer.\n\nFinally, higher bandwidth between the host and the device is achieved when using page-locked (or pinned) memory, as discussed in the CUDA C++ Programming Guide and the Pinned Memory section of this document.\n\n9.1.1. 
Pinned Memory\n\nPage-locked or pinned memory transfers attain the highest bandwidth between the host and the device. On PCIe x16 Gen3 cards, for example, pinned memory can attain roughly 12 GB/s transfer rates.\n\nPinned memory is allocated using the cudaHostAlloc() functions in the Runtime API. The bandwidthTest CUDA Sample shows how to use these functions as well as how to measure memory transfer performance.\n\nFor regions of system memory that have already been pre-allocated, cudaHostRegister() can be used to pin the memory on-the-fly without the need to allocate a separate buffer and copy the data into it.\n\nPinned memory should not be overused. Excessive use can reduce overall system performance because pinned memory is a scarce resource, but how much is too much is difficult to know in advance. Furthermore, the pinning of system memory is a heavyweight operation compared to most normal system memory allocations, so as with all optimizations, test the application and the systems it runs on for optimal performance parameters.\n\n9.1.2. Asynchronous and Overlapping Transfers with Computation\n\nData transfers between the host and the device using cudaMemcpy() are blocking transfers; that is, control is returned to the host thread only after the data transfer is complete. The cudaMemcpyAsync() function is a non-blocking variant of cudaMemcpy() in which control is returned immediately to the host thread. In contrast with cudaMemcpy(), the asynchronous transfer version requires pinned host memory (see Pinned Memory), and it contains an additional argument, a stream ID. A stream is simply a sequence of operations that are performed in order on the device. Operations in different streams can be interleaved and in some cases overlapped - a property that can be used to hide data transfers between the host and the device.\n\nAsynchronous transfers enable overlap of data transfers with computation in two different ways. On all CUDA-enabled devices, it is possible to overlap host computation with asynchronous data transfers and with device computations. For example, Asynchronous and Overlapping Transfers with Computation demonstrates how host computation in the routine cpuFunction() is performed while data is transferred to the device and a kernel using the device is executed.\n\nOverlapping computation and data transfers\n\ncudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, 0);\nkernel<<>>(a_d);\ncpuFunction();\n\nThe last argument to the cudaMemcpyAsync() function is the stream ID, which in this case uses the default stream, stream 0. The kernel also uses the default stream, and it will not begin execution until the memory copy completes; therefore, no explicit synchronization is needed. Because the memory copy and the kernel both return control to the host immediately, the host function cpuFunction() overlaps their execution.\n\nIn Asynchronous and Overlapping Transfers with Computation, the memory copy and kernel execution occur sequentially. On devices that are capable of concurrent copy and compute, it is possible to overlap kernel execution on the device with data transfers between the host and the device. Whether a device has this capability is indicated by the asyncEngineCount field of the cudaDeviceProp structure (or listed in the output of the deviceQuery CUDA Sample). 
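For example, the field can be queried at runtime roughly as follows (a minimal sketch; canOverlapCopyAndCompute is an illustrative helper, not part of the toolkit):

#include <cstdio>
#include <cuda_runtime.h>

// Returns true when the device has at least one copy engine and can therefore
// overlap an asynchronous memcopy with kernel execution.
bool canOverlapCopyAndCompute(int device) {
  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, device);
  std::printf("asyncEngineCount = %d\n", prop.asyncEngineCount);
  return prop.asyncEngineCount > 0;  // 1 = one copy engine, 2 = one per direction
}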
On devices that have this capability, the overlap once again requires pinned host memory, and, in addition, the data transfer and kernel must use different, non-default streams (streams with non-zero stream IDs). Non-default streams are required for this overlap because memory copy, memory set functions, and kernel calls that use the default stream begin only after all preceding calls on the device (in any stream) have completed, and no operation on the device (in any stream) commences until they are finished.

Asynchronous and Overlapping Transfers with Computation illustrates the basic technique.

Concurrent copy and execute

cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
cudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, stream1);
kernel<<<grid, block, 0, stream2>>>(otherData_d);

In this code, two streams are created and used in the data transfer and kernel executions as specified in the last arguments of the cudaMemcpyAsync call and the kernel’s execution configuration.

Asynchronous and Overlapping Transfers with Computation demonstrates how to overlap kernel execution with asynchronous data transfer. This technique could be used when the data dependency is such that the data can be broken into chunks and transferred in multiple stages, launching multiple kernels to operate on each chunk as it arrives. Sequential copy and execute and Staged concurrent copy and execute demonstrate this. They produce equivalent results. The first segment shows the reference sequential implementation, which transfers and operates on an array of N floats (where N is assumed to be evenly divisible by nThreads).

Sequential copy and execute

cudaMemcpy(a_d, a_h, N*sizeof(float), dir);
kernel<<<N/nThreads, nThreads>>>(a_d);

Staged concurrent copy and execute shows how the transfer and kernel execution can be broken up into nStreams stages. This approach permits some overlapping of the data transfer and execution.

Staged concurrent copy and execute

size = N*sizeof(float)/nStreams;
for (i = 0; i < nStreams; i++) {
    offset = i*N/nStreams;
    cudaMemcpyAsync(a_d+offset, a_h+offset, size, dir, stream[i]);
    kernel<<<N/(nThreads*nStreams), nThreads, 0, stream[i]>>>(a_d+offset);
}

(In Staged concurrent copy and execute, it is assumed that N is evenly divisible by nThreads*nStreams.) Because execution within a stream occurs sequentially, none of the kernels will launch until the data transfers in their respective streams complete. Current GPUs can simultaneously process asynchronous data transfers and execute kernels. GPUs with a single copy engine can perform one asynchronous data transfer and execute kernels whereas GPUs with two copy engines can simultaneously perform one asynchronous data transfer from the host to the device, one asynchronous data transfer from the device to the host, and execute kernels. The number of copy engines on a GPU is given by the asyncEngineCount field of the cudaDeviceProp structure, which is also listed in the output of the deviceQuery CUDA Sample. (It should be mentioned that it is not possible to overlap a blocking transfer with an asynchronous transfer, because the blocking transfer occurs in the default stream, so it will not begin until all previous CUDA calls complete. It will not allow any other CUDA call to begin until it has completed.) A diagram depicting the timeline of execution for the two code segments is shown in Figure 1, with nStreams equal to 4 for Staged concurrent copy and execute in the bottom half of the figure.

Figure 1 Timeline comparison for copy and kernel execution (top: Sequential; bottom: Concurrent)

For this example, it is assumed that the data transfer and kernel execution times are comparable.
In such cases, and when the execution time (tE) exceeds the transfer time (tT), a rough estimate for the overall time is tE + tT/nStreams for the staged version versus tE + tT for the sequential version. If the transfer time exceeds the execution time, a rough estimate for the overall time is tT + tE/nStreams.\n\n9.1.3. Zero Copy\n\nZero copy is a feature that was added in version 2.2 of the CUDA Toolkit. It enables GPU threads to directly access host memory. For this purpose, it requires mapped pinned (non-pageable) memory. On integrated GPUs (i.e., GPUs with the integrated field of the CUDA device properties structure set to 1), mapped pinned memory is always a performance gain because it avoids superfluous copies as integrated GPU and CPU memory are physically the same. On discrete GPUs, mapped pinned memory is advantageous only in certain cases. Because the data is not cached on the GPU, mapped pinned memory should be read or written only once, and the global loads and stores that read and write the memory should be coalesced. Zero copy can be used in place of streams because kernel-originated data transfers automatically overlap kernel execution without the overhead of setting up and determining the optimal number of streams.\n\nNote\n\nLow Priority: Use zero-copy operations on integrated GPUs for CUDA Toolkit version 2.2 and later.\n\nThe host code in Zero-copy host code shows how zero copy is typically set up.\n\nZero-copy host code\n\nfloat *a_h, *a_map;\n...\ncudaGetDeviceProperties(&prop, 0);\nif (!prop.canMapHostMemory)\n exit(0);\ncudaSetDeviceFlags(cudaDeviceMapHost);\ncudaHostAlloc(&a_h, nBytes, cudaHostAllocMapped);\ncudaHostGetDevicePointer(&a_map, a_h, 0);\nkernel<<>>(a_map);\n\nIn this code, the canMapHostMemory field of the structure returned by cudaGetDeviceProperties() is used to check that the device\nsupports mapping host memory to the device’s address space. Page-locked memory mapping is enabled by calling cudaSetDeviceFlags()\nwith cudaDeviceMapHost. Note that cudaSetDeviceFlags() must be called prior to setting a device or making a CUDA call that\nrequires state (that is, essentially, before a context is created). Page-locked mapped host memory is allocated using cudaHostAlloc(),\nand the pointer to the mapped device address space is obtained via the function cudaHostGetDevicePointer(). In the code\nin Zero-copy host code, kernel() can reference the mapped pinned host memory using the pointer a_map in exactly the\nsame was as it would if a_map referred to a location in device memory.\n\nNote\n\nMapped pinned host memory allows you to overlap CPU-GPU memory transfers with computation while avoiding the use of CUDA streams. But since any repeated access to such memory areas causes repeated CPU-GPU transfers, consider creating a second area in device memory to manually cache the previously read host memory data.\n\n9.1.4. Unified Virtual Addressing\n\nDevices of compute capability 2.0 and later support a special addressing mode called Unified Virtual Addressing (UVA) on 64-bit Linux and Windows. With UVA, the host memory and the device memories of all installed supported devices share a single virtual address space.\n\nPrior to UVA, an application had to keep track of which pointers referred to device memory (and for which device) and which referred to host memory as a separate bit of metadata (or as hard-coded information in the program) for each pointer. 
Under UVA, pinned host memory allocated with cudaHostAlloc() will have identical host and device pointers, so it is not necessary to call cudaHostGetDevicePointer() for such allocations. Host memory allocations pinned after-the-fact via cudaHostRegister(), however, will continue to have different device pointers than their host pointers, so cudaHostGetDevicePointer() remains necessary in that case.

UVA is also a necessary precondition for enabling peer-to-peer (P2P) transfer of data directly across the PCIe bus or NVLink for supported GPUs in supported configurations, bypassing host memory.

See the CUDA C++ Programming Guide for further explanations and software requirements for UVA and P2P.

9.2. Device Memory Spaces

CUDA devices use several memory spaces, which have different characteristics that reflect their distinct usages in CUDA applications. These memory spaces include global, local, shared, texture, and registers, as shown in Figure 2.

Figure 2 Memory spaces on a CUDA device

Of these different memory spaces, global memory is the most plentiful; see Features and Technical Specifications of the CUDA C++ Programming Guide for the amounts of memory available in each memory space at each compute capability level. Global, local, and texture memory have the greatest access latency, followed by constant memory, shared memory, and the register file.

The various principal traits of the memory types are shown in Table 1.

Memory   | Location on/off chip | Cached | Access | Scope                | Lifetime
Register | On                   | n/a    | R/W    | 1 thread             | Thread
Local    | Off                  | Yes††  | R/W    | 1 thread             | Thread
Shared   | On                   | n/a    | R/W    | All threads in block | Block
Global   | Off                  | †      | R/W    | All threads + host   | Host allocation
Constant | Off                  | Yes    | R      | All threads + host   | Host allocation
Texture  | Off                  | Yes    | R      | All threads + host   | Host allocation

† Cached in L1 and L2 by default on devices of compute capability 6.0 and 7.x; cached only in L2 by default on devices of lower compute capabilities, though some allow opt-in to caching in L1 as well via compilation flags.

†† Cached in L1 and L2 by default except on devices of compute capability 5.x; devices of compute capability 5.x cache locals only in L2.

In the case of texture access, if a texture reference is bound to a linear array in global memory, then the device code can write to the underlying array. Texture references that are bound to CUDA arrays can be written to via surface-write operations by binding a surface to the same underlying CUDA array storage. Reading from a texture while writing to its underlying global memory array in the same kernel launch should be avoided because the texture caches are read-only and are not invalidated when the associated global memory is modified.

9.2.1. Coalesced Access to Global Memory

A very important performance consideration in programming for CUDA-capable GPU architectures is the coalescing of global memory accesses.
Global memory loads and stores by threads of a warp are coalesced by the device into as few transactions as possible.

Note

High Priority: Ensure global memory accesses are coalesced whenever possible.

The access requirements for coalescing depend on the compute capability of the device and are documented in the CUDA C++ Programming Guide.

For devices of compute capability 6.0 or higher, the requirements can be summarized quite easily: the concurrent accesses of the threads of a warp will coalesce into a number of transactions equal to the number of 32-byte transactions necessary to service all of the threads of the warp.

For certain devices of compute capability 5.2, L1-caching of accesses to global memory can be optionally enabled. If L1-caching is enabled on these devices, the number of required transactions is equal to the number of required 128-byte aligned segments.

Note

On devices of compute capability 6.0 or higher, L1-caching is the default; however, the data access unit is 32 bytes regardless of whether global loads are cached in L1 or not.

On devices with GDDR memory, accessing memory in a coalesced way is even more important when ECC is turned on. Scattered accesses increase ECC memory transfer overhead, especially when writing data to global memory.

Coalescing concepts are illustrated in the following simple examples. These examples assume compute capability 6.0 or higher and that accesses are for 4-byte words, unless otherwise noted.

9.2.1.1. A Simple Access Pattern

The first and simplest case of coalescing can be achieved by any CUDA-enabled device of compute capability 6.0 or higher: the k-th thread accesses the k-th word in a 32-byte aligned array. Not all threads need to participate.

For example, if the threads of a warp access adjacent 4-byte words (e.g., adjacent float values), four coalesced 32-byte transactions will service that memory access. Such a pattern is shown in Figure 3.

Figure 3 Coalesced access

This access pattern results in four 32-byte transactions, indicated by the red rectangles.

If only a subset of the words in any of the four 32-byte segments is requested (e.g., if several threads had accessed the same word or if some threads did not participate in the access), the full segment is fetched anyway. Furthermore, if accesses by the threads of the warp had been permuted within or across the four segments, still only four 32-byte transactions would have been performed by a device with compute capability 6.0 or higher.

9.2.1.2. A Sequential but Misaligned Access Pattern

If sequential threads in a warp access memory that is sequential but not aligned with a 32-byte segment, five 32-byte segments will be requested, as shown in Figure 4.

Figure 4 Misaligned sequential addresses that fall within five 32-byte segments

Memory allocated through the CUDA Runtime API, such as via cudaMalloc(), is guaranteed to be aligned to at least 256 bytes. Therefore, choosing sensible thread block sizes, such as multiples of the warp size (i.e., 32 on current GPUs), facilitates memory accesses by warps that are properly aligned. (Consider what would happen to the memory addresses accessed by the second, third, and subsequent thread blocks if the thread block size was not a multiple of warp size, for example.)
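For example (illustrative numbers): with 4-byte floats, the second block of a 96-thread-per-block launch starts at byte offset 96 × 4 = 384, a multiple of 32, so its warps begin on a segment boundary; with a 100-thread block it would start at byte offset 400, which is not a multiple of 32, producing the five-segment pattern of Figure 4.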
9.2.1.3. Effects of Misaligned Accesses

It is easy and informative to explore the ramifications of misaligned accesses using a simple copy kernel, such as the one in A copy kernel that illustrates misaligned accesses.

A copy kernel that illustrates misaligned accesses

__global__ void offsetCopy(float *odata, float* idata, int offset)
{
    int xid = blockIdx.x * blockDim.x + threadIdx.x + offset;
    odata[xid] = idata[xid];
}

In A copy kernel that illustrates misaligned accesses, data is copied from the input array idata to the output array, both of which exist in global memory. The kernel is executed within a loop in host code that varies the parameter offset from 0 to 32 (for example, Figure 4 corresponds to one of these misalignments). The effective bandwidth for the copy with various offsets on an NVIDIA Tesla V100 (compute capability 7.0) is shown in Figure 5.

Figure 5 Performance of offsetCopy kernel

For the NVIDIA Tesla V100, global memory accesses with no offset or with offsets that are multiples of 8 words result in four 32-byte transactions. The achieved bandwidth is approximately 790 GB/s. Otherwise, five 32-byte segments are loaded per warp, and we would expect approximately 4/5 of the memory throughput achieved with no offsets.

In this particular example, the offset memory throughput achieved is, however, approximately 9/10 of that, because adjacent warps reuse the cache lines their neighbors fetched. So while the impact is still evident, it is not as large as we might have expected. It would have been more so if adjacent warps had not exhibited such a high degree of reuse of the over-fetched cache lines.

9.2.1.4. Strided Accesses

As seen above, in the case of misaligned sequential accesses, caches help to alleviate the performance impact. It may be different with non-unit-strided accesses, however, and this is a pattern that occurs frequently when dealing with multidimensional data or matrices. For this reason, ensuring that as much as possible of the data in each cache line fetched is actually used is an important part of performance optimization of memory accesses on these devices.

To illustrate the effect of strided access on effective bandwidth, see the kernel strideCopy() in A kernel to illustrate non-unit stride data copy, which copies data with a stride of stride elements between threads from idata to odata.

A kernel to illustrate non-unit stride data copy

__global__ void strideCopy(float *odata, float* idata, int stride)
{
    int xid = (blockIdx.x*blockDim.x + threadIdx.x)*stride;
    odata[xid] = idata[xid];
}

Figure 6 illustrates such a situation; in this case, threads within a warp access words in memory with a stride of 2. This action leads to a load of eight L2 cache segments per warp on the Tesla V100 (compute capability 7.0).

Figure 6 Adjacent threads accessing memory with a stride of 2

A stride of 2 results in 50% load/store efficiency, since half the elements in the transaction are not used and represent wasted bandwidth. As the stride increases, the effective bandwidth decreases until the point where 32 32-byte segments are loaded for the 32 threads in a warp, as indicated in Figure 7.

Figure 7 Performance of strideCopy kernel

As illustrated in Figure 7, non-unit-stride global memory accesses should be avoided whenever possible. One method for doing so utilizes shared memory, which is discussed in the next section.
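To make the sweep behind Figure 7 concrete, a minimal host-side driver for strideCopy() might look like the following sketch. The grid, block, and buffer sizes are assumptions chosen for illustration, and a real benchmark would time each launch with CUDA events rather than relying on cudaDeviceSynchronize().

const int    nThreads  = 256;
const int    nBlocks   = 1024;
const int    maxStride = 32;
const size_t nElems    = size_t(nThreads) * nBlocks * maxStride;   // large enough for the widest stride

float *d_idata, *d_odata;
cudaMalloc(&d_idata, nElems * sizeof(float));
cudaMalloc(&d_odata, nElems * sizeof(float));

for (int stride = 1; stride <= maxStride; ++stride) {
    strideCopy<<<nBlocks, nThreads>>>(d_odata, d_idata, stride);
    cudaDeviceSynchronize();   // a real benchmark would bracket this with cudaEventRecord()
}

cudaFree(d_idata);
cudaFree(d_odata);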
9.2.2. L2 Cache

Starting with CUDA 11.0, devices of compute capability 8.0 and above have the capability to influence persistence of data in the L2 cache. Because L2 cache is on-chip, it potentially provides higher bandwidth and lower latency accesses to global memory.

For more details refer to the L2 Access Management section in the CUDA C++ Programming Guide.

9.2.2.1. L2 Cache Access Window

When a CUDA kernel accesses a data region in the global memory repeatedly, such data accesses can be considered to be persisting. On the other hand, if the data is only accessed once, such data accesses can be considered to be streaming. A portion of the L2 cache can be set aside for persistent accesses to a data region in global memory. If this set-aside portion is not used by persistent accesses, then streaming or normal data accesses can use it.

The L2 cache set-aside size for persisting accesses may be adjusted, within limits:

cudaGetDeviceProperties(&prop, device_id);
cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, prop.persistingL2CacheMaxSize); /* Set aside max possible size of L2 cache for persisting accesses */

Mapping of user data to the L2 set-aside portion can be controlled using an access policy window on a CUDA stream or CUDA graph kernel node. The example below shows how to use the access policy window on a CUDA stream.

cudaStreamAttrValue stream_attribute;                                         // Stream level attributes data structure
stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(ptr); // Global Memory data pointer
stream_attribute.accessPolicyWindow.num_bytes = num_bytes;                    // Number of bytes for persisting accesses.
                                                                              // (Must be less than cudaDeviceProp::accessPolicyMaxWindowSize)
stream_attribute.accessPolicyWindow.hitRatio  = 1.0;                          // Hint for L2 cache hit ratio for persisting accesses in the num_bytes region
stream_attribute.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting; // Type of access property on cache hit
stream_attribute.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;  // Type of access property on cache miss.

// Set the attributes to a CUDA stream of type cudaStream_t
cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &stream_attribute);

The access policy window requires a value for hitRatio and num_bytes. Depending on the value of the num_bytes parameter and the size of L2 cache, one may need to tune the value of hitRatio to avoid thrashing of L2 cache lines.

9.2.2.2. Tuning the Access Window Hit-Ratio

The hitRatio parameter can be used to specify the fraction of accesses that receive the hitProp property. For example, if the hitRatio value is 0.6, 60% of the memory accesses in the global memory region [ptr..ptr+num_bytes) have the persisting property and 40% of the memory accesses have the streaming property. To understand the effect of hitRatio and num_bytes, we use a sliding window micro benchmark.

This microbenchmark uses a 1024 MB region in GPU global memory. First, we set aside 30 MB of the L2 cache for persisting accesses using cudaDeviceSetLimit(), as discussed above. Then, as shown in the figure below, we specify that the accesses to the first freqSize * sizeof(int) bytes of the memory region are persistent. This data will thus use the L2 set-aside portion. In our experiment, we vary the size of this persistent data region from 10 MB to 60 MB to model various scenarios where data fits in or exceeds the available L2 set-aside portion of 30 MB.
Note that the NVIDIA Tesla A100 GPU has 40 MB of total L2 cache capacity. Accesses to the remaining data of the memory region (i.e., streaming data) are considered normal or streaming accesses and will thus use the remaining 10 MB of the non set-aside L2 portion (unless part of the L2 set-aside portion is unused).

Figure 8 Mapping Persistent data accesses to set-aside L2 in sliding window experiment

Consider the following kernel code and access window parameters as the implementation of the sliding window experiment.

__global__ void kernel(int *data_persistent, int *data_streaming, int dataSize, int freqSize) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    /* Each CUDA thread accesses one element in the persistent data section
       and one element in the streaming data section.
       Because the size of the persistent memory region (freqSize * sizeof(int) bytes) is much
       smaller than the size of the streaming memory region (dataSize * sizeof(int) bytes), data
       in the persistent region is accessed more frequently. */

    data_persistent[tid % freqSize] = 2 * data_persistent[tid % freqSize];
    data_streaming[tid % dataSize]  = 2 * data_streaming[tid % dataSize];
}

stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(data_persistent);
stream_attribute.accessPolicyWindow.num_bytes = freqSize * sizeof(int);   // Number of bytes for persisting accesses in range 10-60 MB
stream_attribute.accessPolicyWindow.hitRatio  = 1.0;                      // Hint for cache hit ratio. Fixed value 1.0

The performance of the above kernel is shown in the chart below. When the persistent data region fits well into the 30 MB set-aside portion of the L2 cache, a performance increase of as much as 50% is observed. However, once the size of this persistent data region exceeds the size of the L2 set-aside cache portion, an approximately 10% performance drop is observed due to thrashing of L2 cache lines.

Figure 9 The performance of the sliding-window benchmark with fixed hit-ratio of 1.0

In order to optimize the performance, when the size of the persistent data is more than the size of the set-aside L2 cache portion, we tune the num_bytes and hitRatio parameters in the access window as shown below.

stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(data_persistent);
stream_attribute.accessPolicyWindow.num_bytes = 20*1024*1024;                                  // 20 MB
stream_attribute.accessPolicyWindow.hitRatio  = (20*1024*1024)/((float)freqSize*sizeof(int));  // Such that up to 20 MB of data is resident.

We fix the num_bytes in the access window to 20 MB and tune the hitRatio such that a random 20 MB of the total persistent data is resident in the L2 set-aside cache portion. The remaining portion of this persistent data will be accessed using the streaming property. This helps in reducing cache thrashing. The results are shown in the chart below, where we see good performance regardless of whether the persistent data fits in the L2 set-aside or not.

Figure 10 The performance of the sliding-window benchmark with tuned hit-ratio

9.2.3. Shared Memory

Because it is on-chip, shared memory has much higher bandwidth and lower latency than local and global memory - provided there are no bank conflicts between the threads, as detailed in the following section.

9.2.3.1. Shared Memory and Memory Banks

To achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (banks) that can be accessed simultaneously.
Therefore, any memory load or store of n addresses that spans n distinct memory banks can be serviced simultaneously, yielding an effective bandwidth that is n times as high as the bandwidth of a single bank.

However, if multiple addresses of a memory request map to the same memory bank, the accesses are serialized. The hardware splits a memory request that has bank conflicts into as many separate conflict-free requests as necessary, decreasing the effective bandwidth by a factor equal to the number of separate memory requests. The one exception here is when multiple threads in a warp address the same shared memory location, resulting in a broadcast. In this case, multiple broadcasts from different banks are coalesced into a single multicast from the requested shared memory locations to the threads.

To minimize bank conflicts, it is important to understand how memory addresses map to memory banks and how to optimally schedule memory requests.

On devices of compute capability 5.x or newer, each bank has a bandwidth of 32 bits every clock cycle, and successive 32-bit words are assigned to successive banks. The warp size is 32 threads and the number of banks is also 32, so bank conflicts can occur between any threads in the warp. See Compute Capability 5.x for further details.

9.2.3.2. Shared Memory in Matrix Multiplication (C=AB)

Shared memory enables cooperation between threads in a block. When multiple threads in a block use the same data from global memory, shared memory can be used to access the data from global memory only once. Shared memory can also be used to avoid uncoalesced memory accesses by loading and storing data in a coalesced pattern from global memory and then reordering it in shared memory. Aside from memory bank conflicts, there is no penalty for non-sequential or unaligned accesses by a warp in shared memory.

The use of shared memory is illustrated via the simple example of a matrix multiplication C = AB for the case with A of dimension Mxw, B of dimension wxN, and C of dimension MxN. To keep the kernels simple, M and N are multiples of 32, since the warp size (w) is 32 for current devices.

A natural decomposition of the problem is to use a block and tile size of wxw threads. Therefore, in terms of wxw tiles, A is a column matrix, B is a row matrix, and C is their outer product; see Figure 11. A grid of N/w by M/w blocks is launched, where each thread block calculates the elements of a different tile in C from a single tile of A and a single tile of B.

Figure 11 Block-column matrix multiplied by block-row matrix. Block-column matrix (A) multiplied by block-row matrix (B) with resulting product matrix (C).

To do this, the simpleMultiply kernel (Unoptimized matrix multiplication) calculates the output elements of a tile of matrix C.

Unoptimized matrix multiplication

__global__ void simpleMultiply(float *a, float* b, float *c,
                               int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    for (int i = 0; i < TILE_DIM; i++) {
        sum += a[row*TILE_DIM+i] * b[i*N+col];
    }
    c[row*N+col] = sum;
}

In Unoptimized matrix multiplication, a, b, and c are pointers to global memory for the matrices A, B, and C, respectively; blockDim.x, blockDim.y, and TILE_DIM are all equal to w. Each thread in the wxw-thread block calculates one element in a tile of C. row and col are the row and column of the element in C being calculated by a particular thread.
The for loop over i multiplies a row of A by a column of B, which is then written to C.

The effective bandwidth of this kernel is 119.9 GB/s on an NVIDIA Tesla V100. To analyze performance, it is necessary to consider how warps access global memory in the for loop. Each warp of threads calculates one row of a tile of C, which depends on a single row of A and an entire tile of B, as illustrated in Figure 12.

Figure 12 Computing a row of a tile. Computing a row of a tile in C using one row of A and an entire tile of B.

For each iteration i of the for loop, the threads in a warp read a row of the B tile, which is a sequential and coalesced access for all compute capabilities.

However, for each iteration i, all threads in a warp read the same value from global memory for matrix A, as the index row*TILE_DIM+i is constant within a warp. Even though such an access requires only 1 transaction on devices of compute capability 2.0 or higher, there is wasted bandwidth in the transaction, because only one 4-byte word out of 8 words in a 32-byte cache segment is used. We can reuse this cache line in subsequent iterations of the loop, and we would eventually utilize all 8 words; however, when many warps execute on the same multiprocessor simultaneously, as is generally the case, the cache line may easily be evicted from the cache between iterations i and i+1.

The performance on a device of any compute capability can be improved by reading a tile of A into shared memory as shown in Using shared memory to improve the global memory load efficiency in matrix multiplication.

Using shared memory to improve the global memory load efficiency in matrix multiplication

__global__ void coalescedMultiply(float *a, float* b, float *c,
                                  int N)
{
    __shared__ float aTile[TILE_DIM][TILE_DIM];

    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
    __syncwarp();
    for (int i = 0; i < TILE_DIM; i++) {
        sum += aTile[threadIdx.y][i]* b[i*N+col];
    }
    c[row*N+col] = sum;
}

In Using shared memory to improve the global memory load efficiency in matrix multiplication, each element in a tile of A is read from global memory only once, in a fully coalesced fashion (with no wasted bandwidth), to shared memory. Within each iteration of the for loop, a value in shared memory is broadcast to all threads in a warp. Instead of a __syncthreads() synchronization barrier call, a __syncwarp() is sufficient after reading the tile of A into shared memory because only threads within the warp that write the data into shared memory read this data. This kernel has an effective bandwidth of 144.4 GB/s on an NVIDIA Tesla V100. This illustrates the use of shared memory as a user-managed cache when the hardware L1 cache eviction policy does not match up well with the needs of the application or when L1 cache is not used for reads from global memory.
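As a point of reference (the launch is not shown in the guide), these tiled kernels are launched with one wxw thread block per wxw tile of C. A minimal sketch, assuming TILE_DIM is 32, M and N are multiples of 32 as stated above, and d_a, d_b, d_c are hypothetical existing device allocations; the same configuration applies to the other kernels in this section:

dim3 block(TILE_DIM, TILE_DIM);          // one thread per element of a TILE_DIM x TILE_DIM tile of C
dim3 grid(N / TILE_DIM, M / TILE_DIM);   // N/w blocks across columns of C, M/w blocks across rows
coalescedMultiply<<<grid, block>>>(d_a, d_b, d_c, N);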
A further improvement can be made to how Using shared memory to improve the global memory load efficiency in matrix multiplication deals with matrix B. In calculating each of the rows of a tile of matrix C, the entire tile of B is read. The repeated reading of the B tile can be eliminated by reading it into shared memory once (Improvement by reading additional data into shared memory).

Improvement by reading additional data into shared memory

__global__ void sharedABMultiply(float *a, float* b, float *c,
                                 int N)
{
    __shared__ float aTile[TILE_DIM][TILE_DIM],
                     bTile[TILE_DIM][TILE_DIM];
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
    bTile[threadIdx.y][threadIdx.x] = b[threadIdx.y*N+col];
    __syncthreads();
    for (int i = 0; i < TILE_DIM; i++) {
        sum += aTile[threadIdx.y][i]* bTile[i][threadIdx.x];
    }
    c[row*N+col] = sum;
}

Note that in Improvement by reading additional data into shared memory, a __syncthreads() call is required after reading the B tile because a warp reads data from shared memory that were written to shared memory by different warps. The effective bandwidth of this routine is 195.5 GB/s on an NVIDIA Tesla V100. Note that the performance improvement is not due to improved coalescing in either case, but to avoiding redundant transfers from global memory.

The results of the various optimizations are summarized in Table 2.

Optimization                                                    | NVIDIA Tesla V100
No optimization                                                 | 119.9 GB/s
Coalesced using shared memory to store a tile of A              | 144.4 GB/s
Using shared memory to eliminate redundant reads of a tile of B | 195.5 GB/s

Note

Medium Priority: Use shared memory to avoid redundant transfers from global memory.

9.2.3.3. Shared Memory in Matrix Multiplication (C=AAT)

A variant of the previous matrix multiplication can be used to illustrate how strided accesses to global memory, as well as shared memory bank conflicts, are handled. This variant simply uses the transpose of A in place of B, so C = AAT.

A simple implementation for C = AAT is shown in Unoptimized handling of strided accesses to global memory.

Unoptimized handling of strided accesses to global memory

__global__ void simpleMultiply(float *a, float *c, int M)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    for (int i = 0; i < TILE_DIM; i++) {
        sum += a[row*TILE_DIM+i] * a[col*TILE_DIM+i];
    }
    c[row*M+col] = sum;
}

In the example above, the row-th, col-th element of C is obtained by taking the dot product of the row-th and col-th rows of A. The effective bandwidth for this kernel is 12.8 GB/s on an NVIDIA Tesla V100. These results are substantially lower than the corresponding measurements for the C = AB kernel. The difference is in how threads in a half warp access elements of A in the second term, a[col*TILE_DIM+i], for each iteration i.
For a warp of threads, col represents sequential columns of the transpose of A, and therefore col*TILE_DIM represents a strided access of global memory with a stride of w, resulting in plenty of wasted bandwidth.

The way to avoid strided access is to use shared memory as before, except in this case a warp reads a row of A into a column of a shared memory tile, as shown in An optimized handling of strided accesses using coalesced reads from global memory.

An optimized handling of strided accesses using coalesced reads from global memory

__global__ void coalescedMultiply(float *a, float *c, int M)
{
    __shared__ float aTile[TILE_DIM][TILE_DIM],
                     transposedTile[TILE_DIM][TILE_DIM];
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
    transposedTile[threadIdx.x][threadIdx.y] =
        a[(blockIdx.x*blockDim.x + threadIdx.y)*TILE_DIM +
          threadIdx.x];
    __syncthreads();
    for (int i = 0; i < TILE_DIM; i++) {
        sum += aTile[threadIdx.y][i]* transposedTile[i][threadIdx.x];
    }
    c[row*M+col] = sum;
}

An optimized handling of strided accesses using coalesced reads from global memory uses the shared transposedTile to avoid uncoalesced accesses in the second term in the dot product and the shared aTile technique from the previous example to avoid uncoalesced accesses in the first term. The effective bandwidth of this kernel is 140.2 GB/s on an NVIDIA Tesla V100. These results are lower than those obtained by the final kernel for C = AB. The cause of the difference is shared memory bank conflicts.

The reads of elements in transposedTile within the for loop are free of conflicts, because threads of each half warp read across rows of the tile, resulting in unit stride across the banks. However, bank conflicts occur when copying the tile from global memory into shared memory. To enable the loads from global memory to be coalesced, data are read from global memory sequentially. However, this requires writing to shared memory in columns, and because of the use of wxw tiles in shared memory, this results in a stride between threads of w banks - every thread of the warp hits the same bank (recall that w is selected as 32). These many-way bank conflicts are very expensive. The simple remedy is to pad the shared memory array so that it has an extra column, as in the following line of code.

__shared__ float transposedTile[TILE_DIM][TILE_DIM+1];

This padding eliminates the conflicts entirely, because now the stride between threads is w+1 banks (i.e., 33 for current devices), which, due to the modulo arithmetic used to compute bank indices, is equivalent to a unit stride: thread t of a warp then touches word 33*t + c for some constant c, and (33*t + c) mod 32 = (t + c) mod 32, so the 32 threads fall into 32 distinct banks. After this change, the effective bandwidth is 199.4 GB/s on an NVIDIA Tesla V100, which is comparable to the results from the last C = AB kernel.

The results of these optimizations are summarized in Table 3.

Optimization                                 | NVIDIA Tesla V100
No optimization                              | 12.8 GB/s
Using shared memory to coalesce global reads | 140.2 GB/s
Removing bank conflicts                      | 199.4 GB/s

These results should be compared with those in Table 2.
As can be seen from these tables, judicious use of shared memory can dramatically improve performance.

The examples in this section have illustrated three reasons to use shared memory:

- To enable coalesced accesses to global memory, especially to avoid large strides (for general matrices, strides are much larger than 32)

- To eliminate (or reduce) redundant loads from global memory

- To avoid wasted bandwidth

9.2.3.4. Asynchronous Copy from Global Memory to Shared Memory

CUDA 11.0 introduces an async-copy feature that can be used within device code to explicitly manage the asynchronous copying of data from global memory to shared memory. This feature enables CUDA kernels to overlap copying data from global to shared memory with computation. It also avoids an intermediary register file access traditionally present between the global memory read and the shared memory write.

For more details refer to the memcpy_async section in the CUDA C++ Programming Guide.

To understand the performance difference between synchronous copy and asynchronous copy of data from global memory to shared memory, consider the following microbenchmark CUDA kernels, which demonstrate the synchronous and asynchronous approaches. Asynchronous copies are hardware accelerated on the NVIDIA A100 GPU.

template <typename T>
__global__ void pipeline_kernel_sync(T *global, uint64_t *clock, size_t copy_count) {
  extern __shared__ char s[];
  T *shared = reinterpret_cast<T *>(s);

  uint64_t clock_start = clock64();

  for (size_t i = 0; i < copy_count; ++i) {
    shared[blockDim.x * i + threadIdx.x] = global[blockDim.x * i + threadIdx.x];
  }

  uint64_t clock_end = clock64();

  atomicAdd(reinterpret_cast<unsigned long long *>(clock),
            clock_end - clock_start);
}

template <typename T>
__global__ void pipeline_kernel_async(T *global, uint64_t *clock, size_t copy_count) {
  extern __shared__ char s[];
  T *shared = reinterpret_cast<T *>(s);

  uint64_t clock_start = clock64();

  //pipeline pipe;
  for (size_t i = 0; i < copy_count; ++i) {
    __pipeline_memcpy_async(&shared[blockDim.x * i + threadIdx.x],
                            &global[blockDim.x * i + threadIdx.x], sizeof(T));
  }
  __pipeline_commit();
  __pipeline_wait_prior(0);

  uint64_t clock_end = clock64();

  atomicAdd(reinterpret_cast<unsigned long long *>(clock),
            clock_end - clock_start);
}

The synchronous version of the kernel loads an element from global memory to an intermediate register and then stores the intermediate register value to shared memory. In the asynchronous version of the kernel, instructions to load from global memory and store directly into shared memory are issued as soon as the __pipeline_memcpy_async() function is called. The __pipeline_wait_prior(0) call will wait until all the instructions in the pipe object have been executed. Asynchronous copies do not use any intermediate register. Not using intermediate registers can help reduce register pressure and can increase kernel occupancy. Data copied from global memory to shared memory using asynchronous copy instructions can be cached in the L1 cache, or the L1 cache can be optionally bypassed. If individual CUDA threads are copying elements of 16 bytes, the L1 cache can be bypassed. This difference is illustrated in Figure 13.

Figure 13 Comparing Synchronous vs Asynchronous Copy from Global Memory to Shared Memory

We evaluate the performance of both kernels using elements of size 4B, 8B and 16B per thread, i.e., using int, int2 and int4 for the template parameter. We adjust the copy_count in the kernels such that each thread block copies from 512 bytes up to 48 MB.
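Because both kernels use dynamically sized shared memory (the extern __shared__ array), a launch must pass the shared-memory size as the third execution-configuration parameter; the __pipeline_* intrinsics used above are declared in cuda_pipeline.h. A minimal launch sketch, with the block count, block size, and copy_count chosen purely for illustration and d_global/d_clock assumed to be existing device allocations of sufficient size:

const int    threadsPerBlock = 128;
const size_t copy_count      = 32;                                          // elements of type int copied per thread
const size_t smemBytes       = threadsPerBlock * copy_count * sizeof(int);  // dynamic shared memory per block
// d_global must hold at least threadsPerBlock * copy_count ints; d_clock is a single uint64_t.
pipeline_kernel_sync<int><<<80, threadsPerBlock, smemBytes>>>(d_global, d_clock, copy_count);
pipeline_kernel_async<int><<<80, threadsPerBlock, smemBytes>>>(d_global, d_clock, copy_count);
cudaDeviceSynchronize();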
The performance of the kernels is shown in Figure 14.

Figure 14 Comparing Performance of Synchronous vs Asynchronous Copy from Global Memory to Shared Memory

From the performance chart, the following observations can be made for this experiment.

- Best performance with synchronous copy is achieved when the copy_count parameter is a multiple of 4 for all three element sizes. The compiler can optimize groups of 4 load and store instructions. This is evident from the sawtooth curves.

- Asynchronous copy achieves better performance in nearly all cases.

- The async-copy version does not require the copy_count parameter to be a multiple of 4 to maximize performance, since it does not rely on these compiler optimizations.

- Overall, best performance is achieved when using asynchronous copies with an element of size 8 or 16 bytes.

9.2.4. Local Memory

Local memory is so named because its scope is local to the thread, not because of its physical location. In fact, local memory is off-chip. Hence, access to local memory is as expensive as access to global memory. In other words, the term local in the name does not imply faster access.

Local memory is used only to hold automatic variables. This is done by the nvcc compiler when it determines that there is insufficient register space to hold the variable. Automatic variables that are likely to be placed in local memory are large structures or arrays that would consume too much register space and arrays that the compiler determines may be indexed dynamically.

Inspection of the PTX assembly code (obtained by compiling with the -ptx or -keep command-line options to nvcc) reveals whether a variable has been placed in local memory during the first compilation phases. If it has, it will be declared using the .local mnemonic and accessed using the ld.local and st.local mnemonics. If it has not, subsequent compilation phases might still decide otherwise, if they find the variable consumes too much register space for the targeted architecture. There is no way to check this for a specific variable, but the compiler reports total local memory usage per kernel (lmem) when run with the --ptxas-options=-v option.

9.2.5. Texture Memory

The read-only texture memory space is cached. Therefore, a texture fetch costs one device memory read only on a cache miss; otherwise, it just costs one read from the texture cache. The texture cache is optimized for 2D spatial locality, so threads of the same warp that read texture addresses that are close together will achieve best performance. Texture memory is also designed for streaming fetches with a constant latency; that is, a cache hit reduces DRAM bandwidth demand, but not fetch latency.

In certain addressing situations, reading device memory through texture fetching can be an advantageous alternative to reading device memory from global or constant memory.
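As a hedged sketch (not from the guide) of what such a texture-path read can look like with the texture-object API: the helper below wraps an existing linear device buffer in a texture object and copies it through tex1Dfetch(). The names copyViaTexture/launchCopyViaTexture are illustrative, and d_in, d_out, and n are assumed to be an existing input allocation, output allocation, and element count.

__global__ void copyViaTexture(float *out, cudaTextureObject_t tex, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch<float>(tex, i);   // read through the texture cache
}

void launchCopyViaTexture(float *d_out, float *d_in, int n) {
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeLinear;          // bind a plain linear buffer
    resDesc.res.linear.devPtr = d_in;
    resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
    resDesc.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;        // return the raw float values

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);

    copyViaTexture<<<(n + 255) / 256, 256>>>(d_out, tex, n);
    cudaDeviceSynchronize();
    cudaDestroyTextureObject(tex);
}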
9.2.5.1. Additional Texture Capabilities

If textures are fetched using tex1D(), tex2D(), or tex3D() rather than tex1Dfetch(), the hardware provides other capabilities that might be useful for some applications such as image processing, as shown in Table 4.

Feature                        | Use                                              | Caveat
Filtering                      | Fast, low-precision interpolation between texels | Valid only if the texture reference returns floating-point data
Normalized texture coordinates | Resolution-independent coding                    | None
Addressing modes               | Automatic handling of boundary cases1            | Can be used only with normalized texture coordinates

1 The automatic handling of boundary cases in the bottom row of Table 4 refers to how a texture coordinate is resolved when it falls outside the valid addressing range. There are two options: clamp and wrap. If x is the coordinate and N is the number of texels for a one-dimensional texture, then with clamp, x is replaced by 0 if x < 0 and by 1-1/N if 1