How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog
December 2022
In this post, I’ll iteratively optimize an implementation of matrix multiplication written in CUDA.
My goal is not to build a cuBLAS replacement, but to deeply understand the most important performance characteristics of the GPUs that are used for modern deep learning.
This includes coalescing global memory accesses, shared memory caching and occupancy optimizations, among others. You can download the code for all kernels from Github. Also check out wangzyon's repo, from which I copied the benchmarking setup. This post is less polished than my normal uploads and includes many more sidenotes. I used it as a notepad for ideas and scribbles while writing the kernels. That's why I called it a worklog :)
Matrix multiplication on GPUs may currently be the most important algorithm that exists, considering it makes up almost all the FLOPs during the training and inference of large deep-learning models.
So how much work is it to write a performant CUDA SGEMM from scratch? (SGEMM performs C = αAB + βC at single, i.e. 32-bit, precision.)
I’ll start with a naive kernel and step-by-step apply optimizations until we get within 95% (on a good day) of the performance of cuBLAS (NVIDIA’s official matrix library). That's cuBLAS at FP32. In my setting, doing the matmul at TF32 or BF16 precision allows cuBLAS to use the tensor cores, which increases FLOPS by 2.5x or 3.5x. I may look into tensor cores / warp matrix functions in a future post.
–
Come work on kernels at Anthropic!
We’re always hiring for capable performance & kernel engineers to optimize our models on TPUs, GPUs & Trainium. Apply here!
–
Kernel 1: Naive Implementation
In the CUDA programming model, computation is ordered in a three-level hierarchy.
Each invocation of a CUDA kernel creates a new grid, which consists of multiple blocks.
Each block consists of up to 1024 individual threads. (These constants can be looked up in the CUDA Programming Guide.)
Threads that are in the same block have access to the same shared memory region (SMEM).
The number of threads in a block can be configured using a variable normally called blockDim, which is a vector consisting of three ints.
The entries of that vector specify the sizes of blockDim.x, blockDim.y and blockDim.z, as visualized below:
Similarly, the number of blocks in a grid is configurable using the gridDim variable.
When we launch a new kernel from the host (in accelerator lingo, host refers to the CPU and device is the accelerator, here the GPU), it creates a single grid, containing the blocks and threads as specified. From here on I’ll only be talking about 2D grids and blocks, partly because the 3D structure is seldom used and because drawing in 3D is too hard.
It’s important to keep in mind that the thread hierarchy we just talked about mostly concerns program correctness.
For program performance, as we’ll see later, it’s not a good idea to treat all threads in the same block as equals.
For our first kernel, we’ll use the grid, block and thread hierarchy to assign each thread a unique entry in the result matrix C.
Then that thread will compute the dot product of the corresponding row of A and column of B, and write the result to C.
Because each location of C is written to by only one thread, we need no synchronization.
We’ll launch the kernel like so:
// create as many blocks as necessary to map all of C
dim3 gridDim(CEIL_DIV(M, 32), CEIL_DIV(N, 32), 1);
// 32 * 32 = 1024 threads per block
dim3 blockDim(32, 32, 1);
// launch the asynchronous execution of the kernel on the device
// The function call returns immediately on the host
sgemm_naive<<<gridDim, blockDim>>>(M, N, K, alpha, A, B, beta, C);
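The CEIL_DIV helper used in the launch isn't shown above; it's just integer division that rounds up. A minimal definition (this is the usual idiom, and as far as I can tell what the repo uses):
#define CEIL_DIV(M, N) (((M) + (N)-1) / (N))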
CUDA code is written from a single-thread perspective.
In the code of the kernel, we access the blockIdx and threadIdx built-in variables.
These will return different values based on the thread that’s accessing them. In our example, threadIdx.x and threadIdx.y will vary from 0 to 31 based on the position of the thread within its block. Same for blockIdx.x and blockIdx.y, which will vary from 0 to CEIL_DIV(M, 32) or CEIL_DIV(N, 32) based on the position of the thread’s block in the grid. We’ll do a lot of indexing into strided in-memory representations of matrices. Edward Yang’s post on PyTorch Internals contains a good explanation of strided tensors.
__global__ void sgemm_naive(int M, int N, int K, float alpha, const float *A,
const float *B, float beta, float *C) {
// compute position in C that this thread is responsible for
const uint x = blockIdx.x * blockDim.x + threadIdx.x;
const uint y = blockIdx.y * blockDim.y + threadIdx.y;
// `if` condition is necessary for when M or N aren't multiples of 32.
if (x < M && y < N) {
float tmp = 0.0;
for (int i = 0; i < K; ++i) {
tmp += A[x * K + i] * B[i * N + y];
}
// C = α*(A@B)+β*C
C[x * N + y] = alpha * tmp + beta * C[x * N + y];
}
}
To visualize this simple kernel: if the size of the matrix is not divisible by the size of the block, we’ll have to launch extra blocks to process the remainder. For example, in the picture below we’ll create 9 blocks of equal size, but only 4 of them fully utilize their 1024 threads. This artifact is called tile quantization, and appears whenever we try to map a fixed-sized volume across a variable-sized input.
This kernel takes about 0.5s to process three 4092² fp32 matrices on my A6000 GPU.
Let’s do some non-implementation-specific calculations:
Lower Bounding the Fastest Possible Runtime
For a matrix multiplication of two 4092² matrices, followed by an addition of a 4092² matrix (to make the GEMM):
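Roughly, for this size:
Total FLOPs: 2*4092³ + 4092² ≈ 137 GFLOP (two FLOPs per entry of each K-long dot product, plus the epilogue).
Total data to read (at minimum): 3 * 4092² * 4B ≈ 201MB.
Total data to store: 4092² * 4B ≈ 67MB.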
So 268MB is the absolute minimum of memory that any implementation would have to transfer from/to global GPU memory, assuming it has a big enough cache. (Global memory is the GPU’s main memory region. If Nvidia sells you a GPU advertised with 80GB of memory and 1TB/s of bandwidth, they’re talking about the capacity and bandwidth of global memory. Later we’ll talk about other memory regions on the GPU, like shared memory, which is physically distinct and has very different performance characteristics.) For comparison, the cuBLAS kernel loads a total of 500MB of GMEM during the whole calculation; we’ll see later how increasing arithmetic intensity allows us to achieve an access volume that low.
Let’s calculate some upper bounds on kernel performance.
The GPU is advertised with 30TFLOPs/s of fp32 compute throughput and 768GB/s of global memory bandwidth.
If we achieved those numbers, we’d need 4.5ms for the calculation and 0.34ms for the memory transfers. (Reminder that peak FLOPs is a reductionist metric, since it depends on the instruction mix. There’s no way you’d reach those 30TFLOPs/s if your FLOP of choice is DIV. However, since matmul uses mainly FMA instructions, which tend to be the fastest FLOPs, we have a good chance of actually getting close to that peak. Similar story for the bandwidth: peak bandwidth can only be reached if the access pattern suits the hardware.)
So in our napkin math, the calculation takes ~10x more time than the memory accesses.
This means our final optimized kernel will be compute-bound, as long as we end up having to transfer less than 10x the absolute minimum memory volume of 268MB. (The A6000 is advertised with 309TFLOPs/s of tensor core performance. If we could use tensor cores for our fp32 matmul, the calculation would only take 0.44ms, and an optimized kernel doing a 4092² matmul would almost surely still be memory bound. This puts into perspective just how fast the tensor cores are.)
Now that we’ve calculated some lower bounds for our fp32 GEMM calculation, let’s get back to the kernel at hand, to figure out why it’s so much slower than it could be.
Memory Access Pattern of the Naive Kernel
In our kernel, two threads in the same block with ThreadIds (0, 0) and (0, 1) will load the same column of B but different rows of A.
If we assume the worst case of zero caching, then each thread has to load 2*4092+1 floats from global memory.
As we have 4092² threads in total, this would result in 4092² * 8185 floats * 4B ≈ 548GB of memory traffic.
Below is a visualization of the memory access pattern of our naive kernel, taking two threads A (red) and B (green) as an example:
So to recap, when I run this kernel on an A6000 GPU it achieves ~300GFLOPs when multiplying two 4092x4092 float32 matrices.
Pretty bad, considering that the A6000 is advertised as being able to achieve almost 30 TFLOPs. (Just for comparison, 300 GFLOPs is also roughly the performance achieved by the optimized BLAS library on the 2015 Haswell CPU that I used in my earlier post on CPU matmul.)
So how can we start to make this faster?
One way is to optimize the memory access pattern of our kernel such that global memory accesses can be coalesced (=combined) into fewer accesses.
Kernel 2: Global Memory Coalescing
Before we get into global memory coalescing, we need to learn about the concept of a warp.
For execution, the threads of a block are grouped into so-called warps, consisting of 32 threads.
A warp is then assigned to a warp scheduler, which is the physical core that executes the instructions. (Before the Volta architecture, it used to be the case that all threads of a warp were fed from the same instruction stream. On a branch, the threads that didn’t take the branch were deactivated using the so-called active mask. However, since Volta, it’s no longer a good idea to rely on this ‘warp-synchronous’ behaviour, as instructions from different branches may be interleaved even for threads within the same warp.)
There are four warp schedulers per multiprocessor.
The grouping into warps happens based on a consecutive threadId.
If we set the blockDim to be multi-dimensional, then the threadId is calculated like so:
threadId = threadIdx.x+blockDim.x*(threadIdx.y+blockDim.y*threadIdx.z)
Then, threads with neighbouring threadId become part of the same warp.
Below I tried to illustrate this, using a smaller “warpsize” of 8 threads (real warps always contain 32 threads). I like to think of the three dimensions x, y, z of threadId as being “column-major”, due to the first dimension x being the one that’s continuous in “warpspace”; I don’t know if others use that term, but it makes the concept clearer to me.
The concept of a warp is relevant for this second kernel, as sequential memory accesses by threads that are part of the same warp can be grouped and executed as one.
This is referred to as global memory coalescing.
It’s the most important thing to keep in mind when optimizing a kernel’s GMEM accesses toward achieving the peak bandwidth.
Below is an example, where consecutive memory accesses by threads in the same warp are grouped, allowing each warp to execute 8 memory accesses using only 2 32B loads:
In reality, the GPU supports 32B, 64B and 128B memory accesses.
So, if each thread is loading a 32bit float from global memory, the warp scheduler (probably the MIO) can coalesce this 32*4B=128B load into a single transaction.
This is only possible if the floats loaded are consecutive in memory, and if access is aligned. (In that way, optimizing for global memory coalescing on GPU has a lot of similarities to optimizing for cache line utilization on CPU.) Interestingly, to allow coalescing the threads within a warp have to access consecutive addresses, but the accesses don’t have to be issued in consecutive order within the warp, as illustrated below:
If they aren’t, or if access cannot be coalesced for some other reason, then the GPU will execute as many 32B loads as necessary to fetch all floats, leading to a lot of wasted bandwidth.
Profiling our naive kernel, we can observe the detrimental effect of non-coalesced access as we achieve only 15GB/s of GMEM throughput.
Looking back at the previous kernel, we assigned threads their entry of C like so:
const uint x = blockIdx.x * blockDim.x + threadIdx.x;
const uint y = blockIdx.y * blockDim.y + threadIdx.y;
Hence, threads of the same warp (those with consecutive threadIdx.x) were loading the rows of A non-consecutively from memory.
The naive kernel’s pattern of accessing the memory of A looked more like so:
To enable coalescing, we can change how we assign positions of the result matrix C to threads.
This change in the global memory access pattern is illustrated below:
To implement this, we only need to change the first two lines:
const int x = blockIdx.x * BLOCKSIZE + (threadIdx.x / BLOCKSIZE);
const int y = blockIdx.y * BLOCKSIZE + (threadIdx.x % BLOCKSIZE);
if (x < M && y < N) {
float tmp = 0.0;
for (int i = 0; i < K; ++i) {
tmp += A[x * K + i] * B[i * N + y];
}
C[x * N + y] = alpha * tmp + beta * C[x * N + y];
}
And we call it like so. (Sidenote: this wasn’t immediately obvious to me, but enabling GMEM coalescing changes nothing in the assembly; see the SASS output on Godbolt. Access coalescing is done at kernel runtime by the hardware. This makes sense, since coalescing requires aligned access, which cannot be guaranteed at compile time, as we pass the matrix pointers as function arguments. Also: the assembly features partial unrolling of our inner loop even though the loop count K is not known at compile time. Exciting!)
// gridDim stays the same
dim3 gridDim(CEIL_DIV(M, 32), CEIL_DIV(N, 32));
// make blockDim 1-dimensional, but don't change number of threads
dim3 blockDim(32 * 32);
sgemm_coalescing<<<gridDim, blockDim>>>(M, N, K, alpha, A, B, beta, C);
Global memory coalescing increases memory throughput from 15GB/s to 110GB/s.
Performance reaches 2000 GFLOPS, a big improvement compared to the 300 GFLOPS of the first, naive kernel.
For the next kernel, we’ll use the GPU’s fast on-chip memory, called shared memory, to cache data that will be re-used.
Kernel 3: Shared Memory Cache-Blocking
Next to the large global memory, a GPU has a much smaller region of memory that is physically located on the chip, called shared memory (SMEM).
Physically, there’s one shared memory per SM. (See the linked source for a helpful illustration of the memory hierarchy on an A100 GPU.)
Logically, this shared memory is partitioned among the blocks.
This means that a thread can communicate with the other threads in its block via the shared memory chunk.
On my A6000 GPU, each block has access to a maximum of 48KB of shared memory. (The amount of SMEM is configurable, by trading off a larger shared memory for a smaller L1 cache. For specifics, see the compute capability documentation. Also, it’s possible to use more than 48KB of SMEM per block by utilizing dynamic shared memory.)
As the shared memory is located on-chip, it has a much lower latency and higher bandwidth than global memory.
I couldn’t find good benchmark results for the Ampere architecture, but for Volta (released in 2017) the benchmarks performed in this paper report 750GiB/s of global memory bandwidth and 12,080GiB/s of shared memory bandwidth. (It doesn’t look like these numbers have changed much since Volta; Nvidia reports ~750GB/s of max GMEM bandwidth for my A6000, which is Ampere.)
So for this next kernel, we’ll load a chunk of A and a chunk of B from global memory into shared memory.
Then we’ll perform as much work as possible on the two chunks, with each thread still being assigned one entry of C.
We’ll move the chunks along the columns of A and the rows of B, accumulating partial sums into C, until the result is computed.
This is illustrated below:
The important parts of the code are below, with variable names corresponding to the plot above. (In general, I didn’t write the code to work for arbitrary sizes of M, N and K, as the condition checking introduces a lot of clutter and isn’t very interesting. To make sure the kernel works correctly, I test it with random data and a few different matrix sizes by comparing against cuBLAS.)
// advance pointers to the starting positions
A += cRow * BLOCKSIZE * K; // row=cRow, col=0
B += cCol * BLOCKSIZE; // row=0, col=cCol
C += cRow * BLOCKSIZE * N + cCol * BLOCKSIZE; // row=cRow, col=cCol
float tmp = 0.0;
// the outer loop advances A along the columns and B along
// the rows until we have fully calculated the result in C.
for (int bkIdx = 0; bkIdx < K; bkIdx += BLOCKSIZE) {
// Have each thread load one of the elements in A & B from
// global memory into shared memory.
// Make the threadCol (=threadIdx.x) the consecutive index
// to allow global memory access coalescing
As[threadRow * BLOCKSIZE + threadCol] = A[threadRow * K + threadCol];
Bs[threadRow * BLOCKSIZE + threadCol] = B[threadRow * N + threadCol];
// block threads in this block until cache is fully populated
__syncthreads();
// advance pointers onto next chunk
A += BLOCKSIZE;
B += BLOCKSIZE * N;
// execute the dotproduct on the currently cached block
for (int dotIdx = 0; dotIdx < BLOCKSIZE; ++dotIdx) {
tmp += As[threadRow * BLOCKSIZE + dotIdx] *
Bs[dotIdx * BLOCKSIZE + threadCol];
}
// need to sync again at the end, to avoid faster threads
// fetching the next block into the cache before slower threads are done
__syncthreads();
}
C[threadRow * N + threadCol] =
alpha * tmp + beta * C[threadRow * N + threadCol];
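The snippet assumes the shared-memory buffers and thread indices were set up earlier in the kernel. Roughly, that preamble looks like this (a sketch following the repo; details may differ slightly):
// the output block of C that this threadblock is responsible for
const uint cRow = blockIdx.x;
const uint cCol = blockIdx.y;
// allocate the SMEM caches for the current blocktile of A and B
__shared__ float As[BLOCKSIZE * BLOCKSIZE];
__shared__ float Bs[BLOCKSIZE * BLOCKSIZE];
// position of the thread inside its 32x32 output block; threadCol is the
// fast-moving index so that GMEM loads stay coalesced
const uint threadCol = threadIdx.x % BLOCKSIZE;
const uint threadRow = threadIdx.x / BLOCKSIZE;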
This kernel achieves ~2200 GFLOPS, a 50% improvement over the previous version. (There’s only a 50% improvement partly because our previous kernel already had pretty good L1 cache hit rates.)
We’re still far away from hitting the ~30 TFLOPs that the GPU can provide.
This is obvious from the roofline plot below: notice how we achieve a higher memory bandwidth than cuBLAS, but because we’re doing much less work per byte loaded from memory (= lower arithmetic intensity), overall performance is worse.
At a BLOCKSIZE of 32, this uses 2*32*32*4B = 8KB of shared memory space. (This info can also be obtained by compiling with --ptxas-options=-v, which outputs: Used 37 registers, 8192 bytes smem, 400 bytes cmem[0].)
My A6000 GPU has a maximum of 48KB of shared memory space available for each block, so we’re far away from hitting that limit.
This is not necessarily a problem, as there are downsides to increasing per-block shared-memory usage.
Each multiprocessor (SM) has a maximum of 100KB of SMEM available.
This means that if we modified our kernel to use the full 48KB of SMEM available, each SM could only keep two blocks loaded at the same time.
In CUDA parlance, increasing per-block SMEM utilization can decrease occupancy.
Occupancy is defined as the ratio between the number of active warps per SM and the maximum possible number of active warps per SM.
High occupancy is useful because it allows us to hide the high latency of our operations, by having a bigger pool of issue-able instructions available. (On GPUs, math operations like FMA have a latency of 4 cycles, equal to 2.6ns at a 1.5GHz clock. Compare this to a recent x86 CPU, where FMA has a 6-cycle latency, or 1.8ns at a 3.5GHz clock.)
There are three main limits to keeping more active blocks loaded on an SM: register count, warp count and SMEM capacity.
Let’s do an example calculation for our current kernel.
Occupancy Calculation for Kernel 3
The relevant hardware stats for my GPU can be obtained from the cudaGetDeviceProperties API (multiprocessors are the SMs we talked about earlier). The amount of shared memory is configurable using a feature called SharedMemoryCarveout: the so-called unified data cache is partitioned into L1 cache and shared memory, so we can trade off less shared memory for more L1 cache.
Our kernel's resource demands are the numbers we saw above: 1024 threads per block, 37 registers per thread, and 8192B of SMEM per block.
Work is scheduled onto the SMs on a block granularity.
Each SM will load more blocks, as long as it has enough resources to accommodate them.
Calculation: I found lots of official and unofficial occupancy calculators, but no official formulae for how to calculate the occupancy. The results are correct (I checked using NVIDIA’s official tools), but there may be small errors, e.g. in the application of rounding.
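A sketch of the arithmetic, combining the demands above with the limits of this GPU's compute capability 8.6 (65536 registers per SM, a maximum of 1536 threads = 48 warps per SM); the rounding granularities are the part I'm least certain about:
Warps per block: 1024 threads / 32 = 32 warps.
Registers: 37 registers per thread * 32 threads per warp = 1184, rounded up to the 256-register warp allocation granularity = 1280 registers per warp, so 32 * 1280 = 40960 registers per block. With 65536 registers per SM, only one block fits.
Threads: 1536 max threads per SM / 1024 threads per block = 1 block.
SMEM: ~8KB per block out of 100KB per SM, so shared memory is nowhere near limiting.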
So this kernel is limited by the number of threads per block, and the number of registers per thread.
We cannot load more than one block per SM, giving us a final occupancy of 32 active warps / 48 max active warps = 66%.
A 66% occupancy is not too bad, so this doesn’t explain why our kernel runs so slow. (We know that it’s possible to optimize our kernel towards high arithmetic intensity (AI) by observing that cuBLAS achieves ~245 FLOPs/Byte. Both at very high and very low AI, high occupancy is not needed to achieve peak throughput. For more details on this, see V. Volkov’s PhD thesis and its coverage of “cusp behaviour”.)
Looking at the profiler gives us some hints. First, if we look at the mix of executed instructions, most of them are memory loads. (LDS are shared memory loads. FMA is our fused multiply-add. IADD3 is a “3-input integer addition”, which we need for moving the pointers along the K dimension.)
Our inner loop looks like this in PTX (Godbolt link):
ld.shared.f32 %f91, [%r8+3456];
ld.shared.f32 %f92, [%r7+108];
fma.rn.f32 %f93, %f92, %f91, %f90;
That’s not good, given that a memory load is bound to have a higher latency than a simple FMA, and given that we know our kernel should be compute bound.
We see this effect when looking at the profiler’s sampling of warp states.
This quantifies how many cycles were spent in each state per executed instruction. (Stall Not Selected means that the warp was eligible to be scheduled, but the scheduler selected another eligible warp instead. This adds evidence to our earlier hypothesis that occupancy is currently not a problem.)
The meaning of the states is documented in the Kernel Profiling Guide.
For Stall MIO Throttle it reads:
Warp was stalled waiting for the MIO (memory input/output) instruction queue to be not full. This stall reason is high in cases of extreme utilization of the MIO pipelines, which include special math instructions, dynamic branches, as well as shared memory instructions
We’re not using special math instructions, nor dynamic branches, so it’s clear that we’re stalling waiting for our SMEM accesses to return.
So how do we make our kernel issue fewer SMEM instructions?
One way is to have each thread compute more than one output element, which allows us to perform more of the work in registers and rely less on SMEM.
Kernel 4: 1D Blocktiling for Calculating Multiple Results per Thread
So this next kernel works like our last kernel, but adds a new inner loop, for calculating multiple C entries per thread.
We now use a SMEM cache size of BM*BK + BN*BK = 64*8 + 64*8 = 1024 floats, for a total of 4KB per block.
Below is a visualization.
I have highlighted two of the threads and the values they access in the inner loop in orange and red.
All of the important changes for this kernel happen in the inner loop.
The loading from GMEM to SMEM stays largely the same as before.
Let’s have a look (Godbolt link):
// allocate thread-local cache for results in registerfile
float threadResults[TM] = {0.0};
// outer loop over block tiles
for (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {
// populate the SMEM caches (same as before)
As[innerRowA * BK + innerColA] = A[innerRowA * K + innerColA];
Bs[innerRowB * BN + innerColB] = B[innerRowB * N + innerColB];
__syncthreads();
// advance blocktile for outer loop
A += BK;
B += BK * N;
// calculate per-thread results
for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
// we make the dotproduct loop the outside loop, which facilitates
// reuse of the Bs entry, which we can cache in a tmp var.
float Btmp = Bs[dotIdx * BN + threadCol];
for (uint resIdx = 0; resIdx < TM; ++resIdx) {
threadResults[resIdx] +=
As[(threadRow * TM + resIdx) * BK + dotIdx] * Btmp;
}
}
__syncthreads();
}
This kernel achieves ~8600 GFLOPs, roughly 3.9x faster than our previous kernel.
Let’s calculate how many memory accesses each thread performs, first in our previous kernel, where each thread computed one result, and then in our new kernel, where each thread computes eight results.
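A back-of-the-envelope count of the loads each thread issues (K is the shared dimension):
Kernel 3 (one result per thread): GMEM: K/32 outer-loop iterations * 2 loads = K/16. SMEM: K/32 outer-loop iterations * BLOCKSIZE (=32) * 2 loads = 2K. Per result: K/16 GMEM and 2K SMEM accesses.
Kernel 4 (eight results per thread): GMEM: K/8 outer-loop iterations * 2 loads = K/4. SMEM: K/8 outer-loop iterations * BK (=8) * (1 + TM (=8)) loads = 9K. Per result: K/32 GMEM and ~1.1K SMEM accesses.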
As expected, we now spend far fewer cycles per instruction stalling due to memory pressure. (Careful: the axis has changed compared to the previous plot.)
Sidenote on Compiler Optimizations
Above we explicitly cached the entry of B into Btmp and reordered the two inner loops for efficiency.
If we don’t do that, then the code looks like this:
for (uint resIdx = 0; resIdx < TM; ++resIdx) {
for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
threadResults[resIdx] +=
As[(threadRow * TM + resIdx) * BK + dotIdx] * Bs[dotIdx * BN + threadCol];
}
}
Interestingly, this has no adverse effect on performance.
This is surprising since our inner two loops now incur BK (=8) * TM (=8) * 2 = 128 SMEM accesses, instead of the previous 72.
Looking at the assembly (Godbolt link) has the answer:
// first inner-most loop
ld.shared.f32 %f45, [%r9];
ld.shared.f32 %f46, [%r8];
fma.rn.f32 %f47, %f46, %f45, %f212;
ld.shared.f32 %f48, [%r9+256];
ld.shared.f32 %f49, [%r8+4];
fma.rn.f32 %f50, %f49, %f48, %f47;
ld.shared.f32 %f51, [%r9+512];
ld.shared.f32 %f52, [%r8+8];
fma.rn.f32 %f53, %f52, %f51, %f50;
ld.shared.f32 %f54, [%r9+768];
ld.shared.f32 %f55, [%r8+12];
fma.rn.f32 %f56, %f55, %f54, %f53;
ld.shared.f32 %f57, [%r9+1024];
ld.shared.f32 %f58, [%r8+16];
fma.rn.f32 %f59, %f58, %f57, %f56;
ld.shared.f32 %f60, [%r9+1280];
ld.shared.f32 %f61, [%r8+20];
fma.rn.f32 %f62, %f61, %f60, %f59;
ld.shared.f32 %f63, [%r9+1536];
ld.shared.f32 %f64, [%r8+24];
fma.rn.f32 %f65, %f64, %f63, %f62;
ld.shared.f32 %f66, [%r9+1792];
ld.shared.f32 %f67, [%r8+28];
fma.rn.f32 %f212, %f67, %f66, %f65;
// second inner-most loop
ld.shared.f32 %f68, [%r8+32];
fma.rn.f32 %f69, %f68, %f45, %f211;
ld.shared.f32 %f70, [%r8+36];
fma.rn.f32 %f71, %f70, %f48, %f69;
ld.shared.f32 %f72, [%r8+40];
fma.rn.f32 %f73, %f72, %f51, %f71;
ld.shared.f32 %f74, [%r8+44];
fma.rn.f32 %f75, %f74, %f54, %f73;
ld.shared.f32 %f76, [%r8+48];
fma.rn.f32 %f77, %f76, %f57, %f75;
ld.shared.f32 %f78, [%r8+52];
fma.rn.f32 %f79, %f78, %f60, %f77;
ld.shared.f32 %f80, [%r8+56];
fma.rn.f32 %f81, %f80, %f63, %f79;
ld.shared.f32 %f82, [%r8+60];
fma.rn.f32 %f211, %f82, %f66, %f81;
// ... continues like this for inner-loops 3-8 ...
The compiler unrolls both loops (it can, since the loop counts are known at compile time) and then eliminates the repeated SMEM loads of the Bs entries, so we end up with the same number of SMEM accesses as our optimized CUDA code.
When the PTX is compiled to SASS, the SMEM loads from Bs are vectorized (this already hints at an optimization we’ll perform later: transposing As such that we can also vectorize those loads):
LDS R26, [R35.X4+0x800] // a 32b load from As
LDS.128 R8, [R2] // a 128b load from Bs
LDS.128 R12, [R2+0x20]
LDS R24, [R35.X4+0x900]
LDS.128 R20, [R2+0x60]
LDS R36, [R35.X4+0xb00]
LDS.128 R16, [R2+0x40]
LDS.128 R4, [R2+0x80]
LDS R38, [R35.X4+0xd00]
Areas of Improvement: Arithmetic Intensity
Our current kernel still suffers from the same stalling-for-memory problem as kernel 3, just to a lesser extent.
So we’ll just apply the same optimization again: computing even more results per thread.
The main reason this makes our kernel run faster is that it increases arithmetic intensity, defined as the number of FLOPs executed per byte transferred (load + store!) between GMEM and SMEM.
Below I tried to make it more immediately obvious why calculating more results per thread raises arithmetic intensity: it’s more efficient to calculate a square of results per thread than a column of results, because we can share more of the inputs.
In conclusion, all our kernels perform the same number of FLOPs, but we can reduce the number of GMEM accesses by calculating more results per thread.
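To put rough numbers on it (a back-of-the-envelope comparison): producing 64 results per thread as a 64x1 column would require 64 values of A and 1 value of B per dot-product step, i.e. 65 loads for 64 FMAs. Producing them as an 8x8 square requires only 8 values of A and 8 of B per step, i.e. 16 loads for the same 64 FMAs, roughly 4x fewer inputs per FLOP.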
We’ll continue optimizing arithmetic intensity for as long as we’re still memory bound.
Kernel 5: Increasing Arithmetic Intensity via 2D Blocktiling
The basic idea for kernel 5 will be to compute a grid of 8*8 elements of C per thread.
The first stage of the kernel is for all threads to work together to populate the SMEM cache.
We’ll have each thread load multiple elements.
This code looks like so (the accompanying figure illustrates the GMEM loading graphically):
for (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {
As[(innerRowA + loadOffset) * BK + innerColA] =
A[(innerRowA + loadOffset) * K + innerColA];
}
for (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {
Bs[(innerRowB + loadOffset) * BN + innerColB] =
B[(innerRowB + loadOffset) * N + innerColB];
}
__syncthreads();
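The indices used here (innerRowA, innerColA, strideA, and friends) describe which SMEM slot each thread fills and how far the whole block advances per load iteration. Roughly, they are set up like this (a sketch following the repo; exact details may differ):
// total number of threads in the blocktile
const uint numThreadsBlocktile = (BM * BN) / (TM * TN);
// for loading A: consecutive threads fill consecutive columns of As,
// which keeps the GMEM reads coalesced
const uint innerRowA = threadIdx.x / BK;
const uint innerColA = threadIdx.x % BK;
// how many rows of As the full thread block covers per load iteration
const uint strideA = numThreadsBlocktile / BK;
// same for B, which is BN columns wide
const uint innerRowB = threadIdx.x / BN;
const uint innerColB = threadIdx.x % BN;
const uint strideB = numThreadsBlocktile / BN;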
Now that the SMEM cache is populated, we have each thread multiply its relevant SMEM entries and accumulate the result into local registers.
Below I illustrated the (unchanged) outer loop along the input matrices, and the three inner loops for the dot product and the TN and TM dimension:
The interesting parts of the code look like this (Godbolt link):
// allocate thread-local cache for results in registerfile
float threadResults[TM * TN] = {0.0};
// register caches for As and Bs
float regM[TM] = {0.0};
float regN[TN] = {0.0};
// outer-most loop over block tiles
for (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {
// populate the SMEM caches
for (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {
As[(innerRowA + loadOffset) * BK + innerColA] =
A[(innerRowA + loadOffset) * K + innerColA];
}
for (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {
Bs[(innerRowB + loadOffset) * BN + innerColB] =
B[(innerRowB + loadOffset) * N + innerColB];
}
__syncthreads();
// advance blocktile
A += BK; // move BK columns to right
B += BK * N; // move BK rows down
// calculate per-thread results
for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
// load relevant As & Bs entries into registers
for (uint i = 0; i < TM; ++i) {
regM[i] = As[(threadRow * TM + i) * BK + dotIdx];
}
for (uint i = 0; i < TN; ++i) {
regN[i] = Bs[dotIdx * BN + threadCol * TN + i];
}
// perform outer product on register cache, accumulate
// into threadResults
for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {
for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {
threadResults[resIdxM * TN + resIdxN] +=
regM[resIdxM] * regN[resIdxN];
}
}
}
__syncthreads();
}
In the inner loop, we can reduce the number of SMEM accesses by making dotIdx the outer loop, and explicitly loading the values we need for the two inner loops into registers.
Below is a drawing of the dotIdx loop across time, to visualize which SMEM entries get loaded into thread-local registers at each step. (I had to reduce some dimensions to make it easier to draw; in the kernel, BK=TM=TN=8.)
Resulting performance: 16TFLOPs, another 2x improvement.
Let’s repeat the memory access calculation.
We’re now calculating TM*TN = 8*8 = 64 results per thread.
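Counting as before (assuming BM=BN=128, BK=TM=TN=8 and thus 256 threads per block, as in the repo): each thread loads (BM*BK)/256 = 4 elements of A and (BK*BN)/256 = 4 elements of B from GMEM per outer-loop iteration, so K/8 iterations * 8 loads = K GMEM loads in total, or K/64 per result. From SMEM it loads TM + TN = 16 values per dotIdx step, i.e. K/8 * 8 * 16 = 16K loads in total, or K/4 per result, down from K/32 GMEM and ~1.1K SMEM accesses per result in kernel 4.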
Slowly, performance is reaching acceptable levels; however, warp stalls due to memory pipeline congestion are still too frequent.
For kernel 6 we’ll take two measures to try to improve that: Transposing As to enable auto-vectorization of SMEM loads, and promising the compiler alignment on the GMEM accesses.
Kernel 6: Vectorize SMEM and GMEM Accesses
The first optimization that I already hinted at earlier is to transpose As.
This will allow us to load from As using vectorized SMEM loads (LDS.128 in SASS).
Below is the same visualization of the three inner loops as for kernel 5, but now with As transposed in memory:
Looking at the assembly (Godbolt link), we see that loading As into the registers, which used to be a 32b LDS load, is now also a 128b LDS.128 load, just like the loads from Bs already were.
This gives us a 500GFLOPs speedup, or ~3%.
Next, we’ll vectorize all loads and stores from/to GMEM using vector datatypes, namely float4.
The code looks like this (Godbolt link for the full kernel):
float4 tmp =
reinterpret_cast<float4 *>(&A[innerRowA * K + innerColA * 4])[0];
// transpose A during the GMEM to SMEM transfer
As[(innerColA * 4 + 0) * BM + innerRowA] = tmp.x;
As[(innerColA * 4 + 1) * BM + innerRowA] = tmp.y;
As[(innerColA * 4 + 2) * BM + innerRowA] = tmp.z;
As[(innerColA * 4 + 3) * BM + innerRowA] = tmp.w;
reinterpret_cast<float4 *>(&Bs[innerRowB * BN + innerColB * 4])[0] =
reinterpret_cast<float4 *>(&B[innerRowB * N + innerColB * 4])[0];
__syncthreads();
This leads to the 32b GMEM load instructions (LDG.E and STG.E) being replaced with 128b counterparts (LDG.E.128 and STG.E.128).
Initially, I was confused as to why running this:
reinterpret_cast<float4 *>(&Bs[innerRowB * BN + innerColB * 4])[0] =
reinterpret_cast<float4 *>(&B[innerRowB * N + innerColB * 4])[0];
would be any faster than just manually unrolling the access (or using pragma unroll):
Bs[innerRowB * BN + innerColB * 4 + 0] = B[innerRowB * N + innerColB * 4 + 0];
Bs[innerRowB * BN + innerColB * 4 + 1] = B[innerRowB * N + innerColB * 4 + 1];
Bs[innerRowB * BN + innerColB * 4 + 2] = B[innerRowB * N + innerColB * 4 + 2];
Bs[innerRowB * BN + innerColB * 4 + 3] = B[innerRowB * N + innerColB * 4 + 3];
Shouldn’t the compiler just be able to coalesce the 2nd version and also generate 128b loads?
I think the reason is that the compiler has no way to verify that the float* B pointer that is passed to the kernel is 128b aligned, which would be a requirement for using LDG.E.128.
So the reinterpret_cast’s only purpose is to promise the compiler that the float* B pointer will be aligned. (Compare this to the SMEM loads, where the compiler can generate vectorized loads automatically, because it lays out that memory itself and hence knows its alignment.)
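The same float4 trick applies when writing the results back to C. A sketch of the epilogue, following the repo's kernel 6 (this assumes TN is a multiple of 4):
for (uint resIdxM = 0; resIdxM < TM; resIdxM += 1) {
  for (uint resIdxN = 0; resIdxN < TN; resIdxN += 4) {
    // load 4 entries of C as one 128b transaction, apply alpha/beta, store back
    float4 tmp = reinterpret_cast<float4 *>(
        &C[(threadRow * TM + resIdxM) * N + threadCol * TN + resIdxN])[0];
    tmp.x = alpha * threadResults[resIdxM * TN + resIdxN] + beta * tmp.x;
    tmp.y = alpha * threadResults[resIdxM * TN + resIdxN + 1] + beta * tmp.y;
    tmp.z = alpha * threadResults[resIdxM * TN + resIdxN + 2] + beta * tmp.z;
    tmp.w = alpha * threadResults[resIdxM * TN + resIdxN + 3] + beta * tmp.w;
    reinterpret_cast<float4 *>(
        &C[(threadRow * TM + resIdxM) * N + threadCol * TN + resIdxN])[0] = tmp;
  }
}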
Kernel 6 achieves 19TFLOPs.
The profiler still shows a bunch of problem areas and optimization opportunities: We’re running into shared-memory bank conflicts (which cuBLAS avoids), our occupancy is higher than necessary, and we haven’t implemented any double buffering (which the CUTLASS docs seem to suggest is pretty useful).
But before we get to those, let’s cover some more low-hanging fruit: Autotuning the kernel’s parameters.
Kernel 9: Autotuning
(I skipped kernels 7 and 8, which I wrote while figuring out how best to eliminate shared-memory bank conflicts. They eliminate the conflicts but were overall still slower, so I won’t cover them here.)
We’ve accumulated a total of five template parameters: BM, BN, BK, TM and TN.
For kernel 6, these were set to BM=BN=128 and BK=TM=TN=8.
I wrote a bash script that searches through all sensible combinations and benchmarks their runtime.
This required me to make sure that every parameter combination the script tried still compiled and produced correct results.
The necessary modifications to the code ended up taking quite some time to implement.
It turns out that the optimal parameters vary quite a bit depending on the GPU model. (I guess that’s why compilers like Triton provide routines for autotuning. I wonder how this works for cuBLAS; they probably store a precomputed mapping from {GPU type, matrix size, dtype, …} to the optimal GEMM implementation inside the cuBLAS binary.)
On my A6000, BM=BN=128 BK=16 TM=TN=8 increased performance by 5%, from 19 to 20 TFLOPs.
On an A100 SXM4 40GB, that same configuration reached 12 TFLOPs, 6% worse than the optimal setting found by the autotuner (BM=BN=64, BK=16, TM=TN=4), which reached 12.6 TFLOPs. (The A100 has worse fp32 performance than the A6000, which is why the FLOPs numbers are lower; cuBLAS reaches 14.7 TFLOPs on the A100. Nvidia rates the A100 at 19.5 TFLOPs and the A6000 at 38.7 TFLOPs.)
I can’t explain why these specific parameters end up producing the optimal performance.
Autotuning works, and every high-performance library uses it, but it also feels very unsatisfying. (I’m sure that with enough time, enough access to low-level performance counters and some facetime with Nvidia engineers, I’d eventually figure it out. It’s good to have a strong belief that computers can be understood.)
Kernel 10: Warptiling
Currently, our loop structure looks like this: an outer loop over blocktiles that advances along K and refills the SMEM caches, inside it the dotIdx loop over the SMEM tiles, and innermost the per-thread TM*TN loops that accumulate results in registers.
We’ll now add another hierarchy of tiling, in between our blocktiling and threadtiling loops: warptiling.
Warptiling is somewhat confusing initially since unlike blocks and threads, warps don’t show up anywhere in the CUDA code explicitly.
They are a hardware feature that has no direct analog in the scalar CUDA-software world.
We can calculate a given thread’s warpId as warpId = threadIdx.x / warpSize, where warpSize is a built-in variable that is equal to 32 on any CUDA GPU I’ve ever worked with.
Warps are relevant for performance since (among other reasons) they are the unit of scheduling that gets mapped onto the warp schedulers, and shared-memory bank conflicts happen only between threads in the same warp.
Warptiling is elegant since we now make explicit all levels of parallelism: blocktiling (different blocks run in parallel on different SMs), warptiling (different warps run in parallel on different warp schedulers, and concurrently on the same scheduler), and threadtiling (the instructions of a single thread can be executed in parallel through instruction-level parallelism).
The warptiling looks like this in the CUDA code (Godbolt link):
// dotIdx loops over contents of SMEM
for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
// populate registers for this thread's part of the warptile
for (uint wSubRowIdx = 0; wSubRowIdx < WMITER; ++wSubRowIdx) {
for (uint i = 0; i < TM; ++i) {
regM[wSubRowIdx * TM + i] =
As[(dotIdx * BM) + warpRow * WM + wSubRowIdx * WSUBM +
threadRowInWarp * TM + i];
}
}
for (uint wSubColIdx = 0; wSubColIdx < WNITER; ++wSubColIdx) {
for (uint i = 0; i < TN; ++i) {
regN[wSubColIdx * TN + i] =
Bs[(dotIdx * BN) + warpCol * WN + wSubColIdx * WSUBN +
threadColInWarp * TN + i];
}
}
// execute warptile matmul. Later this will map well to
// warp-wide matrix instructions, executed on tensor cores.
for (uint wSubRowIdx = 0; wSubRowIdx < WMITER; ++wSubRowIdx) {
for (uint wSubColIdx = 0; wSubColIdx < WNITER; ++wSubColIdx) {
// calculate per-thread results with register-cache locality
for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {
for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {
threadResults[(wSubRowIdx * TM + resIdxM) * (WNITER * TN) +
(wSubColIdx * TN) + resIdxN] +=
regM[wSubRowIdx * TM + resIdxM] *
regN[wSubColIdx * TN + resIdxN];
}
}
}
}
}
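The placement variables used above (warpRow, warpCol, WSUBM, WSUBN, threadRowInWarp, threadColInWarp) are derived once at the top of the kernel. Roughly (a sketch following the repo, where WARPSIZE is a compile-time constant of 32 and WMITER/WNITER count how often each warp iterates over its tile):
// placement of the warp within the blocktile
const uint warpIdx = threadIdx.x / WARPSIZE;
const uint warpCol = warpIdx % (BN / WN);
const uint warpRow = warpIdx / (BN / WN);
// size of a single warp subtile; each warp iterates WMITER x WNITER times
constexpr uint WSUBM = WM / WMITER;
constexpr uint WSUBN = WN / WNITER;
// placement of the thread within its warp subtile
const uint threadIdxInWarp = threadIdx.x % WARPSIZE;
const uint threadColInWarp = threadIdxInWarp % (WSUBN / TN);
const uint threadRowInWarp = threadIdxInWarp / (WSUBN / TN);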
I tried my best to visualize all three levels of tiling below, although the structure is getting quite complex. (The CUTLASS docs about efficient GEMMs go even more in-depth into warptiling, and their visualizations are illuminating.)
Each warp will compute a chunk of size (WSUBN * WNITER) x (WSUBM * WMITER).
Each thread computes WNITER * WMITER many chunks of size TM*TN.
After autotuning the parameters, performance improves from 19.7 TFLOPs to 21.7 TFLOPs on my A6000.
Here’s a plot that compares our warptiling kernel against cuBLAS across increasing matrix sizes: I generated this plot on an A100, which is why the absolute FLOPs numbers are different.
At dimensions 2048 and 4096, our measured FLOPs are only a few percentage points slower than cuBLAS.
However, for smaller matrices, we’re doing poorly in comparison to Nvidia’s library!
This happens because cuBLAS contains not one single implementation of SGEMM, but hundreds of them. (I guess that’s why the library is 500MB of compiled code. To print all the kernels: cuobjdump --list-text <cublas location>.)
At runtime, based on the dimensions, cuBLAS will pick which kernel to run. (I launched matmuls for square matrices at all dimensions up to 4096 and found 16 different SGEMM kernels. Here’s a script for finding the kernel that was launched by cuBLAS, h/t Horace He.)
I traced the cuBLAS call (using the Nsight Systems CLI) to see which kernels it calls at each size.
At dimension 256 it calls two kernels: a split-K matmul kernel followed by a reduction kernel. (Split-K refers to partitioning the K-dimension across multiple threadblocks. Each block then computes only part of its chunk of C, and cuBLAS follows up with a reduce kernel to accumulate the final result. This requires some extra memory to store the intermediate results before the reduction; I imagine this is roughly how it works, but I’m uncertain here.)
So if we were trying to write a high-performance library that works for all shapes and sizes we would have specializations for different shapes, and at runtime dispatch to the one that’s the best fit.
I also want to report a negative result: for this kernel, I additionally implemented an optimization called threadblock swizzling.
This technique assumes that threadblocks are launched in order of increasing blockIdx, and optimizes the mapping of blockIdx to C chunks in a way that should increase L2 locality. (Remember that L2 is a cache for global memory that exists once for the whole GPU.)
This Nvidia post has more info and visualizations.
It didn’t increase performance, presumably because the L2 hit rate is already fairly high at 80%, so I ended up removing the swizzling code. (The commit is here if anyone is interested.)
A final note on the loop structure: it makes sense to move the loop over BK towards the outside, since it follows our maxim of “load some data, then do as much work on that data as possible”.
It further means that all computation that happens inside the BK loop will be independent and can be parallelized (for example using ILP).
We can now also start prefetching the data necessary for the next loop iteration already, a technique called double buffering.
Work in Progress: Kernel 11
If I get back to working on this post, double buffering and eliminating the remaining shared-memory bank conflicts are what I’d look at next.
Conclusion
Writing this post was a similar experience to my previous post on optimizing SGEMM on CPU: Optimizing SGEMM iteratively is one of the best ways to deeply understand the performance characteristics of the hardware.
When writing the CUDA programs, I was surprised by how easy it was to implement each kernel once I had made a good visualization of how I wanted it to work.
Also: power laws are everywhere.
It took me two weekends to write the first 6 kernels which reach 80% of peak FLOPs, and then 4 more weekends to do autotuning and warptiling to get to 94%.
How much I’m learning while writing this code has also seen diminishing returns, hence I’m putting off hunting down the last 6% until some future time.
All my code is available on Github.
Lastly, a big thanks to the creators of Godbolt.org (for looking at PTX and SASS assembly) and Excalidraw (for drawing the kernels)!
Both of these tools are a joy to use and have helped me learn much faster.
If you enjoy kernel work like this you’re likely a good fit for the Performance team at Anthropic. Come work with me! The team is headed by Tristan Hume who is the most capable & thoughtful manager I’ve ever had. We optimize Anthropic’s model for GPUs, TPUs and AWS Trainium. Feel free to reach out!
Further Resources and References
Fundamental Optimizations in CUDA
Peng Wang, Developer Technology, NVIDIA
Optimization Overview
GPU architecture
Kernel optimization
— Memory optimization
— Latency optimization
— Instruction optimization
CPU-GPU interaction optimization
— Overlapped execution using streams
GPU High Level View
(Figure: an array of streaming multiprocessors attached to global memory.)
Fermi Multiprocessor
2 Warp Scheduler
— In-order dual-issue
— Up to 1536 concurrent threads
32 CUDA Cores
— Full IEEE 754-2008 FP32 and FP64
— 32 FP32 ops/clock, 16 FP64 ops/clock
Configurable 16/48 KB shared memory
Configurable 16/48 KB L1 cache
4 SFUs
32K 32-bit registers
(Figure: Fermi SM block diagram — instruction cache, two schedulers with dispatch units, register file, 32 cores, 16 load/store units, 4 SFUs, interconnect network, 64K configurable cache / shared memory, uniform cache.)
GPU and Programming Model
Warp and SIMT
(Figure: a block divided into warps of 32 threads each.)
• Blocks divide into groups of 32 threads called warps
• Warps are basic scheduling units
• Context switching is free
• A lot of warps can hide memory latency
• Warps always perform the same instruction (SIMT)
• Each thread CAN execute its own code path
Fermi Memory Hierarchy
Register
— Spills to local memory
Caches
— Shared memory
— L1 cache
— L2 cache
— Constant cache
— Texture cache
Global memory
Fermi Memory Hierarchy Review
(Figure: each SM has its own registers, L1 and SMEM; all SMs share the L2 and global memory.)
General Optimization Strategies: Measurement
Find out the limiting factor in kernel performance
— Memory bandwidth bound (memory optimization)
— Instruction throughput bound (instruction optimization)
— Latency bound (configuration optimization)
Measure effective memory/instruction throughput
Optimize for peak memory/instruction throughput
— Finding out the bottleneck
— Typically an iterative process
Memory Optimization
If the code is memory-bound and effective memory throughput is much lower than the peak
Purpose: access only data that are absolutely necessary
Major techniques
— Improve access pattern to reduce wasted transactions: coalescing
— Reduce redundant access: shared memory
Coalescing
Global memory latency: 400-800 cycles
— The single most important performance consideration!
Coalescing: global memory accesses from a warp can be coalesced into a single transaction
Criterion: requests from a warp falling in one L1 cache line → one transaction
# transactions = # L1 lines accessed
Caching or Non-caching?
On Fermi, by default all global memory accesses are cached in L1.
— L1 can be bypassed by passing "-Xptxas -dlcm=cg" to nvcc: cache only in L2
If non-cached: same coalescing criterion
— But transaction size can be reduced to a 32B segment
Caching or Non-caching?
Caching
— Helps on some non-coalesced access, e.g. misaligned
— May lead to lower performance for some uncoalesced access due to more wasted bandwidth
Non-caching
— Reduce wasted bandwidth
— Leave more space for register spilling
Caching Load
Warp requests 32 aligned, consecutive 4-byte words
Addresses fall within 1 cache-line
— Warp needs 128 bytes
— 128 bytes move across the bus on a miss
— Bus utilization: 100%
Caching Load
Warp requests 32 aligned, permuted 4-byte words
Addresses fall within 1 cache-line
— Warp needs 128 bytes
— 128 bytes move across the bus on a miss
— Bus utilization: 100%
Caching Load
Warp requests 32 misaligned, consecutive 4-byte words
Addresses fall within 2 cache-lines
— Warp needs 128 bytes
— 256 bytes move across the bus on misses
— Bus utilization: 50%
Non-caching Load
Warp requests 32 misaligned, consecutive 4-byte words
Addresses fall within at most 5 segments
— Warp needs 128 bytes
— At most 160 bytes move across the bus
— Bus utilization: at least 80%
Some misaligned patterns will fall within 4 segments, so 100% utilization
Caching Load
All threads in a warp request the same 4-byte word
Addresses fall within a single cache-line
— Warp needs 4 bytes
— 128 bytes move across the bus on a miss
— Bus utilization: 3.125%
Non-caching Load
All threads in a warp request the same 4-byte word
Addresses fall within a single segment
— Warp needs 4 bytes
— 32 bytes move across the bus on a miss
— Bus utilization: 12.5%
Caching Load
Warp requests 32 scattered 4-byte words
Addresses fall within N cache-lines
— Warp needs 128 bytes
— N*128 bytes move across the bus on a miss
— Bus utilization: 128 / (N*128)
Non-caching Load
Warp requests 32 scattered 4-byte words
Addresses fall within N segments
— Warp needs 128 bytes
— N*32 bytes move across the bus on a miss
— Bus utilization: 128 / (N*32)
Shared Memory
Low latency: a few cycles
High throughput: 73.6 GB/s per SM (1.03 TB/s per GPU)
Main use
— Inter-thread communication within a block
— User-managed cache to reduce redundant global memory accesses
— Avoid non-coalesced access
Shared Memory Example: Matrix Multiplication
C = A x B. Every thread corresponds to one entry in C.
Naive Kernel
__global__ void simpleMultiply(float* a,
float* b,
float* c,
int N)
{
int row = threadIdx.x + blockIdx.x*blockDim.x;
int col = threadIdx.y + blockIdx.y*blockDim.y;
float sum = 0.0f;
for (int i = 0; i < N; i++) {
sum += a[row*N+i] * b[i*N+col];
}
c[row*N+col] = sum;
}
Blocked Matrix Multiplication
C = A x B, with data reuse in the blocked version.
Blocked and cached kernel
__global__ void coalescedMultiply(float* a,
                                  float* b,
                                  float* c,
                                  int N)
{
  __shared__ float aTile[TILE_DIM][TILE_DIM];
  __shared__ float bTile[TILE_DIM][TILE_DIM];
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  float sum = 0.0f;
  for (int k = 0; k < N; k += TILE_DIM) {
    // each thread loads one element of the A tile and one of the B tile
    aTile[threadIdx.y][threadIdx.x] = a[row*N + k + threadIdx.x];
    bTile[threadIdx.y][threadIdx.x] = b[(k + threadIdx.y)*N + col];
    __syncthreads();
    for (int i = 0; i < TILE_DIM; i++)
      sum += aTile[threadIdx.y][i] * bTile[i][threadIdx.x];
    __syncthreads(); // don't overwrite the tiles before all threads are done
  }
  c[row*N + col] = sum;
}
Performance Results
M=N=K=512
Bank Conflicts
Shared memory is divided into banks
— Successive 32-bit words assigned to successive banks
— Number of banks = 32 (Fermi)
Bank conflict: if two R/W requests fall in the same bank, the accesses are serialized.
Special cases
— If all threads in a warp access the same word: one broadcast. Fermi can also do multi-broadcast.
— If reading contiguous bytes/doubles: no conflict on Fermi
(Figure: shared memory laid out across banks 0 through 31.)
Bank Access Examples
Optimizing Bank Conflict
Measure whether it matters
Change SMEM reads to the same value to see the impact
Avoiding bank conflict
— Change address patterns
— Padding
Use array[N_BANK][N_BANK+1]
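As a concrete illustration (a hypothetical, simplified single-tile transpose, not from the slides): the extra padding column shifts each row of the tile by one bank, so a column-wise read no longer lands all 32 threads in the same bank.
__global__ void transposeTile(const float* in, float* out) {
  // 33-wide rows: element (r, c) sits at offset r*33 + c, so walking down a
  // column (varying r, fixed c) touches a different bank for every thread
  __shared__ float tile[32][32 + 1];
  int x = threadIdx.x, y = threadIdx.y;   // launched with a 32x32 block
  tile[y][x] = in[y * 32 + x];            // coalesced load, conflict-free store
  __syncthreads();
  out[y * 32 + x] = tile[x][y];           // column-wise SMEM read; without the
                                          // padding this would be a 32-way conflict
}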
Memory Optimizations
Strive for perfect coalescing
— Transpose the data structure, e.g. AOS to SOA
— Padding
— Change parallelization scheme: 1-thread-per-task to 1-warp-per-task?
Use shared memory to reduce global memory access and avoid non-coalesced access
Bind to the texture cache for unpredictable uncoalesced access
Use the constant cache if all threads in a warp will access the same constant data
Global Memory Throughput Metric
Measuring effective memory throughput:
— From the app point of view ("useful" bytes): the number of bytes needed by the algorithm divided by kernel time
— Compare to the theoretical bandwidth
70-80% is very good
Finding out bottleneck
— Start with global memory operations, achieve good throughput
— Add arithmetic, shared memory, etc, measuring perf as you go
Latency Optimization
When the code is latency bound
— Both the memory and instruction throughputs are far from the peak
Latency hiding:
— Instructions are issued in order
— A thread blocks when one of the operands isn’t ready
— Latency is hidden by switching threads
GMEM: 400-800 cycles
Arithmetic: 18-22 cycles
Purpose: have enough concurrency to hide latency
Major techniques: increase concurrency
— Adjust resource usage to increase active warps (TLP)
Grid/Block Size Heuristics
# of blocks >> # of SMs; > 100 to scale well to future devices
Block size should be a multiple of 32 (warp size)
Minimum: 64. I generally use 128 or 256, but use whatever is best for your app.
Depends on the problem, do experiments!
Occupancy
Occupancy: ratio of active warps per SM to the maximum number of allowed warps
— Maximum number: 48 in Fermi
We need the occupancy to be high enough to hide latency
Occupancy is limited by resource usage
Dynamic Partitioning of SM Resources
Shared memory is partitioned among blocks
Registers are partitioned among threads: <= 63
Thread block slots: <= 8
Thread slots: <= 1536
Any of these can be the limiting factor on how many threads can be launched at the same time on an SM
If adding a single instruction leads to a significant perf drop, occupancy is the primary suspect
Latency Hiding Occupancy Calculation
Assume global memory takes 400 cycles; we need 400/2 = 200 arithmetic instructions to hide the latency.
Assume the code has 8 independent arithmetic instructions for every global memory access. Thus 200/8 ≈ 26 warps would be enough (54% occupancy).
Lessons:
— Required occupancy depends on BOTH architecture and application
— In this example, beyond 54% higher occupancy won't lead to further performance increases.
Occupancy Optimizations
Know the current occupancy
— Visual profiler
— --ptxas-options=-v: outputs resource usage info; input to the Occupancy Calculator
Adjust resource usage to increase occupancy
— Change block size
— Limit register usage
Compiler option -maxrregcount=n: per file
__launch_bounds__: per kernel
Use templates to reduce register usage
— Dynamically allocate shared memory
Occupancy Calculator
http://developer.download.nvidia.com/compute/cuda/CUDA_Occupancy_calculator.xls
Increase ILP of Each Thread
Load by itself doesn’t stall execution
Increment a 64M element array
— Two accesses per thread (load then store, but they are dependent)
Thus, each warp (32 threads) has one outstanding transaction at a time
Several independent smaller accesses have the same effect as one larger one.
For example:
Four 32-bit ~= one 128-bit
Instruction Optimization
If you find out the code is instruction bound
— A compute-intensive algorithm can easily become memory-bound if you're not careful
— Typically, worry about instruction optimization after memory and execution-configuration optimizations
Purpose: reduce instruction count
— Use fewer instructions to get the same job done
Major techniques
— Use high throughput instructions
— Reduce wasted instructions: branch divergence, bank conflict, etc.
Fermi Arithmetic Instruction Throughputs
Throughputs of common instructions
— Int & fp32: 2 cycles
— fp64: 2 cycles
— fp32 transcendental: 8 cycles
— Int divide and modulo are expensive
To divide by 2^n, use ">> n"
For modulo 2^n, use "& (2^n - 1)"
Reduce Instruction Count
Avoid automatic conversion of double to float
— Add "f" to floating-point literals (e.g. 1.0f), because the default is double
Fermi default: -ftz=false, -prec-div=true, -prec-sqrt=true for IEEE compliance
Fast math functions
— Two types of runtime math library functions
func(): slower but higher accuracy (5 ulp or less)
__func(): fast but lower accuracy (see prog. guide for full details)
— -use_fast_math: forces every func() to __func()
Control Flow
Divergent branches:
— Threads within a single warp take different paths
— Example with divergence:
if (threadIdx.x > 2) {...} else {...}
Branch granularity < warp size
— Divergence inside a warp is processed by turning off the inactive threads
Different if-else branches are both executed: serialized
Different warps can execute different code with no impact on performance
Avoid diverging within a warp
— Example without divergence:
if (threadIdx.x / WARP_SIZE > 2) {...} else {...}
Branch granularity is a whole multiple of warp size
Kernel Optimization Workflow
(Flowchart: find the limiter. If achieved GB/s is much lower than peak, the kernel is memory bound: do memory optimization. Else, if achieved Ginst/s is much lower than peak, it is instruction bound: do instruction optimization. If neither is close to peak, it is latency bound: do configuration optimization. If throughput is roughly at peak: done!)
Minimizing CPU-GPU data transfer
Host<->device data transfer has much lower bandwidth than global memory access.
— 8 GB/s (PCIe x16 Gen2) vs 156 GB/s & 515 Ginst/s (C2050)
Minimize transfer
— Keep intermediate data directly on the GPU
— Recompute
— Move CPU code to the GPU even if it has no performance gain of its own, if doing so reduces data transfer
Group transfers
— One large transfer is much better than many small ones: 10 microsec latency at 8 GB/s => latency dominated if data size < 80 KB
Streams and Async API
Default API:
— Kernel launches are asynchronous with CPU
— Memcopies (D2H, H2D) block CPU thread
— CUDA calls are serialized by the driver
Streams and async functions provide:
— Memcopies (D2H, H2D) asynchronous with CPU
— Ability to concurrently execute a kernel and a memcopy
— Concurrent kernel in Fermi
Stream = sequence of operations that execute in issue-order on GPU
— Operations from different streams can be interleaved
— A kernel and memcopy from different streams can be overlapped
Pinned (non-pageable) memory
Pinned memory enables:
— memcopies asynchronous with CPU & GPU
Usage
— cudaHostAlloc / cudaFreeHost
instead of malloc / free
— Additional flags if the pinned region is to be shared between lightweight CPU threads
Note:
— pinned memory is essentially removed from virtual memory
— cudaHostAlloc is typically very expensive
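A minimal sketch of the usage pattern above (buffer names and sizes are illustrative):

#include <cuda_runtime.h>

void pinnedExample(float *d_buf, size_t n) {
    float *h_buf;
    // pinned (page-locked) allocation instead of malloc
    cudaHostAlloc((void **)&h_buf, n * sizeof(float), cudaHostAllocDefault);
    // ... fill h_buf on the CPU ...
    // the copy is only truly asynchronous when the source is pinned
    cudaMemcpyAsync(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaDeviceSynchronize();
    cudaFreeHost(h_buf);   // instead of free
}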
Overlap kernel and memory copy
Requirements:
— D2H or H2D memcopy from pinned memory
— Device with compute capability ≥ 1.1 (G84 and later)
— Kernel and memcopy in different, non-0 streams
Code:
cudaStream_t stream1, stream2;
cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
// the following two operations are potentially overlapped:
cudaMemcpyAsync( dst, src, size, dir, stream1 );
kernel<<<grid, block, 0, stream2>>>(…);
Summary
Optimization needs an understanding of GPU architecture
Memory optimization: coalescing, shared memory
Execution configuration: latency hiding
Instruction throughput: use high-throughput instructions, reduce wasted cycles
Do measurements!
— Use the Profiler, simple code modifications
— Compare to theoretical peaks
CUDA C++ Best Practices Guide
The programming guide to using the CUDA Toolkit to obtain the best performance from NVIDIA GPUs.
1. Preface
This Best Practices Guide is a manual to help developers obtain the best performance from NVIDIA® CUDA® GPUs. It presents established parallelization and optimization techniques and explains coding metaphors and idioms that can greatly simplify programming for CUDA-capable GPU architectures.
While the contents can be used as a reference manual, you should be aware that some topics are revisited in different contexts as various programming and configuration topics are explored. As a result, it is recommended that first-time readers proceed through the guide sequentially. This approach will greatly improve your understanding of effective programming practices and enable you to better use the guide for reference later.
1.1. Who Should Read This Guide?
The discussions in this guide all use the C++ programming language, so you should be comfortable reading C++ code.
This guide refers to and relies on several other documents that you should have at your disposal for reference, all of which are available at no cost from the CUDA website https://docs.nvidia.com/cuda/. The following documents are especially important resources:
CUDA Installation Guide
CUDA C++ Programming Guide
CUDA Toolkit Reference Manual
In particular, the optimization section of this guide assumes that you have already successfully downloaded and installed the CUDA Toolkit (if not, please refer to the relevant CUDA Installation Guide for your platform) and that you have a basic familiarity with the CUDA C++ programming language and environment (if not, please refer to the CUDA C++ Programming Guide).
1.2. Assess, Parallelize, Optimize, Deploy
This guide introduces the Assess, Parallelize, Optimize, Deploy (APOD) design cycle for applications with the goal of helping application developers to rapidly identify the portions of their code that would most readily benefit from GPU acceleration, rapidly realize that benefit, and begin leveraging the resulting speedups in production as early as possible.
APOD is a cyclical process: initial speedups can be achieved, tested, and deployed with only minimal initial investment of time, at which point the cycle can begin again by identifying further optimization opportunities, seeing additional speedups, and then deploying the even faster versions of the application into production.
1.2.1. Assess
For an existing project, the first step is to assess the application to locate the parts of the code that are responsible for the bulk of the execution time. Armed with this knowledge, the developer can evaluate these bottlenecks for parallelization and start to investigate GPU acceleration.
By understanding the end-user's requirements and constraints and by applying Amdahl's and Gustafson's laws, the developer can determine the upper bound of performance improvement from acceleration of the identified portions of the application.
1.2.2. Parallelize
Having identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler.
On the other hand, some applications' designs will require some amount of refactoring to expose their inherent parallelism. As even CPU architectures will require exposing parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput.
1.2.3. Optimize
After each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible. However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned.
Optimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer's optimization efforts and provide references into the relevant portions of the optimization section of this guide.
1.2.4. Deploy
Having completed the GPU acceleration of one or more components of the application it is possible to compare the outcome with the original expectation. Recall that the initial assess step allowed the developer to determine an upper bound for the potential speedup attainable by accelerating given hotspots.
Before tackling other hotspots to improve the total speedup, the developer should consider taking the partially parallelized implementation and carry it through to production. This is important for a number of reasons; for example, it allows the user to profit from their investment as early as possible (the speedup may be partial but is still valuable), and it minimizes risk for the developer and the user by providing an evolutionary rather than revolutionary set of changes to the application.
1.3. Recommendations and Best Practices
Throughout this guide, specific recommendations are made regarding the design and implementation of CUDA C++ code. These recommendations are categorized by priority, which is a blend of the effect of the recommendation and its scope. Actions that present substantial improvements for most CUDA applications have the highest priority, while small optimizations that affect only very specific situations are given a lower priority.
Before implementing lower priority recommendations, it is good practice to make sure all higher priority recommendations that are relevant have already been applied. This approach will tend to provide the best results for the time invested and will avoid the trap of premature optimization.
The criteria of benefit and scope for establishing priority will vary depending on the nature of the program. In this guide, they represent a typical case. Your code might reflect different priority factors. Regardless of this possibility, it is good practice to verify that no higher-priority recommendations have been overlooked before undertaking lower-priority items.
Note
Code samples throughout the guide omit error checking for conciseness. Production code should, however, systematically check the error code returned by each API call and check for failures in kernel launches by calling cudaGetLastError().
1.4. Assessing Your Application
From supercomputers to mobile phones, modern processors increasingly rely on parallelism to provide performance. The core computational unit, which includes control, arithmetic, registers and typically some cache, is replicated some number of times and connected to memory via a network. As a result, all modern processors require parallel code in order to achieve good utilization of their computational power.
While processors are evolving to expose more fine-grained parallelism to the programmer, many existing applications have evolved either as serial codes or as coarse-grained parallel codes (for example, where the data is decomposed into regions processed in parallel, with sub-regions shared using MPI). In order to profit from any modern processor architecture, GPUs included, the first steps are to assess the application to identify the hotspots, determine whether they can be parallelized, and understand the relevant workloads both now and in the future.
2. Heterogeneous Computing
CUDA programming involves running code on two different platforms concurrently: a host system with one or more CPUs and one or more CUDA-enabled NVIDIA GPU devices.
While NVIDIA GPUs are frequently associated with graphics, they are also powerful arithmetic engines capable of running thousands of lightweight threads in parallel. This capability makes them well suited to computations that can leverage parallel execution.
However, the device is based on a distinctly different design from the host system, and it's important to understand those differences and how they determine the performance of CUDA applications in order to use CUDA effectively.
2.1. Differences between Host and Device
The primary differences are in threading model and in separate physical memories:
Execution pipelines on host systems can support a limited number of concurrent threads. For example, servers that have two 32-core processors can run only 64 threads concurrently (or a small multiple of that if the CPUs support simultaneous multithreading). By comparison, the smallest executable unit of parallelism on a CUDA device comprises 32 threads (termed a warp of threads). Modern NVIDIA GPUs can support up to 2048 active threads concurrently per multiprocessor (see Features and Specifications of the CUDA C++ Programming Guide). On GPUs with 80 multiprocessors, this leads to more than 160,000 concurrently active threads.
Threads on a CPU are generally heavyweight entities. The operating system must swap threads on and off CPU execution channels to provide multithreading capability. Context switches (when two threads are swapped) are therefore slow and expensive. By comparison, threads on GPUs are extremely lightweight. In a typical system, thousands of threads are queued up for work (in warps of 32 threads each). If the GPU must wait on one warp of threads, it simply begins executing work on another. Because separate registers are allocated to all active threads, no swapping of registers or other state need occur when switching among GPU threads. Resources stay allocated to each thread until it completes its execution. In short, CPU cores are designed to minimize latency for a small number of threads at a time each, whereas GPUs are designed to handle a large number of concurrent, lightweight threads in order to maximize throughput.
The host system and the device each have their own distinct attached physical memories 1. As the host and device memories are separated, items in the host memory must occasionally be communicated between device memory and host memory as described in What Runs on a CUDA-Enabled Device?.
These are the primary hardware differences between CPU hosts and GPU devices with respect to parallel programming. Other differences are discussed as they arise elsewhere in this document. Applications composed with these differences in mind can treat the host and device together as a cohesive heterogeneous system wherein each processing unit is leveraged to do the kind of work it does best: sequential work on the host and parallel work on the device.
2.2. What Runs on a CUDA-Enabled Device?
The following issues should be considered when determining what parts of an application to run on the device:
The device is ideally suited for computations that can be run on numerous data elements simultaneously in parallel. This typically involves arithmetic on large data sets (such as matrices) where the same operation can be performed across thousands, if not millions, of elements at the same time. This is a requirement for good performance on CUDA: the software must use a large number (generally thousands or tens of thousands) of concurrent threads. The support for running numerous threads in parallel derives from CUDA's use of a lightweight threading model described above.
To use CUDA, data values must be transferred from the host to the device. These transfers are costly in terms of performance and should be minimized. (See Data Transfer Between Host and Device.) This cost has several ramifications:
The complexity of operations should justify the cost of moving data to and from the device. Code that transfers data for brief use by a small number of threads will see little or no performance benefit. The ideal scenario is one in which many threads perform a substantial amount of work.
For example, transferring two matrices to the device to perform a matrix addition and then transferring the results back to the host will not realize much performance benefit. The issue here is the number of operations performed per data element transferred. For the preceding procedure, assuming matrices of size NxN, there are N^2 operations (additions) and 3N^2 elements transferred, so the ratio of operations to elements transferred is 1:3 or O(1). Performance benefits can be more readily achieved when this ratio is higher. For example, a matrix multiplication of the same matrices requires N^3 operations (multiply-add), so the ratio of operations to elements transferred is O(N), in which case the larger the matrix the greater the performance benefit. The types of operations are an additional factor, as additions have different complexity profiles than, for example, trigonometric functions. It is important to include the overhead of transferring data to and from the device in determining whether operations should be performed on the host or on the device.
Data should be kept on the device as long as possible. Because transfers should be minimized, programs that run multiple kernels on the same data should favor leaving the data on the device between kernel calls, rather than transferring intermediate results to the host and then sending them back to the device for subsequent calculations. So, in the previous example, had the two matrices to be added already been on the device as a result of some previous calculation, or if the results of the addition would be used in some subsequent calculation, the matrix addition should be performed locally on the device. This approach should be used even if one of the steps in a sequence of calculations could be performed faster on the host. Even a relatively slow kernel may be advantageous if it avoids one or more transfers between host and device memory. Data Transfer Between Host and Device provides further details, including the measurements of bandwidth between the host and the device versus within the device proper.
For best performance, there should be some coherence in memory access by adjacent threads running on the device. Certain memory access patterns enable the hardware to coalesce groups of reads or writes of multiple data items into one operation. Data that cannot be laid out so as to enable coalescing, or that doesn't have enough locality to use the L1 or texture caches effectively, will tend to see lesser speedups when used in computations on GPUs. A noteworthy exception to this are completely random memory access patterns. In general, they should be avoided, because compared to peak capabilities any architecture processes these memory access patterns at a low efficiency. However, compared to cache based architectures, like CPUs, latency hiding architectures, like GPUs, tend to cope better with completely random memory access patterns.
On Systems on a Chip with integrated GPUs, such as NVIDIA® Tegra®, host and device memory are physically the same, but there is still a logical distinction between host and device memory. See the Application Note on CUDA for Tegra for details.
3. Application Profiling
3.1. Profile
Many codes accomplish a significant portion of the work with a relatively small amount of code. Using a profiler, the developer can identify such hotspots and start to compile a list of candidates for parallelization.
3.1.1. Creating the Profile
There are many possible approaches to profiling the code, but in all cases the objective is the same: to identify the function or functions in which the application is spending most of its execution time.
Note
High Priority: To maximize developer productivity, profile the application to determine hotspots and bottlenecks.
The most important consideration with any profiling activity is to ensure that the workload is realistic - i.e., that information gained from the test and decisions based upon that information are relevant to real data. Using unrealistic workloads can lead to sub-optimal results and wasted effort both by causing developers to optimize for unrealistic problem sizes and by causing developers to concentrate on the wrong functions.
There are a number of tools that can be used to generate the profile. The following example is based on gprof, which is an open-source profiler for Linux platforms from the GNU Binutils collection.
$ gcc -O2 -g -pg myprog.c
$ gprof ./a.out > profile.txt
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
33.34 0.02 0.02 7208 0.00 0.00 genTimeStep
16.67 0.03 0.01 240 0.04 0.12 calcStats
16.67 0.04 0.01 8 1.25 1.25 calcSummaryData
16.67 0.05 0.01 7 1.43 1.43 write
16.67 0.06 0.01 mcount
0.00 0.06 0.00 236 0.00 0.00 tzset
0.00 0.06 0.00 192 0.00 0.00 tolower
0.00 0.06 0.00 47 0.00 0.00 strlen
0.00 0.06 0.00 45 0.00 0.00 strchr
0.00 0.06 0.00 1 0.00 50.00 main
0.00 0.06 0.00 1 0.00 0.00 memcpy
0.00 0.06 0.00 1 0.00 10.11 print
0.00 0.06 0.00 1 0.00 0.00 profil
0.00 0.06 0.00 1 0.00 50.00 report
3.1.2. Identifying Hotspots
In the example above, we can clearly see that the function genTimeStep() takes one-third of the total running time of the application. This should be our first candidate function for parallelization. Understanding Scaling discusses the potential benefit we might expect from such parallelization.
It is worth noting that several of the other functions in the above example also take up a significant portion of the overall running time, such as calcStats() and calcSummaryData(). Parallelizing these functions as well should increase our speedup potential. However, since APOD is a cyclical process, we might opt to parallelize these functions in a subsequent APOD pass, thereby limiting the scope of our work in any given pass to a smaller set of incremental changes.
3.1.3. Understanding Scaling
The amount of performance benefit an application will realize by running on CUDA depends entirely on the extent to which it can be parallelized. Code that cannot be sufficiently parallelized should run on the host, unless doing so would result in excessive transfers between the host and the device.
Note
High Priority: To get the maximum benefit from CUDA, focus first on finding ways to parallelize sequential code.
By understanding how applications can scale it is possible to set expectations and plan an incremental parallelization strategy. Strong Scaling and Amdahl's Law describes strong scaling, which allows us to set an upper bound for the speedup with a fixed problem size. Weak Scaling and Gustafson's Law describes weak scaling, where the speedup is attained by growing the problem size. In many applications, a combination of strong and weak scaling is desirable.
3.1.3.1. Strong Scaling and Amdahl's Law
Strong scaling is a measure of how, for a fixed overall problem size, the time to solution decreases as more processors are added to a system. An application that exhibits linear strong scaling has a speedup equal to the number of processors used.
Strong scaling is usually equated with Amdahl's Law, which specifies the maximum speedup that can be expected by parallelizing portions of a serial program. Essentially, it states that the maximum speedup S of a program is:
\(S = \frac{1}{(1 - P) + \frac{P}{N}}\)
Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.
The larger N is (that is, the greater the number of processors), the smaller the P/N fraction. It can be simpler to view N as a very large number, which essentially transforms the equation into \(S = 1/(1 - P)\). Now, if 3/4 of the running time of a sequential program is parallelized, the maximum speedup over serial code is 1 / (1 - 3/4) = 4.
In reality, most applications do not exhibit perfectly linear strong scaling, even if they do exhibit some degree of strong scaling. For most purposes, the key point is that the larger the parallelizable portion P is, the greater the potential speedup. Conversely, if P is a small number (meaning that the application is not substantially parallelizable), increasing the number of processors N does little to improve performance. Therefore, to get the largest speedup for a fixed problem size, it is worthwhile to spend effort on increasing P, maximizing the amount of code that can be parallelized.
3.1.3.2. Weak Scaling and Gustafson's Law
Weak scaling is a measure of how the time to solution changes as more processors are added to a system with a fixed problem size per processor; i.e., where the overall problem size increases as the number of processors is increased.
Weak scaling is often equated with Gustafson's Law, which states that in practice, the problem size scales with the number of processors. Because of this, the maximum speedup S of a program is:
\(S = N + (1 - P)(1 - N)\)
Here P is the fraction of the total serial execution time taken by the portion of code that can be parallelized and N is the number of processors over which the parallel portion of the code runs.
Another way of looking at Gustafson's Law is that it is not the problem size that remains constant as we scale up the system but rather the execution time. Note that Gustafson's Law assumes that the ratio of serial to parallel execution remains constant, reflecting additional cost in setting up and handling the larger problem.
3.1.3.3. Applying Strong and Weak Scaling
Understanding which type of scaling is most applicable to an application is an important part of estimating speedup. For some applications the problem size will remain constant and hence only strong scaling is applicable. An example would be modeling how two molecules interact with each other, where the molecule sizes are fixed.
For other applications, the problem size will grow to fill the available processors. Examples include modeling fluids or structures as meshes or grids and some Monte Carlo simulations, where increasing the problem size provides increased accuracy.
Having understood the application profile, the developer should understand how the problem size would change if the computational performance changes and then apply either Amdahl's or Gustafson's Law to determine an upper bound for the speedup.
4. Parallelizing Your Application
Having identified the hotspots and having done the basic exercises to set goals and expectations, the developer needs to parallelize the code. Depending on the original code, this can be as simple as calling into an existing GPU-optimized library such as cuBLAS, cuFFT, or Thrust, or it could be as simple as adding a few preprocessor directives as hints to a parallelizing compiler.
On the other hand, some applications' designs will require some amount of refactoring to expose their inherent parallelism. As even CPU architectures require exposing this parallelism in order to improve or simply maintain the performance of sequential applications, the CUDA family of parallel programming languages (CUDA C++, CUDA Fortran, etc.) aims to make the expression of this parallelism as simple as possible, while simultaneously enabling operation on CUDA-capable GPUs designed for maximum parallel throughput.
5. Getting Started
There are several key strategies for parallelizing sequential code. While the details of how to apply these strategies to a particular application are complex and problem-specific, the general themes listed here apply regardless of whether we are parallelizing code to run on multicore CPUs or for use on CUDA GPUs.
5.1. Parallel Libraries
The most straightforward approach to parallelizing an application is to leverage existing libraries that take advantage of parallel architectures on our behalf. The CUDA Toolkit includes a number of such libraries that have been fine-tuned for NVIDIA CUDA GPUs, such as cuBLAS, cuFFT, and so on.
The key here is that libraries are most useful when they match well with the needs of the application. Applications already using other BLAS libraries can often quite easily switch to cuBLAS, for example, whereas applications that do little to no linear algebra will have little use for cuBLAS. The same goes for other CUDA Toolkit libraries: cuFFT has an interface similar to that of FFTW, etc.
Also of note is the Thrust library, which is a parallel C++ template library similar to the C++ Standard Template Library. Thrust provides a rich collection of data parallel primitives such as scan, sort, and reduce, which can be composed together to implement complex algorithms with concise, readable source code. By describing your computation in terms of these high-level abstractions you provide Thrust with the freedom to select the most efficient implementation automatically. As a result, Thrust can be utilized in rapid prototyping of CUDA applications, where programmer productivity matters most, as well as in production, where robustness and absolute performance are crucial.
5.2. Parallelizing Compilers
Another common approach to parallelization of sequential codes is to make use of parallelizing compilers. Often this means the use of directives-based approaches, where the programmer uses a pragma or other similar notation to provide hints to the compiler about where parallelism can be found without needing to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow the compiler to do the detailed work of mapping the computation onto the parallel architecture.
The OpenACC standard provides a set of compiler directives to specify loops and regions of code in standard C, C++ and Fortran that should be offloaded from a host CPU to an attached accelerator such as a CUDA GPU. The details of managing the accelerator device are handled implicitly by an OpenACC-enabled compiler and runtime.
See http://www.openacc.org/ for details.
5.3. Coding to Expose Parallelism
For applications that need additional functionality or performance beyond what existing parallel libraries or parallelizing compilers can provide, parallel programming languages such as CUDA C++ that integrate seamlessly with existing sequential code are essential.
Once we have located a hotspot in our application's profile assessment and determined that custom code is the best approach, we can use CUDA C++ to expose the parallelism in that portion of our code as a CUDA kernel. We can then launch this kernel onto the GPU and retrieve the results without requiring major rewrites to the rest of our application.
This approach is most straightforward when the majority of the total running time of our application is spent in a few relatively isolated portions of the code. More difficult to parallelize are applications with a very flat profile - i.e., applications where the time spent is spread out relatively evenly across a wide portion of the code base. For the latter variety of application, some degree of code refactoring to expose the inherent parallelism in the application might be necessary, but keep in mind that this refactoring work will tend to benefit all future architectures, CPU and GPU alike, so it is well worth the effort should it become necessary.
6. Getting the Right Answer
Obtaining the right answer is clearly the principal goal of all computation. On parallel systems, it is possible to run into difficulties not typically found in traditional serial-oriented programming. These include threading issues, unexpected values due to the way floating-point values are computed, and challenges arising from differences in the way CPU and GPU processors operate. This chapter examines issues that can affect the correctness of returned data and points to appropriate solutions.
6.1. Verification
6.1.1. Reference Comparison
A key aspect of correctness verification for modifications to any existing program is to establish some mechanism whereby previous known-good reference outputs from representative inputs can be compared to new results. After each change is made, ensure that the results match using whatever criteria apply to the particular algorithm. Some will expect bitwise identical results, which is not always possible, especially where floating-point arithmetic is concerned; see Numerical Accuracy and Precision regarding numerical accuracy. For other algorithms, implementations may be considered correct if they match the reference within some small epsilon.
Note that the process used for validating numerical results can easily be extended to validate performance results as well. We want to ensure that each change we make is correct and that it improves performance (and by how much). Checking these things frequently as an integral part of our cyclical APOD process will help ensure that we achieve the desired results as rapidly as possible.
6.1.2. Unit Testing
A useful counterpart to the reference comparisons described above is to structure the code itself in such a way that is readily verifiable at the unit level. For example, we can write our CUDA kernels as a collection of many short __device__ functions rather than one large monolithic __global__ function; each device function can be tested independently before hooking them all together.
For example, many kernels have complex addressing logic for accessing memory in addition to their actual computation. If we validate our addressing logic separately prior to introducing the bulk of the computation, then this will simplify any later debugging efforts. (Note that the CUDA compiler considers any device code that does not contribute to a write to global memory as dead code subject to elimination, so we must at least write something out to global memory as a result of our addressing logic in order to successfully apply this strategy.)
Going a step further, if most functions are defined as __host__ __device__ rather than just __device__ functions, then these functions can be tested on both the CPU and the GPU, thereby increasing our confidence that the function is correct and that there will not be any unexpected differences in the results. If there are differences, then those differences will be seen early and can be understood in the context of a simple function.
As a useful side effect, this strategy will allow us a means to reduce code duplication should we wish to include both CPU and GPU execution paths in our application: if the bulk of the work of our CUDA kernels is done in __host__ __device__ functions, we can easily call those functions from both the host code and the device code without duplication.
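A minimal sketch of this pattern (function and kernel names are made up): the helper is compiled for both host and device, so it can be unit-tested on the CPU and reused unchanged inside the kernel.

#include <cassert>
#include <cuda_runtime.h>

// Helper usable from both host and device code.
__host__ __device__ inline float scaleAndShift(float x, float a, float b) {
    return a * x + b;
}

__global__ void applyScaleShift(float *data, int n, float a, float b) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = scaleAndShift(data[i], a, b);
}

// Host-side unit test of the shared helper; no GPU required.
void testScaleAndShift() {
    assert(scaleAndShift(2.0f, 3.0f, 1.0f) == 7.0f);
}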
6.2. Debugging
CUDA-GDB is a port of the GNU Debugger that runs on Linux and Mac; see: https://developer.nvidia.com/cuda-gdb.
The NVIDIA Nsight Visual Studio Edition is available as a free plugin for Microsoft Visual Studio; see: https://developer.nvidia.com/nsight-visual-studio-edition.
Several third-party debuggers support CUDA debugging as well; see: https://developer.nvidia.com/debugging-solutions for more details.
6.3. Numerical Accuracy and Precision
Incorrect or unexpected results arise principally from issues of floating-point accuracy due to the way floating-point values are computed and stored. The following sections explain the principal items of interest. Other peculiarities of floating-point arithmetic are presented in Features and Technical Specifications of the CUDA C++ Programming Guide as well as in a whitepaper and accompanying webinar on floating-point precision and performance available from https://developer.nvidia.com/content/precision-performance-floating-point-and-ieee-754-compliance-nvidia-gpus.
6.3.1. Single vs. Double Precision
Devices of CUDA Compute Capability 1.3 and higher provide native support for double-precision floating-point values (that is, values 64 bits wide). Results obtained using double-precision arithmetic will frequently differ from the same operation performed via single-precision arithmetic due to the greater precision of the former and due to rounding issues. Therefore, it is important to be sure to compare values of like precision and to express the results within a certain tolerance rather than expecting them to be exact.
6.3.2. Floating Point Math Is Not Associative
Each floating-point arithmetic operation involves a certain amount of rounding. Consequently, the order in which arithmetic operations are performed is important. If A, B, and C are floating-point values, (A+B)+C is not guaranteed to equal A+(B+C) as it is in symbolic math. When you parallelize computations, you potentially change the order of operations and therefore the parallel results might not match sequential results. This limitation is not specific to CUDA, but an inherent part of parallel computation on floating-point values.
6.3.3. IEEE 754 Compliance
All CUDA compute devices follow the IEEE 754 standard for binary floating-point representation, with some small exceptions. These exceptions, which are detailed in Features and Technical Specifications of the CUDA C++ Programming Guide, can lead to results that differ from IEEE 754 values computed on the host system.
One of the key differences is the fused multiply-add (FMA) instruction, which combines multiply-add operations into a single instruction execution. Its result will often differ slightly from results obtained by doing the two operations separately.
6.3.4. x86 80-bit Computations
x86 processors can use 80-bit double extended precision math when performing floating-point calculations. The results of these calculations can frequently differ from pure 64-bit operations performed on the CUDA device. To get a closer match between values, set the x86 host processor to use regular double or single precision (64 bits and 32 bits, respectively). This is done with the FLDCW x86 assembly instruction or the equivalent operating system API.
7. Optimizing CUDA Applications
After each round of application parallelization is complete, the developer can move to optimizing the implementation to improve performance. Since there are many possible optimizations that can be considered, having a good understanding of the needs of the application can help to make the process as smooth as possible. However, as with APOD as a whole, program optimization is an iterative process (identify an opportunity for optimization, apply and test the optimization, verify the speedup achieved, and repeat), meaning that it is not necessary for a programmer to spend large amounts of time memorizing the bulk of all possible optimization strategies prior to seeing good speedups. Instead, strategies can be applied incrementally as they are learned.
Optimizations can be applied at various levels, from overlapping data transfers with computation all the way down to fine-tuning floating-point operation sequences. The available profiling tools are invaluable for guiding this process, as they can help suggest a next-best course of action for the developer's optimization efforts and provide references into the relevant portions of the optimization section of this guide.
8. Performance Metrics
When attempting to optimize CUDA code, it pays to know how to measure performance accurately and to understand the role that bandwidth plays in performance measurement. This chapter discusses how to correctly measure performance using CPU timers and CUDA events. It then explores how bandwidth affects performance metrics and how to mitigate some of the challenges it poses.
8.1. Timing
CUDA calls and kernel executions can be timed using either CPU or GPU timers. This section examines the functionality, advantages, and pitfalls of both approaches.
8.1.1. Using CPU Timers
Any CPU timer can be used to measure the elapsed time of a CUDA call or kernel execution. The details of various CPU timing approaches are outside the scope of this document, but developers should always be aware of the resolution their timing calls provide.
When using CPU timers, it is critical to remember that many CUDA API functions are asynchronous; that is, they return control back to the calling CPU thread prior to completing their work. All kernel launches are asynchronous, as are memory-copy functions with the Async suffix on their names. Therefore, to accurately measure the elapsed time for a particular call or sequence of CUDA calls, it is necessary to synchronize the CPU thread with the GPU by calling cudaDeviceSynchronize() immediately before starting and stopping the CPU timer. cudaDeviceSynchronize() blocks the calling CPU thread until all CUDA calls previously issued by the thread are completed.
Although it is also possible to synchronize the CPU thread with a particular stream or event on the GPU, these synchronization functions are not suitable for timing code in streams other than the default stream. cudaStreamSynchronize() blocks the CPU thread until all CUDA calls previously issued into the given stream have completed. cudaEventSynchronize() blocks until a given event in a particular stream has been recorded by the GPU. Because the driver may interleave execution of CUDA calls from other non-default streams, calls in other streams may be included in the timing.
Because the default stream, stream 0, exhibits serializing behavior for work on the device (an operation in the default stream can begin only after all preceding calls in any stream have completed; and no subsequent operation in any stream can begin until it finishes), these functions can be used reliably for timing in the default stream.
Be aware that CPU-to-GPU synchronization points such as those mentioned in this section imply a stall in the GPUâs processing pipeline and should thus be used sparingly to minimize their performance impact.
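A sketch of that bracketing pattern, using std::chrono as the CPU timer and a placeholder kernel (both are illustrative, not from the guide):

#include <chrono>
#include <cuda_runtime.h>

__global__ void work(float *d, int n) {            // placeholder kernel
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

float timeKernelCpuMs(float *d_data, int n) {
    cudaDeviceSynchronize();                       // drain previously issued work
    auto t0 = std::chrono::high_resolution_clock::now();
    work<<<(n + 255) / 256, 256>>>(d_data, n);     // asynchronous launch
    cudaDeviceSynchronize();                       // wait until the kernel finishes
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<float, std::milli>(t1 - t0).count();
}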
8.1.2. Using CUDA GPU Timers
The CUDA event API provides calls that create and destroy events, record events (including a timestamp), and convert timestamp differences into a floating-point value in milliseconds. How to time code using CUDA events illustrates their use.
How to time code using CUDA events
cudaEvent_t start, stop;
float time;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord( start, 0 );
kernel<<<grid,threads>>> ( d_odata, d_idata, size_x, size_y,
NUM_REPS);
cudaEventRecord( stop, 0 );
cudaEventSynchronize( stop );
cudaEventElapsedTime( &time, start, stop );
cudaEventDestroy( start );
cudaEventDestroy( stop );
Here cudaEventRecord() is used to place the start and stop events into the default stream, stream 0. The device will record a timestamp for the event when it reaches that event in the stream. The cudaEventElapsedTime() function returns the time elapsed between the recording of the start and stop events. This value is expressed in milliseconds and has a resolution of approximately half a microsecond. Like the other calls in this listing, their specific operation, parameters, and return values are described in the CUDA Toolkit Reference Manual. Note that the timings are measured on the GPU clock, so the timing resolution is operating-system-independent.
8.2. Bandwidth
Bandwidth - the rate at which data can be transferred - is one of the most important gating factors for performance. Almost all changes to code should be made in the context of how they affect bandwidth. As described in Memory Optimizations of this guide, bandwidth can be dramatically affected by the choice of memory in which data is stored, how the data is laid out and the order in which it is accessed, as well as other factors.
To measure performance accurately, it is useful to calculate theoretical and effective bandwidth. When the latter is much lower than the former, design or implementation details are likely to reduce bandwidth, and it should be the primary goal of subsequent optimization efforts to increase it.
Note
High Priority: Use the effective bandwidth of your computation as a metric when measuring performance and optimization benefits.
8.2.1. Theoretical Bandwidth Calculation
Theoretical bandwidth can be calculated using hardware specifications available in the product literature. For example, the NVIDIA Tesla V100 uses HBM2 (double data rate) RAM with a memory clock rate of 877 MHz and a 4096-bit-wide memory interface.
Using these data items, the peak theoretical memory bandwidth of the NVIDIA Tesla V100 is 898 GB/s:
\(\left. \left( 0.877 \times 10^{9} \right. \times (4096/8) \times 2 \right) \div 10^{9} = 898\text{GB/s}\)
In this calculation, the memory clock rate is converted into Hz, multiplied by the interface width (divided by 8, to convert bits to bytes) and multiplied by 2 due to the double data rate. Finally, this product is divided by 10^9 to convert the result to GB/s.
Note
Some calculations use 1024^3 instead of 10^9 for the final calculation. In such a case, the bandwidth would be 836.4 GiB/s. It is important to use the same divisor when calculating theoretical and effective bandwidth so that the comparison is valid.
Note
On GPUs with GDDR memory with ECC enabled the available DRAM is reduced by 6.25% to allow for the storage of ECC bits. Fetching ECC bits for each memory transaction also reduced the effective bandwidth by approximately 20% compared to the same GPU with ECC disabled, though the exact impact of ECC on bandwidth can be higher and depends on the memory access pattern. HBM2 memories, on the other hand, provide dedicated ECC resources, allowing overhead-free ECC protection.2
8.2.2. Effective Bandwidth Calculation
Effective bandwidth is calculated by timing specific program activities and by knowing how data is accessed by the program. To do so, use this equation:
\(\text{Effective\ bandwidth} = \left( {\left( B_{r} + B_{w} \right) \div 10^{9}} \right) \div \text{time}\)
Here, the effective bandwidth is in units of GB/s, Br is the number of bytes read per kernel, Bw is the number of bytes written per kernel, and time is given in seconds.
For example, to compute the effective bandwidth of a 2048 x 2048 matrix copy, the following formula could be used:
\(\text{Effective\ bandwidth} = \left( {\left( 2048^{2} \times 4 \times 2 \right) \div 10^{9}} \right) \div \text{time}\)
The number of elements is multiplied by the size of each element (4 bytes for a float), multiplied by 2 (because of the read and write), divided by 10^9 (or 1024^3) to obtain GB (or GiB) of memory transferred. This number is divided by the time in seconds to obtain GB/s.
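A small host-side helper expressing that formula for the 2048 x 2048 copy (assuming the elapsed time was measured as described in the Timing section):

// Effective bandwidth in GB/s for a 2048 x 2048 float matrix copy,
// given the measured kernel time in milliseconds.
double effectiveBandwidthGBs(double elapsedMs) {
    const double N = 2048.0;
    const double bytesMoved = N * N * 4.0 * 2.0;   // 4 B per float, read + write
    return (bytesMoved / 1e9) / (elapsedMs / 1e3); // GB divided by seconds
}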
8.2.3. Throughput Reported by Visual Profiler
For devices with compute capability of 2.0 or greater, the Visual Profiler can be used to collect several different memory throughput measures. The following throughput metrics can be displayed in the Details or Detail Graphs view:
Requested Global Load Throughput
Requested Global Store Throughput
Global Load Throughput
Global Store Throughput
DRAM Read Throughput
DRAM Write Throughput
The Requested Global Load Throughput and Requested Global Store Throughput values indicate the global memory throughput requested by the kernel and therefore correspond to the effective bandwidth obtained by the calculation shown under Effective Bandwidth Calculation.
Because the minimum memory transaction size is larger than most word sizes, the actual memory throughput required for a kernel can include the transfer of data not used by the kernel. For global memory accesses, this actual throughput is reported by the Global Load Throughput and Global Store Throughput values.
It's important to note that both numbers are useful. The actual memory throughput shows how close the code is to the hardware limit, and a comparison of the effective or requested bandwidth to the actual bandwidth presents a good estimate of how much bandwidth is wasted by suboptimal coalescing of memory accesses (see Coalesced Access to Global Memory). For global memory accesses, this comparison of requested memory bandwidth to actual memory bandwidth is reported by the Global Memory Load Efficiency and Global Memory Store Efficiency metrics.
As an exception, scattered writes to HBM2 see some overhead from ECC but much less than the overhead with similar access patterns on ECC-protected GDDR5 memory.
9. Memory Optimizations
Memory optimizations are the most important area for performance. The goal is to maximize the use of the hardware by maximizing bandwidth. Bandwidth is best served by using as much fast memory and as little slow-access memory as possible. This chapter discusses the various kinds of memory on the host and device and how best to set up data items to use the memory effectively.
9.1. Data Transfer Between Host and Device
The peak theoretical bandwidth between the device memory and the GPU is much higher (898 GB/s on the NVIDIA Tesla V100, for example) than the peak theoretical bandwidth between host memory and device memory (16 GB/s on the PCIe x16 Gen3). Hence, for best overall application performance, it is important to minimize data transfer between the host and the device, even if that means running kernels on the GPU that do not demonstrate any speedup compared with running them on the host CPU.
Note
High Priority: Minimize data transfer between the host and the device, even if it means running some kernels on the device that do not show performance gains when compared with running them on the host CPU.
Intermediate data structures should be created in device memory, operated on by the device, and destroyed without ever being mapped by the host or copied to host memory.
Also, because of the overhead associated with each transfer, batching many small transfers into one larger transfer performs significantly better than making each transfer separately, even if doing so requires packing non-contiguous regions of memory into a contiguous buffer and then unpacking after the transfer.
Finally, higher bandwidth between the host and the device is achieved when using page-locked (or pinned) memory, as discussed in the CUDA C++ Programming Guide and the Pinned Memory section of this document.
9.1.1. Pinned Memory
Page-locked or pinned memory transfers attain the highest bandwidth between the host and the device. On PCIe x16 Gen3 cards, for example, pinned memory can attain roughly 12 GB/s transfer rates.
Pinned memory is allocated using the cudaHostAlloc() functions in the Runtime API. The bandwidthTest CUDA Sample shows how to use these functions as well as how to measure memory transfer performance.
For regions of system memory that have already been pre-allocated, cudaHostRegister() can be used to pin the memory on-the-fly without the need to allocate a separate buffer and copy the data into it.
Pinned memory should not be overused. Excessive use can reduce overall system performance because pinned memory is a scarce resource, but how much is too much is difficult to know in advance. Furthermore, the pinning of system memory is a heavyweight operation compared to most normal system memory allocations, so as with all optimizations, test the application and the systems it runs on for optimal performance parameters.
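A short sketch of the cudaHostRegister() pattern for a pre-existing allocation (buffer name and size are illustrative):

#include <cstdlib>
#include <cuda_runtime.h>

void pinExisting(float *d_buf, size_t n) {
    float *h_buf = (float *)malloc(n * sizeof(float));
    // pin the already-allocated region instead of copying it into a pinned buffer
    cudaHostRegister(h_buf, n * sizeof(float), cudaHostRegisterDefault);
    cudaMemcpyAsync(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaDeviceSynchronize();
    cudaHostUnregister(h_buf);
    free(h_buf);
}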
9.1.2. Asynchronous and Overlapping Transfers with Computation
Data transfers between the host and the device using cudaMemcpy() are blocking transfers; that is, control is returned to the host thread only after the data transfer is complete. The cudaMemcpyAsync() function is a non-blocking variant of cudaMemcpy() in which control is returned immediately to the host thread. In contrast with cudaMemcpy(), the asynchronous transfer version requires pinned host memory (see Pinned Memory), and it contains an additional argument, a stream ID. A stream is simply a sequence of operations that are performed in order on the device. Operations in different streams can be interleaved and in some cases overlapped - a property that can be used to hide data transfers between the host and the device.
Asynchronous transfers enable overlap of data transfers with computation in two different ways. On all CUDA-enabled devices, it is possible to overlap host computation with asynchronous data transfers and with device computations. For example, Asynchronous and Overlapping Transfers with Computation demonstrates how host computation in the routine cpuFunction() is performed while data is transferred to the device and a kernel using the device is executed.
Overlapping computation and data transfers
cudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, 0);
kernel<<<grid, block>>>(a_d);
cpuFunction();
The last argument to the cudaMemcpyAsync() function is the stream ID, which in this case uses the default stream, stream 0. The kernel also uses the default stream, and it will not begin execution until the memory copy completes; therefore, no explicit synchronization is needed. Because the memory copy and the kernel both return control to the host immediately, the host function cpuFunction() overlaps their execution.
In Asynchronous and Overlapping Transfers with Computation, the memory copy and kernel execution occur sequentially. On devices that are capable of concurrent copy and compute, it is possible to overlap kernel execution on the device with data transfers between the host and the device. Whether a device has this capability is indicated by the asyncEngineCount field of the cudaDeviceProp structure (or listed in the output of the deviceQuery CUDA Sample). On devices that have this capability, the overlap once again requires pinned host memory, and, in addition, the data transfer and kernel must use different, non-default streams (streams with non-zero stream IDs). Non-default streams are required for this overlap because memory copy, memory set functions, and kernel calls that use the default stream begin only after all preceding calls on the device (in any stream) have completed, and no operation on the device (in any stream) commences until they are finished.
Asynchronous and Overlapping Transfers with Computation illustrates the basic technique.
Concurrent copy and execute
cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
cudaMemcpyAsync(a_d, a_h, size, cudaMemcpyHostToDevice, stream1);
kernel<<<grid, block, 0, stream2>>>(otherData_d);
In this code, two streams are created and used in the data transfer and kernel executions as specified in the last arguments of the cudaMemcpyAsync call and the kernelâs execution configuration.
Asynchronous and Overlapping Transfers with Computation demonstrates how to overlap kernel execution with asynchronous data transfer. This technique could be used when the data dependency is such that the data can be broken into chunks and transferred in multiple stages, launching multiple kernels to operate on each chunk as it arrives. Sequential copy and execute and Staged concurrent copy and execute demonstrate this. They produce equivalent results. The first segment shows the reference sequential implementation, which transfers and operates on an array of N floats (where N is assumed to be evenly divisible by nThreads).
Sequential copy and execute
cudaMemcpy(a_d, a_h, N*sizeof(float), dir);
kernel<<<N/nThreads, nThreads>>>(a_d);
Staged concurrent copy and execute shows how the transfer and kernel execution can be broken up into nStreams stages. This approach permits some overlapping of the data transfer and execution.
Staged concurrent copy and execute
size=N*sizeof(float)/nStreams;
for (i=0; i<nStreams; i++) {
offset = i*N/nStreams;
cudaMemcpyAsync(a_d+offset, a_h+offset, size, dir, stream[i]);
kernel<<<N/(nThreads*nStreams), nThreads, 0,
stream[i]>>>(a_d+offset);
}
(In Staged concurrent copy and execute, it is assumed that N is evenly divisible by nThreads*nStreams.) Because execution within a stream occurs sequentially, none of the kernels will launch until the data transfers in their respective streams complete. Current GPUs can simultaneously process asynchronous data transfers and execute kernels. GPUs with a single copy engine can perform one asynchronous data transfer and execute kernels whereas GPUs with two copy engines can simultaneously perform one asynchronous data transfer from the host to the device, one asynchronous data transfer from the device to the host, and execute kernels. The number of copy engines on a GPU is given by the asyncEngineCount field of the cudaDeviceProp structure, which is also listed in the output of the deviceQuery CUDA Sample. (It should be mentioned that it is not possible to overlap a blocking transfer with an asynchronous transfer, because the blocking transfer occurs in the default stream, so it will not begin until all previous CUDA calls complete. It will not allow any other CUDA call to begin until it has completed.) A diagram depicting the timeline of execution for the two code segments is shown in Figure 1, and nStreams is equal to 4 for Staged concurrent copy and execute in the bottom half of the figure.
Figure 1: Timeline comparison for copy and kernel execution (top: sequential; bottom: concurrent)
For this example, it is assumed that the data transfer and kernel execution times are comparable. In such cases, and when the execution time (tE) exceeds the transfer time (tT), a rough estimate for the overall time is tE + tT/nStreams for the staged version versus tE + tT for the sequential version. If the transfer time exceeds the execution time, a rough estimate for the overall time is tT + tE/nStreams.
9.1.3. Zero Copy
Zero copy is a feature that was added in version 2.2 of the CUDA Toolkit. It enables GPU threads to directly access host memory. For this purpose, it requires mapped pinned (non-pageable) memory. On integrated GPUs (i.e., GPUs with the integrated field of the CUDA device properties structure set to 1), mapped pinned memory is always a performance gain because it avoids superfluous copies as integrated GPU and CPU memory are physically the same. On discrete GPUs, mapped pinned memory is advantageous only in certain cases. Because the data is not cached on the GPU, mapped pinned memory should be read or written only once, and the global loads and stores that read and write the memory should be coalesced. Zero copy can be used in place of streams because kernel-originated data transfers automatically overlap kernel execution without the overhead of setting up and determining the optimal number of streams.
Note
Low Priority: Use zero-copy operations on integrated GPUs for CUDA Toolkit version 2.2 and later.
The host code in Zero-copy host code shows how zero copy is typically set up.
Zero-copy host code
float *a_h, *a_map;
...
cudaGetDeviceProperties(&prop, 0);
if (!prop.canMapHostMemory)
exit(0);
cudaSetDeviceFlags(cudaDeviceMapHost);
cudaHostAlloc(&a_h, nBytes, cudaHostAllocMapped);
cudaHostGetDevicePointer(&a_map, a_h, 0);
kernel<<<gridSize, blockSize>>>(a_map);
In this code, the canMapHostMemory field of the structure returned by cudaGetDeviceProperties() is used to check that the device
supports mapping host memory to the deviceâs address space. Page-locked memory mapping is enabled by calling cudaSetDeviceFlags()
with cudaDeviceMapHost. Note that cudaSetDeviceFlags() must be called prior to setting a device or making a CUDA call that
requires state (that is, essentially, before a context is created). Page-locked mapped host memory is allocated using cudaHostAlloc(),
and the pointer to the mapped device address space is obtained via the function cudaHostGetDevicePointer(). In the code
in Zero-copy host code, kernel() can reference the mapped pinned host memory using the pointer a_map in exactly the
same way as it would if a_map referred to a location in device memory.
Note
Mapped pinned host memory allows you to overlap CPU-GPU memory transfers with computation while avoiding the use of CUDA streams. But since any repeated access to such memory areas causes repeated CPU-GPU transfers, consider creating a second area in device memory to manually cache the previously read host memory data.
9.1.4. Unified Virtual Addressing
Devices of compute capability 2.0 and later support a special addressing mode called Unified Virtual Addressing (UVA) on 64-bit Linux and Windows. With UVA, the host memory and the device memories of all installed supported devices share a single virtual address space.
Prior to UVA, an application had to keep track of which pointers referred to device memory (and for which device) and which referred to host memory as a separate bit of metadata (or as hard-coded information in the program) for each pointer. Using UVA, on the other hand, the physical memory space to which a pointer points can be determined simply by inspecting the value of the pointer using cudaPointerGetAttributes().
Under UVA, pinned host memory allocated with cudaHostAlloc() will have identical host and device pointers, so it is not necessary to call cudaHostGetDevicePointer() for such allocations. Host memory allocations pinned after-the-fact via cudaHostRegister(), however, will continue to have different device pointers than their host pointers, so cudaHostGetDevicePointer() remains necessary in that case.
UVA is also a necessary precondition for enabling peer-to-peer (P2P) transfer of data directly across the PCIe bus or NVLink for supported GPUs in supported configurations, bypassing host memory.
See the CUDA C++ Programming Guide for further explanations and software requirements for UVA and P2P.
9.2. Device Memory Spaces
CUDA devices use several memory spaces, which have different characteristics that reflect their distinct usages in CUDA applications. These
memory spaces include global, local, shared, texture, and registers, as shown in Figure 2.
Figure 2 Memory spaces on a CUDA device
Of these different memory spaces, global memory is the most plentiful; see Features and Technical Specifications of the CUDA C++ Programming Guide for the amounts of memory available in each memory space at each compute capability level. Global, local, and texture memory have the greatest access latency, followed by constant memory, shared memory, and the register file.
The various principal traits of the memory types are shown in Table 1.
Memory   | Location on/off chip | Cached | Access | Scope                | Lifetime
Register | On                   | n/a    | R/W    | 1 thread             | Thread
Local    | Off                  | Yes††  | R/W    | 1 thread             | Thread
Shared   | On                   | n/a    | R/W    | All threads in block | Block
Global   | Off                  | †      | R/W    | All threads + host   | Host allocation
Constant | Off                  | Yes    | R      | All threads + host   | Host allocation
Texture  | Off                  | Yes    | R      | All threads + host   | Host allocation
† Cached in L1 and L2 by default on devices of compute capability 6.0 and 7.x; cached only in L2 by default on devices of lower compute capabilities, though some allow opt-in to caching in L1 as well via compilation flags.
†† Cached in L1 and L2 by default except on devices of compute capability 5.x; devices of compute capability 5.x cache locals only in L2.
In the case of texture access, if a texture reference is bound to a linear array in global memory, then the device code can write to the underlying array. Texture references that are bound to CUDA arrays can be written to via surface-write operations (by binding a surface to the same underlying CUDA array storage). Reading from a texture while writing to its underlying global memory array in the same kernel launch should be avoided because the texture caches are read-only and are not invalidated when the associated global memory is modified.
9.2.1. Coalesced Access to Global Memory
A very important performance consideration in programming for CUDA-capable GPU architectures is the coalescing of global memory accesses. Global memory loads and stores by threads of a warp are coalesced by the device into as few as possible transactions.
Note
High Priority: Ensure global memory accesses are coalesced whenever possible.
The access requirements for coalescing depend on the compute capability of the device and are documented in the CUDA C++ Programming Guide.
For devices of compute capability 6.0 or higher, the requirements can be summarized quite easily: the concurrent accesses of the threads of a warp will coalesce into a number of transactions equal to the number of 32-byte transactions necessary to service all of the threads of the warp.
For certain devices of compute capability 5.2, L1-caching of accesses to global memory can be optionally enabled. If L1-caching is enabled on these devices, the number of required transactions is equal to the number of required 128-byte aligned segments.
Note
On devices of compute capability 6.0 or higher, L1 caching is the default; however, the data access unit is 32 bytes regardless of whether global loads are cached in L1 or not.
On devices with GDDR memory, accessing memory in a coalesced way is even more important when ECC is turned on. Scattered accesses increase ECC memory transfer overhead, especially when writing data to global memory.
Coalescing concepts are illustrated in the following simple examples. These examples assume compute capability 6.0 or higher and that accesses are for 4-byte words, unless otherwise noted.
9.2.1.1. A Simple Access Pattern
The first and simplest case of coalescing can be achieved by any CUDA-enabled device of compute capability 6.0 or higher: the k-th thread accesses the k-th word in a 32-byte aligned array. Not all threads need to participate.
For example, if the threads of a warp access adjacent 4-byte words (e.g., adjacent float values), four coalesced 32-byte
transactions will service that memory access. Such a pattern is shown in Figure 3.
Figure 3 Coalesced access
This access pattern results in four 32-byte transactions, indicated by the red rectangles.
If from any of the four 32-byte segments only a subset of the words are requested (e.g. if several threads had accessed the
same word or if some threads did not participate in the access), the full segment is fetched anyway. Furthermore, if accesses
by the threads of the warp had been permuted within or across the four segments, still only four 32-byte transactions would
have been performed by a device with compute capability 6.0 or higher.
9.2.1.2. A Sequential but Misaligned Access Pattern
If sequential threads in a warp access memory that is sequential but not aligned with a 32-byte segment, five 32-byte segments
will be requested, as shown in Figure 4.
Figure 4 Misaligned sequential addresses that fall within five 32-byte segments
Memory allocated through the CUDA Runtime API, such as via cudaMalloc(), is guaranteed to be aligned to at least 256 bytes. Therefore, choosing sensible thread block sizes, such as multiples of the warp size (i.e., 32 on current GPUs), facilitates memory accesses by warps that are properly aligned. (Consider what would happen to the memory addresses accessed by the second, third, and subsequent thread blocks if the thread block size was not a multiple of warp size, for example.)
9.2.1.3. Effects of Misaligned Accesses
It is easy and informative to explore the ramifications of misaligned accesses using a simple copy kernel, such as the one
in A copy kernel that illustrates misaligned accesses.
A copy kernel that illustrates misaligned accesses
__global__ void offsetCopy(float *odata, float* idata, int offset)
{
int xid = blockIdx.x * blockDim.x + threadIdx.x + offset;
odata[xid] = idata[xid];
}
In A copy kernel that illustrates misaligned accesses, data is copied from the input array idata to the output array, both
of which exist in global memory. The kernel is executed within a loop in host code that varies the parameter offset from 0 to 32
(for example, Figure 4 corresponds to one such misalignment).
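The host-side driver loop is not reproduced here; a minimal sketch of it might look as follows, where n, idata, and odata are illustrative names (the arrays are assumed to be allocated with enough slack that xid stays in bounds for every offset), and the block size of 256 and the use of cudaDeviceSynchronize() as a timing boundary are assumptions.
const int threadsPerBlock = 256;
const int blocks = n / threadsPerBlock;    // n assumed to be a multiple of 256
for (int offset = 0; offset <= 32; ++offset) {
    offsetCopy<<<blocks, threadsPerBlock>>>(odata, idata, offset);
    cudaDeviceSynchronize();               // so each offset can be timed separately
}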
The effective bandwidth for the copy with various offsets on an NVIDIA Tesla V100 (compute capability 7.0)
is shown in Figure 5.
Figure 5 Performance of offsetCopy kernel
For the NVIDIA Tesla V100, global memory accesses with no offset or with offsets that are multiples of 8 words result in four 32-byte transactions. The achieved bandwidth is approximately 790 GB/s. Otherwise, five 32-byte segments are loaded per warp, and we would expect approximately 4/5th of the memory throughput achieved with no offsets.
In this particular example, the offset memory throughput achieved is, however, approximately 9/10th, because adjacent warps reuse the cache lines their neighbors fetched. So while the impact is still evident it is not as large as we might have expected. It would have been more so if adjacent warps had not exhibited such a high degree of reuse of the over-fetched cache lines.
9.2.1.4. Strided Accesses
As seen above, in the case of misaligned sequential accesses, caches help to alleviate the performance impact. It may be different with non-unit-strided accesses, however, and this is a pattern that occurs frequently when dealing with multidimensional data or matrices. For this reason, ensuring that as much as possible of the data in each cache line fetched is actually used is an important part of performance optimization of memory accesses on these devices.
To illustrate the effect of strided access on effective bandwidth, see the kernel strideCopy() in A kernel to illustrate non-unit stride data copy, which copies data with a stride of stride elements between threads from idata to odata.
A kernel to illustrate non-unit stride data copy
__global__ void strideCopy(float *odata, float* idata, int stride)
{
int xid = (blockIdx.x*blockDim.x + threadIdx.x)*stride;
odata[xid] = idata[xid];
}
Figure 6 illustrates such a situation; in this case, threads within a warp access words in memory with a stride of 2. This action leads to a load of eight L2 cache segments per warp on the Tesla V100 (compute capability 7.0).
Figure 6 Adjacent threads accessing memory with a stride of 2
A stride of 2 results in 50% load/store efficiency, since half the elements in the transaction are not used and represent
wasted bandwidth. As the stride increases, the effective bandwidth decreases until the point where 32 32-byte segments are loaded
for the 32 threads in a warp, as indicated in Figure 7.
Figure 7 Performance of strideCopy kernel
As illustrated in Figure 7, non-unit-stride global memory accesses should be avoided whenever possible. One method for doing so utilizes shared memory, which is discussed in the next section.
9.2.2. L2 Cache
Starting with CUDA 11.0, devices of compute capability 8.0 and above have the capability to influence persistence of data in the L2 cache. Because L2 cache is on-chip, it potentially provides higher bandwidth and lower latency accesses to global memory.
For more details refer to the L2 Access Management section in the CUDA C++ Programming Guide.
9.2.2.1. L2 Cache Access Window
When a CUDA kernel accesses a data region in the global memory repeatedly, such data accesses can be considered to be persisting. On the other hand, if the data is only accessed once, such data accesses can be considered to be streaming. A portion of the L2 cache can be set aside for persistent accesses to a data region in global memory. If this set-aside portion is not used by persistent accesses, then streaming or normal data accesses can use it.
The L2 cache set-aside size for persisting accesses may be adjusted, within limits:
cudaDeviceProp prop;  // properties of the GPU identified by device_id
cudaGetDeviceProperties(&prop, device_id);
cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, prop.persistingL2CacheMaxSize); /* Set aside max possible size of L2 cache for persisting accesses */
Mapping of user data to L2 set-aside portion can be controlled using an access policy window on a CUDA stream or CUDA graph kernel node. The example below shows how to use the access policy window on a CUDA stream.
cudaStreamAttrValue stream_attribute; // Stream level attributes data structure
stream_attribute.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(ptr); // Global Memory data pointer
stream_attribute.accessPolicyWindow.num_bytes = num_bytes; // Number of bytes for persisting accesses.
// (Must be less than cudaDeviceProp::accessPolicyMaxWindowSize)
stream_attribute.accessPolicyWindow.hitRatio = 1.0; // Hint for L2 cache hit ratio for persisting accesses in the num_bytes region
stream_attribute.accessPolicyWindow.hitProp = cudaAccessPropertyPersisting; // Type of access property on cache hit
stream_attribute.accessPolicyWindow.missProp = cudaAccessPropertyStreaming; // Type of access property on cache miss.
//Set the attributes to a CUDA stream of type cudaStream_t
cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &stream_attribute);
The access policy window requires a value for hitRatio and num_bytes. Depending on the value of the num_bytes parameter and the size of L2 cache, one may need to tune the value of hitRatio to avoid thrashing of L2 cache lines.
9.2.2.2. Tuning the Access Window Hit-Ratio
The hitRatio parameter can be used to specify the fraction of accesses that receive the hitProp property. For example, if the hitRatio value is 0.6, 60% of the memory accesses in the global memory region [ptr..ptr+num_bytes) have the persisting property and 40% of the memory accesses have the streaming property. To understand the effect of hitRatio and num_bytes, we use a sliding window micro benchmark.
This microbenchmark uses a 1024 MB region in GPU global memory. First, we set aside 30 MB of the L2 cache for persisting accesses using cudaDeviceSetLimit(), as discussed above. Then, as shown in the figure below, we specify that the accesses to the first freqSize * sizeof(int) bytes of the memory region are persistent. This data will thus use the L2 set-aside portion. In our experiment, we vary the size of this persistent data region from 10 MB to 60 MB to model various scenarios where data fits in or exceeds the available L2 set-aside portion of 30 MB. Note that the NVIDIA Tesla A100 GPU has 40 MB of total L2 cache capacity. Accesses to the remaining data of the memory region (i.e., streaming data) are considered normal or streaming accesses and will thus use the remaining 10 MB of the non set-aside L2 portion (unless part of the L2 set-aside portion is unused).
Figure 8 Mapping Persistent data accesses to set-aside L2 in sliding window experiment
Consider the following kernel code and access window parameters, as the implementation of the sliding window experiment.
__global__ void kernel(int *data_persistent, int *data_streaming, int dataSize, int freqSize) {
int tid = blockIdx.x * blockDim.x + threadIdx.x;
/*Each CUDA thread accesses one element in the persistent data section
and one element in the streaming data section.
Because the size of the persistent memory region (freqSize * sizeof(int) bytes) is much
smaller than the size of the streaming memory region (dataSize * sizeof(int) bytes), data
in the persistent region is accessed more frequently*/
data_persistent[tid % freqSize] = 2 * data_persistent[tid % freqSize];
data_streaming[tid % dataSize] = 2 * data_streaming[tid % dataSize];
}
stream_attribute.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(data_persistent);
stream_attribute.accessPolicyWindow.num_bytes = freqSize * sizeof(int); //Number of bytes for persisting accesses in range 10-60 MB
stream_attribute.accessPolicyWindow.hitRatio = 1.0; //Hint for cache hit ratio. Fixed value 1.0
The performance of the above kernel is shown in the chart below. When the persistent data region fits well into the 30 MB set-aside portion of the L2 cache, a performance increase of as much as 50% is observed. However, once the size of this persistent data region exceeds the size of the L2 set-aside cache portion, approximately 10% performance drop is observed due to thrashing of L2 cache lines.
Figure 9 The performance of the sliding-window benchmark with fixed hit-ratio of 1.0
In order to optimize the performance, when the size of the persistent data is more than the size of the set-aside L2 cache portion, we tune the num_bytes and hitRatio parameters in the access window as below.
stream_attribute.accessPolicyWindow.base_ptr = reinterpret_cast<void*>(data_persistent);
stream_attribute.accessPolicyWindow.num_bytes = 20*1024*1024; //20 MB
stream_attribute.accessPolicyWindow.hitRatio = (20*1024*1024)/((float)freqSize*sizeof(int)); //Such that up to 20MB of data is resident.
We fix the num_bytes in the access window to 20 MB and tune the hitRatio such that a random 20 MB of the total persistent data is resident in the L2 set-aside cache portion. The remaining portion of this persistent data will be accessed using the streaming property. This helps in reducing cache thrashing. The results are shown in the chart below, where we see good performance regardless of whether the persistent data fits in the L2 set-aside or not.
Figure 10 The performance of the sliding-window benchmark with tuned hit-ratio
9.2.3. Shared Memory
Because it is on-chip, shared memory has much higher bandwidth and lower latency than local and global memory - provided there are no bank conflicts between the threads, as detailed in the following section.
9.2.3.1. Shared Memory and Memory Banks
To achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (banks) that can be accessed simultaneously. Therefore, any memory load or store of n addresses that spans n distinct memory banks can be serviced simultaneously, yielding an effective bandwidth that is n times as high as the bandwidth of a single bank.
However, if multiple addresses of a memory request map to the same memory bank, the accesses are serialized. The hardware splits a memory request that has bank conflicts into as many separate conflict-free requests as necessary, decreasing the effective bandwidth by a factor equal to the number of separate memory requests. The one exception here is when multiple threads in a warp address the same shared memory location, resulting in a broadcast. In this case, multiple broadcasts from different banks are coalesced into a single multicast from the requested shared memory locations to the threads.
To minimize bank conflicts, it is important to understand how memory addresses map to memory banks and how to optimally schedule memory requests.
On devices of compute capability 5.x or newer, each bank has a bandwidth of 32 bits every clock cycle, and successive 32-bit words are assigned to successive banks. The warp size is 32 threads and the number of banks is also 32, so bank conflicts can occur between any threads in the warp. See Compute Capability 5.x for further details.
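As an illustration (not taken from the guide), the fragment below, assumed to execute inside a kernel, contrasts a conflict-free warp access with a two-way-conflicting one; for 32-bit words, the bank that services a word is simply its word index modulo 32.
__shared__ float tile[64];
// Conflict-free: thread k of the warp reads word k, so the 32 threads of the
// warp hit 32 different banks (bank = k % 32).
float a = tile[threadIdx.x];
// Two-way bank conflict: with a stride of 2, threads k and k+16 both map to
// bank (2*k) % 32, so the request is split into two separate transactions.
float b = tile[threadIdx.x * 2];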
9.2.3.2. Shared Memory in Matrix Multiplication (C = AB)
Shared memory enables cooperation between threads in a block. When multiple threads in a block use the same data from global memory, shared memory can be used to access the data from global memory only once. Shared memory can also be used to avoid uncoalesced memory accesses by loading and storing data in a coalesced pattern from global memory and then reordering it in shared memory. Aside from memory bank conflicts, there is no penalty for non-sequential or unaligned accesses by a warp in shared memory.
The use of shared memory is illustrated via the simple example of a matrix multiplication C = AB for the case with A of dimension Mxw, B of dimension wxN, and C of dimension MxN. To keep the kernels simple, M and N are multiples of 32, since the warp size (w) is 32 for current devices.
A natural decomposition of the problem is to use a block and tile size of wxw threads. Therefore, in terms of wxw tiles, A is a column matrix, B is a row matrix, and C is their outer product; see Figure 11. A grid of N/w by M/w blocks is launched, where each thread block calculates the elements of a different tile in C from a single tile of A and a single tile of B.
Figure 11 Block-column matrix multiplied by block-row matrix. Block-column matrix (A) multiplied by block-row matrix (B) with resulting product matrix (C).
To do this, the simpleMultiply kernel (Unoptimized matrix multiplication) calculates the output elements of a tile of matrix C.
Unoptimized matrix multiplication
__global__ void simpleMultiply(float *a, float* b, float *c,
int N)
{
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
float sum = 0.0f;
for (int i = 0; i < TILE_DIM; i++) {
sum += a[row*TILE_DIM+i] * b[i*N+col];
}
c[row*N+col] = sum;
}
In Unoptimized matrix multiplication, a, b, and c are pointers to global memory for the matrices A, B, and C, respectively; blockDim.x, blockDim.y, and TILE_DIM are all equal to w. Each thread in the wxw-thread block calculates one element in a tile of C. row and col are the row and column of the element in C being calculated by a particular thread. The for loop over i multiplies a row of A by a column of B, which is then written to C.
The effective bandwidth of this kernel is 119.9 GB/s on an NVIDIA Tesla V100. To analyze performance, it is necessary to consider how warps access global memory in the for loop. Each warp of threads calculates one row of a tile of C, which depends on a single row of A and an entire tile of B as illustrated in Figure 12.
Figure 12 Computing a row of a tile. Computing a row of a tile in C using one row of A and an entire tile of B.
For each iteration i of the for loop, the threads in a warp read a row of the B tile, which is a sequential and coalesced access for all compute capabilities.
However, for each iteration i, all threads in a warp read the same value from global memory for matrix A, as the index row*TILE_DIM+i is constant within a warp. Even though such an access requires only 1 transaction on devices of compute capability 2.0 or higher, there is wasted bandwidth in the transaction, because only one 4-byte word out of 8 words in a 32-byte cache segment is used. We can reuse this cache line in subsequent iterations of the loop, and we would eventually utilize all 8 words; however, when many warps execute on the same multiprocessor simultaneously, as is generally the case, the cache line may easily be evicted from the cache between iterations i and i+1.
The performance on a device of any compute capability can be improved by reading a tile of A into shared memory as shown
in Using shared memory to improve the global memory load efficiency in matrix multiplication.
Using shared memory to improve the global memory load efficiency in matrix multiplication
__global__ void coalescedMultiply(float *a, float* b, float *c,
int N)
{
__shared__ float aTile[TILE_DIM][TILE_DIM];
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
float sum = 0.0f;
aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
__syncwarp();
for (int i = 0; i < TILE_DIM; i++) {
sum += aTile[threadIdx.y][i]* b[i*N+col];
}
c[row*N+col] = sum;
}
In Using shared memory to improve the global memory load efficiency in matrix multiplication, each element in a tile of A is read from global memory only once, in a fully coalesced fashion (with no wasted bandwidth), to shared memory. Within each iteration of the for loop, a value in shared memory is broadcast to all threads in a warp. Instead of a __syncthreads() synchronization barrier call, a __syncwarp() is sufficient after reading the tile of A into shared memory because only threads within the warp that write the data into shared memory read this data. This kernel has an effective bandwidth of 144.4 GB/s on an NVIDIA Tesla V100. This illustrates the use of the shared memory as a user-managed cache when the hardware L1 cache eviction policy does not match up well with the needs of the application or when L1 cache is not used for reads from global memory.
A further improvement can be made to how Using shared memory to improve the global memory load efficiency in matrix multiplication deals with matrix B. In calculating each of the rows of a tile of matrix C, the entire tile of B is read. The repeated reading of the B tile can be eliminated by reading it into shared memory once (Improvement by reading additional data into shared memory).
Improvement by reading additional data into shared memory
__global__ void sharedABMultiply(float *a, float* b, float *c,
int N)
{
__shared__ float aTile[TILE_DIM][TILE_DIM],
bTile[TILE_DIM][TILE_DIM];
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
float sum = 0.0f;
aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
bTile[threadIdx.y][threadIdx.x] = b[threadIdx.y*N+col];
__syncthreads();
for (int i = 0; i < TILE_DIM; i++) {
sum += aTile[threadIdx.y][i]* bTile[i][threadIdx.x];
}
c[row*N+col] = sum;
}
Note that in Improvement by reading additional data into shared memory, a __syncthreads() call is required after reading the B tile because a warp reads data from shared memory that were written to shared memory by different warps. The effective bandwidth of this routine is 195.5 GB/s on an NVIDIA Tesla V100. Note that the performance improvement is not due to improved coalescing in either case, but to avoiding redundant transfers from global memory.
The results of the various optimizations are summarized in Table 2.
Optimization                                                     | NVIDIA Tesla V100
No optimization                                                  | 119.9 GB/s
Coalesced using shared memory to store a tile of A               | 144.4 GB/s
Using shared memory to eliminate redundant reads of a tile of B  | 195.5 GB/s
Note
Medium Priority: Use shared memory to avoid redundant transfers from global memory.
9.2.3.3. Shared Memory in Matrix Multiplication (C = AAᵀ)
A variant of the previous matrix multiplication can be used to illustrate how strided accesses to global memory, as well as shared memory bank conflicts, are handled. This variant simply uses the transpose of A in place of B, so C = AAᵀ.
A simple implementation for C = AAᵀ is shown in Unoptimized handling of strided accesses to global memory.
Unoptimized handling of strided accesses to global memory
__global__ void simpleMultiply(float *a, float *c, int M)
{
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
float sum = 0.0f;
for (int i = 0; i < TILE_DIM; i++) {
sum += a[row*TILE_DIM+i] * a[col*TILE_DIM+i];
}
c[row*M+col] = sum;
}
In the example above, the row-th, col-th element of C is obtained by taking the dot product of the row-th and col-th rows of A. The effective bandwidth for this kernel is 12.8 GB/s on an NVIDIA Tesla V100. These results are substantially lower than the corresponding measurements for the C = AB kernel. The difference is in how threads in a half warp access elements of A in the second term, a[col*TILE_DIM+i], for each iteration i. For a warp of threads, col represents sequential columns of the transpose of A, and therefore col*TILE_DIM represents a strided access of global memory with a stride of w, resulting in plenty of wasted bandwidth.
The way to avoid strided access is to use shared memory as before, except in this case a warp reads a row of A into a column of a shared memory tile, as
shown in An optimized handling of strided accesses using coalesced reads from global memory.
An optimized handling of strided accesses using coalesced reads from global memory
__global__ void coalescedMultiply(float *a, float *c, int M)
{
__shared__ float aTile[TILE_DIM][TILE_DIM],
transposedTile[TILE_DIM][TILE_DIM];
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
float sum = 0.0f;
aTile[threadIdx.y][threadIdx.x] = a[row*TILE_DIM+threadIdx.x];
transposedTile[threadIdx.x][threadIdx.y] =
a[(blockIdx.x*blockDim.x + threadIdx.y)*TILE_DIM +
threadIdx.x];
__syncthreads();
for (int i = 0; i < TILE_DIM; i++) {
sum += aTile[threadIdx.y][i]* transposedTile[i][threadIdx.x];
}
c[row*M+col] = sum;
}
An optimized handling of strided accesses using coalesced reads from global memory uses the shared transposedTile to avoid uncoalesced accesses in the second term in the dot product and the shared aTile technique from the previous example to avoid uncoalesced accesses in the first term. The effective bandwidth of this kernel is 140.2 GB/s on an NVIDIA Tesla V100. These results are lower than those obtained by the final kernel for C = AB. The cause of the difference is shared memory bank conflicts.
The reads of elements in transposedTile within the for loop are free of conflicts, because threads of each half warp read across rows of the tile, resulting in unit stride across the banks. However, bank conflicts occur when copying the tile from global memory into shared memory. To enable the loads from global memory to be coalesced, data are read from global memory sequentially. However, this requires writing to shared memory in columns, and because of the use of wxw tiles in shared memory, this results in a stride between threads of w banks - every thread of the warp hits the same bank (Recall that w is selected as 32). These many-way bank conflicts are very expensive. The simple remedy is to pad the shared memory array so that it has an extra column, as in the following line of code.
__shared__ float transposedTile[TILE_DIM][TILE_DIM+1];
This padding eliminates the conflicts entirely, because now the stride between threads is w+1 banks (i.e., 33 for current devices), which, due to modulo arithmetic used to compute bank indices, is equivalent to a unit stride. After this change, the effective bandwidth is 199.4 GB/s on an NVIDIA Tesla V100, which is comparable to the results from the last C = AB kernel.
The results of these optimizations are summarized in Table 3.
Optimization                                  | NVIDIA Tesla V100
No optimization                               | 12.8 GB/s
Using shared memory to coalesce global reads  | 140.2 GB/s
Removing bank conflicts                       | 199.4 GB/s
These results should be compared with those in Table 2. As can be seen from these tables, judicious use of shared memory can dramatically improve performance.
The examples in this section have illustrated three reasons to use shared memory:
To enable coalesced accesses to global memory, especially to avoid large strides (for general matrices, strides are much larger than 32)
To eliminate (or reduce) redundant loads from global memory
To avoid wasted bandwidth
9.2.3.4. Asynchronous Copy from Global Memory to Shared Memory
CUDA 11.0 introduces an async-copy feature that can be used within device code to explicitly manage the asynchronous copying of data from global memory to shared memory. This feature enables CUDA kernels to overlap copying data from global to shared memory with computation. It also avoids an intermediary register file access traditionally present between the global memory read and the shared memory write.
For more details refer to the memcpy_async section in the CUDA C++ Programming Guide.
To understand the performance difference between synchronous copy and asynchronous copy of data from global memory to shared memory, consider the following micro benchmark CUDA kernels for demonstrating the synchronous and asynchronous approaches. Asynchronous copies are hardware accelerated for NVIDIA A100 GPU.
template <typename T>
__global__ void pipeline_kernel_sync(T *global, uint64_t *clock, size_t copy_count) {
extern __shared__ char s[];
T *shared = reinterpret_cast<T *>(s);
uint64_t clock_start = clock64();
for (size_t i = 0; i < copy_count; ++i) {
shared[blockDim.x * i + threadIdx.x] = global[blockDim.x * i + threadIdx.x];
}
uint64_t clock_end = clock64();
atomicAdd(reinterpret_cast<unsigned long long *>(clock),
clock_end - clock_start);
}
template <typename T>
__global__ void pipeline_kernel_async(T *global, uint64_t *clock, size_t copy_count) {
extern __shared__ char s[];
T *shared = reinterpret_cast<T *>(s);
uint64_t clock_start = clock64();
  // Requires #include <cuda_pipeline.h> for the __pipeline_* primitives.
for (size_t i = 0; i < copy_count; ++i) {
__pipeline_memcpy_async(&shared[blockDim.x * i + threadIdx.x],
&global[blockDim.x * i + threadIdx.x], sizeof(T));
}
__pipeline_commit();
__pipeline_wait_prior(0);
uint64_t clock_end = clock64();
atomicAdd(reinterpret_cast<unsigned long long *>(clock),
clock_end - clock_start);
}
The synchronous version of the kernel loads an element from global memory to an intermediate register and then stores the intermediate register value to shared memory. In the asynchronous version of the kernel, instructions to load from global memory and store directly into shared memory are issued as soon as the __pipeline_memcpy_async() function is called. The __pipeline_wait_prior(0) call waits until all the instructions in the pipe object have been executed. Using asynchronous copies does not use any intermediate register. Not using intermediate registers can help reduce register pressure and can increase kernel occupancy. Data copied from global memory to shared memory using asynchronous copy instructions can be cached in the L1 cache, or the L1 cache can be optionally bypassed. If individual CUDA threads are copying elements of 16 bytes, the L1 cache can be bypassed. This difference is illustrated in Figure 13.
Figure 13 Comparing Synchronous vs Asynchronous Copy from Global Memory to Shared Memory
We evaluate the performance of both kernels using elements of size 4B, 8B and 16B per thread, i.e., using int, int2 and int4 for the template parameter. We adjust the copy_count in the kernels such that each thread block copies from 512 bytes up to 48 MB. The performance of the kernels is shown in Figure 14.
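For reference, a launch of these template kernels might look like the sketch below; the grid size, block size, and copy_count value are illustrative, and global_ptr and clock_ptr stand in for suitably allocated device buffers. The dynamic shared memory request must cover blockDim.x * copy_count elements.
const int blocks  = 80;                          // illustrative grid size
const int threads = 128;
const size_t copy_count = 16;                    // elements copied per thread
const size_t smemBytes  = threads * copy_count * sizeof(int4);
pipeline_kernel_async<int4><<<blocks, threads, smemBytes>>>(global_ptr, clock_ptr, copy_count);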
Figure 14 Comparing Performance of Synchronous vs Asynchronous Copy from Global Memory to Shared Memory
From the performance chart, the following observations can be made for this experiment.
Best performance with synchronous copy is achieved when the copy_count parameter is a multiple of 4 for all three element sizes. The compiler can optimize groups of 4 load and store instructions. This is evident from the saw tooth curves.
Asynchronous copy achieves better performance in nearly all cases.
The async-copy version does not require the copy_count parameter to be a multiple of 4 to maximize performance, because it does not rely on these compiler optimizations.
Overall, best performance is achieved when using asynchronous copies with an element of size 8 or 16 bytes.
9.2.4. Local Memory
Local memory is so named because its scope is local to the thread, not because of its physical location. In fact, local memory is off-chip. Hence, access to local memory is as expensive as access to global memory. In other words, the term local in the name does not imply faster access.
Local memory is used only to hold automatic variables. This is done by the nvcc compiler when it determines that there is insufficient register space to hold the variable. Automatic variables that are likely to be placed in local memory are large structures or arrays that would consume too much register space and arrays that the compiler determines may be indexed dynamically.
Inspection of the PTX assembly code (obtained by compiling with -ptx or -keep command-line options to nvcc) reveals whether a variable has been placed in local memory during the first compilation phases. If it has, it will be declared using the .local mnemonic and accessed using the ld.local and st.local mnemonics. If it has not, subsequent compilation phases might still decide otherwise, if they find the variable consumes too much register space for the targeted architecture. There is no way to check this for a specific variable, but the compiler reports total local memory usage per kernel (lmem) when run with the --ptxas-options=-v option.
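For example, the two commands below (the file name is illustrative) emit the PTX for inspection and have ptxas report the per-kernel register and local memory usage, respectively:
nvcc -ptx mykernel.cu
nvcc -c --ptxas-options=-v mykernel.cu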
9.2.5. Texture Memory
The read-only texture memory space is cached. Therefore, a texture fetch costs one device memory read only on a cache miss; otherwise, it just costs one read from the texture cache. The texture cache is optimized for 2D spatial locality, so threads of the same warp that read texture addresses that are close together will achieve best performance. Texture memory is also designed for streaming fetches with a constant latency; that is, a cache hit reduces DRAM bandwidth demand, but not fetch latency.
In certain addressing situations, reading device memory through texture fetching can be an advantageous alternative to reading device memory from global or constant memory.
9.2.5.1. Additional Texture Capabilities
If textures are fetched using tex1D(),tex2D(), or tex3D() rather than tex1Dfetch(), the hardware provides other capabilities that might be useful for some applications such as image processing, as shown in Table 4.
Feature                        | Use                                               | Caveat
Filtering                      | Fast, low-precision interpolation between texels  | Valid only if the texture reference returns floating-point data
Normalized texture coordinates | Resolution-independent coding                     | None
Addressing modes               | Automatic handling of boundary cases¹             | Can be used only with normalized texture coordinates
1 The automatic handling of boundary cases in the bottom row of Table 4 refers to how a texture coordinate is resolved when it falls outside the valid addressing range. There are two options: clamp and wrap. If x is the coordinate and N is the number of texels for a one-dimensional texture, then with clamp, x is replaced by 0 if x < 0 and by 1-1/N if 1 ≤ x. With wrap, x is replaced by frac(x), where frac(x) = x - floor(x) and floor(x) is the largest integer less than or equal to x. So, in clamp mode where N = 1, an x of 1.3 is clamped to 1.0, whereas in wrap mode it is converted to 0.3.
Within a kernel call, the texture cache is not kept coherent with respect to global memory writes, so texture fetches from addresses that have been written via global stores in the same kernel call return undefined data. That is, a thread can safely read a memory location via texture if the location has been updated by a previous kernel call or memory copy, but not if it has been previously updated by the same thread or another thread within the same kernel call.
9.2.6. Constant Memory
There is a total of 64 KB constant memory on a device. The constant memory space is cached. As a result, a read from constant memory costs one memory read from device memory only on a cache miss; otherwise, it just costs one read from the constant cache. Accesses to different addresses by threads within a warp are serialized, thus the cost scales linearly with the number of unique addresses read by all threads within a warp. As such, the constant cache is best when threads in the same warp access only a few distinct locations. If all threads of a warp access the same location, then constant memory can be as fast as a register access.
9.2.7. Registers
Generally, accessing a register consumes zero extra clock cycles per instruction, but delays may occur due to register read-after-write dependencies and register memory bank conflicts.
The compiler and hardware thread scheduler will schedule instructions as optimally as possible to avoid register memory bank conflicts. An application has no direct control over these bank conflicts. In particular, there is no register-related reason to pack data into vector data types such as float4 or int4 types.
9.2.7.1. Register Pressure
Register pressure occurs when there are not enough registers available for a given task. Even though each multiprocessor contains thousands of 32-bit registers (see Features and Technical Specifications of the CUDA C++ Programming Guide), these are partitioned among concurrent threads. To prevent the compiler from allocating too many registers, use the -maxrregcount=N compiler command-line option or the launch bounds kernel definition qualifier (see Execution Configuration of the CUDA C++ Programming Guide) to control the maximum number of registers allocated per thread.
9.3. Allocation
Device memory allocation and de-allocation via cudaMalloc() and cudaFree() are expensive operations. It is recommended to use cudaMallocAsync() and cudaFreeAsync(), which are stream-ordered pool allocators, to manage device memory.
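A minimal sketch of the stream-ordered allocation pattern follows; the buffer size nBytes, the stream, the launch configuration, and the kernel name myKernel are illustrative assumptions.
float *d_buf;
// Allocation, use and deallocation are all ordered on the same stream, so the
// free is deferred until the kernel that uses the buffer has completed.
cudaMallocAsync(&d_buf, nBytes, stream);
myKernel<<<grid, block, 0, stream>>>(d_buf);
cudaFreeAsync(d_buf, stream);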
9.4. NUMA Best Practices
Some recent Linux distributions enable automatic NUMA balancing (or "AutoNUMA") by default. In some instances, operations performed by automatic NUMA balancing may degrade the performance of applications running on NVIDIA GPUs. For optimal performance, users should manually tune the NUMA characteristics of their application.
The optimal NUMA tuning will depend on the characteristics and desired hardware affinities of each application and node, but in general applications computing on NVIDIA GPUs are advised to choose a policy that disables automatic NUMA balancing. For example, on IBM Newell POWER9 nodes (where the CPUs correspond to NUMA nodes 0 and 8), use:
numactl --membind=0,8
to bind memory allocations to the CPUs.
10. Execution Configuration Optimizations
One of the keys to good performance is to keep the multiprocessors on the device as busy as possible. A device in which work is poorly balanced across the multiprocessors will deliver suboptimal performance. Hence, it's important to design your application to use threads and blocks in a way that maximizes hardware utilization and to limit practices that impede the free distribution of work. A key concept in this effort is occupancy, which is explained in the following sections.
Hardware utilization can also be improved in some cases by designing your application so that multiple, independent kernels can execute at the same time. Multiple kernels executing at the same time is known as concurrent kernel execution. Concurrent kernel execution is described below.
Another important concept is the management of system resources allocated for a particular task. How to manage this resource utilization is discussed in the final sections of this chapter.
10.1. Occupancy
Thread instructions are executed sequentially in CUDA, and, as a result, executing other warps when one warp is paused or stalled is the only way to hide latencies and keep the hardware busy. Some metric related to the number of active warps on a multiprocessor is therefore important in determining how effectively the hardware is kept busy. This metric is occupancy.
Occupancy is the ratio of the number of active warps per multiprocessor to the maximum number of possible active warps. (To determine the latter number, see the deviceQuery CUDA Sample or refer to Compute Capabilities.) Another way to view occupancy is the percentage of the hardware's ability to process warps that is actively in use.
Higher occupancy does not always equate to higher performance; there is a point above which additional occupancy does not improve performance. However, low occupancy always interferes with the ability to hide memory latency, resulting in performance degradation.
Per thread resources required by a CUDA kernel might limit the maximum block size in an unwanted way. In order to maintain forward compatibility to future hardware and toolkits and to ensure that at least one thread block can run on an SM, developers should include the single argument __launch_bounds__(maxThreadsPerBlock) which specifies the largest block size that the kernel will be launched with. Failure to do so could lead to "too many resources requested for launch" errors. Providing the two argument version of __launch_bounds__(maxThreadsPerBlock,minBlocksPerMultiprocessor) can improve performance in some cases. The right value for minBlocksPerMultiprocessor should be determined using a detailed per kernel analysis.
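For illustration (the kernels and the value of 2 for minBlocksPerMultiprocessor below are assumptions, not recommendations), the two forms of the qualifier look like this:
// Single-argument form: the kernel is never launched with more than 256
// threads per block, so the compiler can budget registers for that limit.
__global__ void __launch_bounds__(256) boundedKernel(float *data)
{
    // ...
}

// Two-argument form: additionally ask for at least 2 resident blocks per SM;
// the right value should come from a detailed per-kernel analysis.
__global__ void __launch_bounds__(256, 2) boundedKernel2(float *data)
{
    // ...
}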
10.1.1. Calculating Occupancy
One of several factors that determine occupancy is register availability. Register storage enables threads to keep local variables nearby for low-latency access. However, the set of registers (known as the register file) is a limited commodity that all threads resident on a multiprocessor must share. Registers are allocated to an entire block all at once. So, if each thread block uses many registers, the number of thread blocks that can be resident on a multiprocessor is reduced, thereby lowering the occupancy of the multiprocessor. The maximum number of registers per thread can be set manually at compilation time per-file using the -maxrregcount option or per-kernel using the __launch_bounds__ qualifier (see Register Pressure).
For purposes of calculating occupancy, the number of registers used by each thread is one of the key factors. For example, on devices of CUDA Compute Capability 7.0 each multiprocessor has 65,536 32-bit registers and can have a maximum of 2048 simultaneous threads resident (64 warps x 32 threads per warp). This means that in one of these devices, for a multiprocessor to have 100% occupancy, each thread can use at most 32 registers. However, this approach of determining how register count affects occupancy does not take into account the register allocation granularity. For example, on a device of compute capability 7.0, a kernel with 128-thread blocks using 37 registers per thread results in an occupancy of 75% with 12 active 128-thread blocks per multi-processor, whereas a kernel with 320-thread blocks using the same 37 registers per thread results in an occupancy of 63% because only four 320-thread blocks can reside on a multiprocessor. Furthermore, register allocations are rounded up to the nearest 256 registers per warp.
The number of registers available, the maximum number of simultaneous threads resident on each multiprocessor, and the register allocation granularity vary over different
compute capabilities. Because of these nuances in register allocation and the fact that a multiprocessor's shared memory is also partitioned between resident thread blocks,
the exact relationship between register usage and occupancy can be difficult to determine. The --ptxas-options=-v option of nvcc details the number of registers
used per thread for each kernel. See Hardware Multithreading of the CUDA C++ Programming Guide for the register allocation formulas for devices of various compute
capabilities and Features and Technical Specifications of the CUDA C++ Programming Guide for the total number of registers available on those devices. Alternatively,
NVIDIA provides an occupancy calculator as part of Nsight Compute; refer to https://docs.nvidia.com/nsight-compute/NsightCompute/index.html#occupancy-calculator.
Figure 15 Using the CUDA Occupancy Calculator to project GPU multiprocessor occupancy
An application can also use the Occupancy API from the CUDA Runtime, e.g. cudaOccupancyMaxActiveBlocksPerMultiprocessor, to dynamically select launch configurations based on runtime parameters.
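A sketch of such a query follows; myKernel, the candidate block size, and the assumption of no dynamic shared memory are illustrative.
int blockSize = 256;                        // candidate block size
int maxActiveBlocks = 0;
cudaOccupancyMaxActiveBlocksPerMultiprocessor(&maxActiveBlocks, myKernel,
                                              blockSize, 0 /* dynamic smem */);
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
// Occupancy = active warps / maximum resident warps per multiprocessor.
float occupancy = (maxActiveBlocks * blockSize / 32.0f)
                / (prop.maxThreadsPerMultiProcessor / 32.0f);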
10.2. Hiding Register Dependencies
Note
Medium Priority: To hide latency arising from register dependencies, maintain sufficient numbers of active threads per multiprocessor (i.e., sufficient occupancy).
Register dependencies arise when an instruction uses a result stored in a register written by an instruction before it. The latency of most arithmetic instructions is typically 4 cycles on devices of compute capability 7.0. So threads must wait approximately 4 cycles before using an arithmetic result. However, this latency can be completely hidden by the execution of threads in other warps. See Registers for details.
10.3. Thread and Block Heuristics
Note
Medium Priority: The number of threads per block should be a multiple of 32 threads, because this provides optimal computing efficiency and facilitates coalescing.
The dimension and size of blocks per grid and the dimension and size of threads per block are both important factors. The multidimensional aspect of these parameters allows easier mapping of multidimensional problems to CUDA and does not play a role in performance. As a result, this section discusses size but not dimension.
Latency hiding and occupancy depend on the number of active warps per multiprocessor, which is implicitly determined by the execution parameters along with resource (register and shared memory) constraints. Choosing execution parameters is a matter of striking a balance between latency hiding (occupancy) and resource utilization.
Choosing the execution configuration parameters should be done in tandem; however, there are certain heuristics that apply to each parameter individually. When choosing the first execution configuration parameter, the number of blocks per grid (or grid size), the primary concern is keeping the entire GPU busy. The number of blocks in a grid should be larger than the number of multiprocessors so that all multiprocessors have at least one block to execute. Furthermore, there should be multiple active blocks per multiprocessor so that blocks that aren't waiting for a __syncthreads() can keep the hardware busy. This recommendation is subject to resource availability; therefore, it should be determined in the context of the second execution parameter, the number of threads per block (or block size), as well as shared memory usage. To scale to future devices, the number of blocks per kernel launch should be in the thousands.
When choosing the block size, it is important to remember that multiple concurrent blocks can reside on a multiprocessor, so occupancy is not determined by block size alone. In particular, a larger block size does not imply a higher occupancy.
As mentioned in Occupancy, higher occupancy does not always equate to better performance. For example, improving occupancy from 66 percent to 100 percent generally does not translate to a similar increase in performance. A lower occupancy kernel will have more registers available per thread than a higher occupancy kernel, which may result in less register spilling to local memory; in particular, with a high degree of exposed instruction-level parallelism (ILP) it is, in some cases, possible to fully cover latency with a low occupancy.
There are many such factors involved in selecting block size, and inevitably some experimentation is required. However, a few rules of thumb should be followed:
Threads per block should be a multiple of warp size to avoid wasting computation on under-populated warps and to facilitate coalescing.
A minimum of 64 threads per block should be used, and only if there are multiple concurrent blocks per multiprocessor.
Between 128 and 256 threads per block is a good initial range for experimentation with different block sizes.
Use several smaller thread blocks rather than one large thread block per multiprocessor if latency affects performance. This is particularly beneficial to kernels that frequently call __syncthreads().
Note that when a thread block allocates more registers than are available on a multiprocessor, the kernel launch fails, as it will when too much shared memory or too many threads are requested.
10.4. Effects of Shared Memory
Shared memory can be helpful in several situations, such as helping to coalesce or eliminate redundant access to global memory. However, it also can act as a constraint on occupancy. In many cases, the amount of shared memory required by a kernel is related to the block size that was chosen, but the mapping of threads to shared memory elements does not need to be one-to-one. For example, it may be desirable to use a 64x64 element shared memory array in a kernel, but because the maximum number of threads per block is 1024, it is not possible to launch a kernel with 64x64 threads per block. In such cases, kernels with 32x32 or 64x16 threads can be launched with each thread processing four elements of the shared memory array. The approach of using a single thread to process multiple elements of a shared memory array can be beneficial even if limits such as threads per block are not an issue. This is because some operations common to each element can be performed by the thread once, amortizing the cost over the number of shared memory elements processed by the thread.
A useful technique to determine the sensitivity of performance to occupancy is through experimentation with the amount of dynamically allocated shared memory, as specified in the third parameter of the execution configuration. By simply increasing this parameter (without modifying the kernel), it is possible to effectively reduce the occupancy of the kernel and measure its effect on performance.
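A sketch of such an experiment is shown below; myKernel, the launch configuration, and the 4 KB step are illustrative, and the kernel itself never touches the extra dynamic shared memory, which is requested purely to lower occupancy.
// Sweep the dynamic shared memory request without modifying the kernel; each
// extra allocation reduces the number of blocks that can reside on an SM.
for (size_t extraSmem = 0; extraSmem <= 32 * 1024; extraSmem += 4 * 1024) {
    myKernel<<<grid, block, extraSmem>>>(d_data);
    cudaDeviceSynchronize();    // time each configuration separately
}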
10.5. Concurrent Kernel Execution
As described in Asynchronous and Overlapping Transfers with Computation, CUDA streams can be used to overlap kernel execution with data transfers. On devices that are capable of concurrent kernel execution, streams can also be used to execute multiple kernels simultaneously to more fully take advantage of the device's multiprocessors. Whether a device has this capability is indicated by the concurrentKernels field of the cudaDeviceProp structure (or listed in the output of the deviceQuery CUDA Sample). Non-default streams (streams other than stream 0) are required for concurrent execution because kernel calls that use the default stream begin only after all preceding calls on the device (in any stream) have completed, and no operation on the device (in any stream) commences until they are finished.
The following example illustrates the basic technique. Because kernel1 and kernel2 are executed in different, non-default streams, a capable device can execute the kernels at the same time.
cudaStream_t stream1, stream2;
cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
kernel1<<<grid, block, 0, stream1>>>(data_1);
kernel2<<<grid, block, 0, stream2>>>(data_2);
10.6. Multiple contexts
CUDA work occurs within a process space for a particular GPU known as a context. The context encapsulates kernel launches and memory allocations for that GPU as well as supporting constructs such as the page tables. The context is explicit in the CUDA Driver API but is entirely implicit in the CUDA Runtime API, which creates and manages contexts automatically.
With the CUDA Driver API, a CUDA application process can potentially create more than one context for a given GPU. If multiple CUDA application processes access the same GPU concurrently, this almost always implies multiple contexts, since a context is tied to a particular host process unless Multi-Process Service is in use.
While multiple contexts (and their associated resources such as global memory allocations) can be allocated concurrently on a given GPU, only one of these contexts can execute work at any given moment on that GPU; contexts sharing the same GPU are time-sliced. Creating additional contexts incurs memory overhead for per-context data and time overhead for context switching. Furthermore, the need for context switching can reduce utilization when work from several contexts could otherwise execute concurrently (see also Concurrent Kernel Execution).
Therefore, it is best to avoid multiple contexts per GPU within the same CUDA application. To assist with this, the CUDA Driver API provides methods to access and manage a special context on each GPU called the primary context. These are the same contexts used implicitly by the CUDA Runtime when there is not already a current context for a thread.
// When initializing the program/library
CUdevice dev;
cuDeviceGet(&dev, 0);  // device 0 chosen here for illustration
CUcontext ctx;
cuDevicePrimaryCtxRetain(&ctx, dev);
// When the program/library launches work
cuCtxPushCurrent(ctx);
kernel<<<...>>>(...);
cuCtxPopCurrent(&ctx);
// When the program/library is finished with the context
cuDevicePrimaryCtxRelease(dev);
Note
NVIDIA-SMI can be used to configure a GPU for exclusive process mode, which limits the number of contexts per GPU to one. This context can be current to as many threads as desired within the creating process, and cuDevicePrimaryCtxRetain will fail if a non-primary context that was created with the CUDA driver API already exists on the device.
11. Instruction Optimization
Awareness of how instructions are executed often permits low-level optimizations that can be useful, especially in code that is run frequently (the so-called hot spot in a program). Best practices suggest that this optimization be performed after all higher-level optimizations have been completed.
11.1. Arithmetic Instructions
Single-precision floats provide the best performance, and their use is highly encouraged. The throughput of individual arithmetic operations is detailed in the CUDA C++ Programming Guide.
11.1.1. Division Modulo Operations
Note
Low Priority: Use shift operations to avoid expensive division and modulo calculations.
Integer division and modulo operations are particularly costly and should be avoided or replaced with bitwise operations whenever possible: if \(n\) is a power of 2, \(i/n\) is equivalent to \(i \gg \log_2(n)\) and \(i \% n\) is equivalent to \(i \& (n - 1)\).
The compiler will perform these conversions if n is literal. (For further information, refer to Performance Guidelines in the CUDA C++ Programming Guide).
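As a sketch, a helper along these lines makes the transformation explicit for a divisor that is known to be a power of 2 at run time (the function name is illustrative):
__device__ __forceinline__ int fastDivMod(int i, int n, int log2n, int *rem)
{
    // Assumes n is a power of 2 and log2n == log2(n).
    *rem = i & (n - 1);    // equivalent to i % n
    return i >> log2n;     // equivalent to i / n
}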
11.1.2. Loop Counters Signed vs. Unsigned
Note
Low Medium Priority: Use signed integers rather than unsigned integers as loop counters.
In the C language standard, unsigned integer overflow semantics are well defined, whereas signed integer overflow causes undefined results. Therefore, the compiler can optimize more aggressively with signed arithmetic than it can with unsigned arithmetic. This is of particular note with loop counters: since it is common for loop counters to have values that are always positive, it may be tempting to declare the counters as unsigned. For slightly better performance, however, they should instead be declared as signed.
For example, consider the following code:
for (i = 0; i < n; i++) {
out[i] = in[offset + stride*i];
}
Here, the sub-expression stride*i could overflow a 32-bit integer, so if i is declared as unsigned, the overflow semantics prevent the compiler from using some optimizations that might otherwise have applied, such as strength reduction. If instead i is declared as signed, where the overflow semantics are undefined, the compiler has more leeway to use these optimizations.
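A minimal sketch of this pattern (hypothetical kernel name); the signed counter leaves the compiler free to strength-reduce the stride*i address computation:
__global__ void stridedCopy(const float *in, float *out, int offset, int stride, int n)
{
    // i is signed, so the compiler may assume stride * i never wraps around
    for (int i = 0; i < n; i++) {
        out[i] = in[offset + stride * i];
    }
}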
11.1.3. Reciprocal Square Root
The reciprocal square root should always be invoked explicitly as rsqrtf() for single precision and rsqrt() for double precision. The compiler optimizes 1.0f/sqrtf(x) into rsqrtf() only when this does not violate IEEE-754 semantics.
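For example, a small device helper (hypothetical name) that computes the inverse length of a 3-vector can call rsqrtf() directly instead of writing 1.0f/sqrtf(x):
__device__ float invLength3(float x, float y, float z)
{
    // explicit single-precision reciprocal square root
    return rsqrtf(x * x + y * y + z * z);
}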
11.1.4. Other Arithmetic Instructions
Note
Low Priority: Avoid automatic conversion of doubles to floats.
The compiler must on occasion insert conversion instructions, introducing additional execution cycles. This is the case for:
Functions operating on char or short whose operands generally need to be converted to an int
Double-precision floating-point constants (defined without any type suffix) used as input to single-precision floating-point computations
The latter case can be avoided by using single-precision floating-point constants, defined with an f suffix such as 3.141592653589793f, 1.0f, 0.5f.
For single-precision code, use of the float type and the single-precision math functions are highly recommended.
It should also be noted that the CUDA math libraryâs complementary error function, erfcf(), is particularly fast with full single-precision accuracy.
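A brief sketch of the constant-suffix pitfall (hypothetical kernels): the first version silently promotes the arithmetic to double precision, while the second stays entirely in single precision:
__global__ void scaleSlow(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 0.5 + 3.141592653589793;    // double constants insert conversions
}

__global__ void scaleFast(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 0.5f + 3.141592653589793f;  // single-precision constants
}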
11.1.5. Exponentiation With Small Fractional Arguments
For some fractional exponents, exponentiation can be accelerated significantly compared to the use of pow() by using square roots, cube roots, and their inverses. For those exponentiations where the exponent is not exactly representable as a floating-point number, such as 1/3, this can also provide much more accurate results, as use of pow() magnifies the initial representational error.
The formulas in the table below are valid for x >= 0, x != -0, that is, signbit(x) == 0.
Computation    Formula
x^(1/9)        r = rcbrt(rcbrt(x))
x^(-1/9)       r = cbrt(rcbrt(x))
x^(1/6)        r = rcbrt(rsqrt(x))
x^(-1/6)       r = rcbrt(sqrt(x))
x^(1/4)        r = rsqrt(rsqrt(x))
x^(-1/4)       r = sqrt(rsqrt(x))
x^(1/3)        r = cbrt(x)
x^(-1/3)       r = rcbrt(x)
x^(1/2)        r = sqrt(x)
x^(-1/2)       r = rsqrt(x)
x^(2/3)        r = cbrt(x); r = r*r
x^(-2/3)       r = rcbrt(x); r = r*r
x^(3/4)        r = sqrt(x); r = r*sqrt(r)
x^(-3/4)       r = rsqrt(x); r = r*sqrt(r)
x^(7/6)        r = x*rcbrt(rsqrt(x))
x^(-7/6)       r = (1/x) * rcbrt(sqrt(x))
x^(5/4)        r = x*rsqrt(rsqrt(x))
x^(-5/4)       r = (1/x)*sqrt(rsqrt(x))
x^(4/3)        r = x*cbrt(x)
x^(-4/3)       r = (1/x)*rcbrt(x)
x^(3/2)        r = x*sqrt(x)
x^(-3/2)       r = (1/x)*rsqrt(x)
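As a short illustration of the table (using the single-precision variants; helper names are hypothetical), x^(3/2) and x^(-1/4) can be computed without pow():
__device__ float pow_3_2(float x)  { return x * sqrtf(x); }      // x^(3/2)
__device__ float pow_m1_4(float x) { return sqrtf(rsqrtf(x)); }  // x^(-1/4)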
11.1.6. Math Libraries
Note
Medium Priority: Use the fast math library whenever speed trumps precision.
Two types of runtime math operations are supported. They can be distinguished by their names: some have names with prepended underscores, whereas others do not (e.g., __functionName() versus functionName()). Functions following the __functionName() naming convention map directly to the hardware level. They are faster but provide somewhat lower accuracy (e.g., __sinf(x) and __expf(x)). Functions following the functionName() naming convention are slower but have higher accuracy (e.g., sinf(x) and expf(x)). The throughput of __sinf(x), __cosf(x), and __expf(x) is much greater than that of sinf(x), cosf(x), and expf(x). The latter become even more expensive (about an order of magnitude slower) if the magnitude of the argument x needs to be reduced. Moreover, in such cases, the argument-reduction code uses local memory, which can affect performance even more because of the high latency of local memory. More details are available in the CUDA C++ Programming Guide.
Note also that whenever sine and cosine of the same argument are computed, the sincos family of instructions should be used to optimize performance:
__sincosf() for single-precision fast math (see next paragraph)
sincosf() for regular single-precision
sincos() for double precision
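For instance, a 2D rotation (hypothetical device function) needs both the sine and cosine of the same angle and can obtain them with a single sincosf() call:
__device__ void rotate2D(float angle, float x, float y, float *xr, float *yr)
{
    float s, c;
    sincosf(angle, &s, &c);   // one argument reduction serves both results
    *xr = x * c - y * s;
    *yr = x * s + y * c;
}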
The -use_fast_math compiler option of nvcc coerces every functionName() call to the equivalent __functionName() call. It also disables single-precision denormal support and lowers the precision of single-precision division in general. This is an aggressive optimization that can both reduce numerical accuracy and alter special case handling. A more robust approach is to selectively introduce calls to fast intrinsic functions only if merited by performance gains and where altered behavior can be tolerated. Note this switch is effective only on single-precision floating point.
Note
Medium Priority: Prefer faster, more specialized math functions over slower, more general ones when possible.
For small integer powers (e.g., x2 or x3), explicit multiplication is almost certainly faster than the use of general exponentiation routines such as pow(). While compiler optimization improvements continually seek to narrow this gap, explicit multiplication (or the use of an equivalent purpose-built inline function or macro) can have a significant advantage. This advantage is increased when several powers of the same base are needed (e.g., where both x2 and x5 are calculated in close proximity), as this aids the compiler in its common sub-expression elimination (CSE) optimization.
For exponentiation using base 2 or 10, use the functions exp2() or exp2f() and exp10() or exp10f() rather than the functions pow() or powf(). Both pow() and powf() are heavy-weight functions in terms of register pressure and instruction count due to the numerous special cases arising in general exponentiation and the difficulty of achieving good accuracy across the entire ranges of the base and the exponent. The functions exp2(), exp2f(), exp10(), and exp10f(), on the other hand, are similar to exp() and expf() in terms of performance, and can be as much as ten times faster than their pow()/powf() equivalents.
For exponentiation with an exponent of 1/3, use the cbrt() or cbrtf() function rather than the generic exponentiation functions pow() or powf(), as the former are significantly faster than the latter. Likewise, for exponentiation with an exponent of -1/3, use rcbrt() or rcbrtf().
Replace sin(π*<expr>) with sinpi(<expr>), cos(π*<expr>) with cospi(<expr>), and sincos(π*<expr>) with sincospi(<expr>). This is advantageous with regard to both accuracy and performance. As a particular example, to evaluate the sine function in degrees instead of radians, use sinpi(x/180.0). Similarly, the single-precision functions sinpif(), cospif(), and sincospif() should replace calls to sinf(), cosf(), and sincosf() when the function argument is of the form π*<expr>. (The performance advantage sinpi() has over sin() is due to simplified argument reduction; the accuracy advantage is because sinpi() multiplies by π only implicitly, effectively using an infinitely precise mathematical π rather than a single- or double-precision approximation thereof.)
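A minimal sketch of the degrees example above (the helper name is hypothetical):
__device__ float sinDegrees(float deg)
{
    // equivalent to sinf(pi/180 * deg), but with simpler argument reduction
    // and an effectively exact value of pi
    return sinpif(deg / 180.0f);
}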
11.1.7. Precision-related Compiler Flags
By default, the nvcc compiler generates IEEE-compliant code, but it also provides options to generate code that is somewhat less accurate but faster:
-ftz=true (denormalized numbers are flushed to zero)
-prec-div=false (less precise division)
-prec-sqrt=false (less precise square root)
Another, more aggressive, option is -use_fast_math, which coerces every functionName() call to the equivalent __functionName() call. This makes the code run faster at the cost of diminished precision and accuracy. See Math Libraries.
11.2. Memory Instructions
Note
High Priority: Minimize the use of global memory. Prefer shared memory access where possible.
Memory instructions include any instruction that reads from or writes to shared, local, or global memory. When accessing uncached local or global memory, there are hundreds of clock cycles of memory latency.
As an example, the assignment operator in the following sample code has a high throughput, but, crucially, there is a latency of hundreds of clock cycles to read data from global memory:
__shared__ float shared[32];
__device__ float device[32];
shared[threadIdx.x] = device[threadIdx.x];
Much of this global memory latency can be hidden by the thread scheduler if there are sufficient independent arithmetic instructions that can be issued while waiting for the global memory access to complete. However, it is best to avoid accessing global memory whenever possible.
12. Control Flow
12.1. Branching and Divergence
Note
High Priority: Avoid different execution paths within the same warp.
Flow control instructions (if, switch, do, for, while) can significantly affect the instruction throughput by causing threads of the same warp to diverge; that is, to follow different execution paths. If this happens, the different execution paths must be executed separately; this increases the total number of instructions executed for this warp.
To obtain best performance in cases where the control flow depends on the thread ID, the controlling condition should be written so as to minimize the number of divergent warps.
This is possible because the distribution of the warps across the block is deterministic as mentioned in SIMT Architecture of the CUDA C++ Programming Guide. A trivial example is when the controlling condition depends only on (threadIdx / WSIZE) where WSIZE is the warp size.
In this case, no warp diverges because the controlling condition is perfectly aligned with the warps.
For branches including just a few instructions, warp divergence generally results in marginal performance losses. For example, the compiler may use predication to avoid an actual branch. Instead, all instructions are scheduled, but a per-thread condition code or predicate controls which threads execute the instructions. Threads with a false predicate do not write results, and also do not evaluate addresses or read operands.
Starting with the Volta architecture, Independent Thread Scheduling allows a warp to remain diverged outside of the data-dependent conditional block. An explicit __syncwarp() can be used to guarantee that the warp has reconverged for subsequent instructions.
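A minimal sketch of both cases (hypothetical kernel): the first condition is aligned with warp boundaries and causes no divergence, while the second splits every warp into two execution paths:
__global__ void branchExamples(float *data)
{
    int tid = threadIdx.x;
    if ((tid / warpSize) == 0) {   // whole warps take the same path: no divergence
        data[tid] *= 2.0f;
    }
    if ((tid % 2) == 0) {          // odd/even split within each warp: divergence
        data[tid] += 1.0f;
    }
}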
12.2. Branch Predication
Note
Low Priority: Make it easy for the compiler to use branch predication in lieu of loops or control statements.
Sometimes, the compiler may unroll loops or optimize out if or switch statements by using branch predication instead. In these cases, no warp can ever diverge. The programmer can also control loop unrolling using
#pragma unroll
For more information on this pragma, refer to the CUDA C++ Programming Guide.
When using branch predication, none of the instructions whose execution depends on the controlling condition is skipped. Instead, each such instruction is associated with a per-thread condition code or predicate that is set to true or false according to the controlling condition. Although each of these instructions is scheduled for execution, only the instructions with a true predicate are actually executed. Instructions with a false predicate do not write results, and they also do not evaluate addresses or read operands.
The compiler replaces a branch instruction with predicated instructions only if the number of instructions controlled by the branch condition is less than or equal to a certain threshold.
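For example, a short fixed-trip-count loop (hypothetical kernel) can be annotated with the pragma above so the compiler may fully unroll it and schedule the body as straight-line, possibly predicated, code:
__global__ void unrolledSum(const float *in, float *out)
{
    float acc = 0.0f;
    #pragma unroll
    for (int k = 0; k < 8; k++) {    // trip count known at compile time
        acc += in[threadIdx.x * 8 + k];
    }
    out[threadIdx.x] = acc;
}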
13. Deploying CUDA Applications
Having completed the GPU acceleration of one or more components of the application, it is possible to compare the outcome with the original expectation. Recall that the initial assess step allowed the developer to determine an upper bound for the potential speedup attainable by accelerating given hotspots.
Before tackling other hotspots to improve the total speedup, the developer should consider taking the partially parallelized implementation and carry it through to production. This is important for a number of reasons; for example, it allows the user to profit from their investment as early as possible (the speedup may be partial but is still valuable), and it minimizes risk for the developer and the user by providing an evolutionary rather than revolutionary set of changes to the application.
14. Understanding the Programming Environment
With each generation of NVIDIA processors, new features are added to the GPU that CUDA can leverage. Consequently, it's important to understand the characteristics of the architecture.
Programmers should be aware of two version numbers. The first is the compute capability, and the second is the version number of the CUDA Runtime and CUDA Driver APIs.
14.1. CUDA Compute Capability
The compute capability describes the features of the hardware and reflects the set of instructions supported by the device as well as other specifications, such as the maximum number of threads per block and the number of registers per multiprocessor. Higher compute capability versions are supersets of lower (that is, earlier) versions, so they are backward compatible.
The compute capability of the GPU in the device can be queried programmatically as illustrated in the deviceQuery CUDA Sample. The output for that program is shown in Figure 16. This information is obtained by calling cudaGetDeviceProperties() and accessing the information in the structure it returns.
Figure 16 Sample CUDA configuration data reported by deviceQuery
The major and minor revision numbers of the compute capability are shown on the seventh line of Figure 16. Device 0 of this system has compute capability 7.0.
More details about the compute capabilities of various GPUs are in CUDA-Enabled GPUs and Compute Capabilities of the CUDA C++ Programming Guide. In particular, developers should note the number of multiprocessors on the device, the number of registers and the amount of memory available, and any special capabilities of the device.
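A minimal sketch of querying the compute capability at runtime via cudaGetDeviceProperties(), as deviceQuery does (device 0 is assumed):
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) == cudaSuccess) {
        printf("Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("Multiprocessors:    %d\n", prop.multiProcessorCount);
        printf("Global memory:      %zu bytes\n", prop.totalGlobalMem);
    }
    return 0;
}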
14.2. Additional Hardware Data
Certain hardware features are not described by the compute capability. For example, the ability to overlap kernel execution with asynchronous data transfers between the host and the device is available on most but not all GPUs irrespective of the compute capability. In such cases, call cudaGetDeviceProperties() to determine whether the device is capable of a certain feature. For example, the asyncEngineCount field of the device property structure indicates whether overlapping kernel execution and data transfers is possible (and, if so, how many concurrent transfers are possible); likewise, the canMapHostMemory field indicates whether zero-copy data transfers can be performed.
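A short sketch (the helper name is hypothetical) of checking these fields before relying on copy/compute overlap or zero-copy transfers:
#include <cuda_runtime.h>

void queryExtraCapabilities(int device, bool *overlapPossible, bool *zeroCopyPossible)
{
    cudaDeviceProp prop;
    *overlapPossible = *zeroCopyPossible = false;
    if (cudaGetDeviceProperties(&prop, device) != cudaSuccess)
        return;
    *overlapPossible  = (prop.asyncEngineCount > 0);   // copy engines can run alongside kernels
    *zeroCopyPossible = (prop.canMapHostMemory != 0);  // mapped (zero-copy) host memory supported
}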
14.3. Which Compute Capability Target
To target specific versions of NVIDIA hardware and CUDA software, use the -arch, -code, and -gencode options of nvcc. Code that uses the warp shuffle operation, for example, must be compiled with -arch=sm_30 (or higher compute capability).
See Building for Maximum Compatibility for further discussion of the flags used for building code for multiple generations of CUDA-capable devices simultaneously.
14.4. CUDA Runtime
The host runtime component of the CUDA software environment can be used only by host functions. It provides functions to handle the following:
Device management
Context management
Memory management
Code module management
Execution control
Texture reference management
Interoperability with OpenGL and Direct3D
As compared to the lower-level CUDA Driver API, the CUDA Runtime greatly eases device management by providing implicit initialization, context management, and device code module management. The C++ host code generated by nvcc utilizes the CUDA Runtime, so applications that link to this code will depend on the CUDA Runtime; similarly, any code that uses the cuBLAS, cuFFT, and other CUDA Toolkit libraries will also depend on the CUDA Runtime, which is used internally by these libraries.
The functions that make up the CUDA Runtime API are explained in the CUDA Toolkit Reference Manual.
The CUDA Runtime handles kernel loading and setting up kernel parameters and launch configuration before the kernel is launched. The implicit driver version checking, code initialization, CUDA context management, CUDA module management (cubin to function mapping), kernel configuration, and parameter passing are all performed by the CUDA Runtime.
It comprises two principal parts:
A C-style function interface (cuda_runtime_api.h).
C++-style convenience wrappers (cuda_runtime.h) built on top of the C-style functions.
For more information on the Runtime API, refer to CUDA Runtime of the CUDA C++ Programming Guide.
15. CUDA Compatibility Developer's Guide
CUDA Toolkit is released on a monthly release cadence to deliver new features, performance improvements, and critical bug fixes. CUDA compatibility allows users to update to the latest CUDA Toolkit software (including the compiler, libraries, and tools) without requiring an update to the entire driver stack.
The CUDA software environment consists of three parts:
CUDA Toolkit (libraries, CUDA runtime and developer tools) - SDK for developers to build CUDA applications.
CUDA driver - User-mode driver component used to run CUDA applications (e.g. libcuda.so on Linux systems).
NVIDIA GPU device driver - Kernel-mode driver component for NVIDIA GPUs.
On Linux systems, the CUDA driver and kernel mode components are delivered together in the NVIDIA display driver package. This is shown in Figure 17.
Figure 17 Components of CUDA
The CUDA compiler (nvcc) handles CUDA and non-CUDA code by splitting and steering compilation, and together with the CUDA Runtime it forms the CUDA compiler toolchain. The CUDA Runtime API provides developers with a high-level C++ interface for simplified management of devices, kernel executions, and so on, while the CUDA Driver API provides a low-level programming interface for applications to target NVIDIA hardware.
Built on top of these technologies are CUDA libraries, some of which are included in the CUDA Toolkit, while others such as cuDNN may be released independently of the CUDA Toolkit.
15.1. CUDA Toolkit Versioning
Starting with CUDA 11, the toolkit versions are based on an industry-standard semantic versioning scheme: X.Y.Z, where:
X stands for the major version - APIs have changed and binary compatibility is broken.
Y stands for the minor version - introduction of new APIs, deprecation of old APIs, and source compatibility might be broken, but binary compatibility is maintained.
Z stands for the release/patch version - new updates and patches will increment this.
Each component in the toolkit is recommended to be semantically versioned. From CUDA 11.3 NVRTC is also semantically versioned. We will note some of them later on in the document. The versions of the components in the toolkit are available in this table.
Compatibility of the CUDA platform is thus intended to address a few scenarios:
NVIDIA driver upgrades to systems with GPUs running in production for enterprises or datacenters can be complex and may need advance planning. Delays in rolling out new NVIDIA drivers could mean that users of such systems may not have access to new features available in CUDA releases. Not requiring driver updates for new CUDA releases can mean that new versions of the software can be made available faster to users.
Many software libraries and applications built on top of CUDA (e.g. math libraries or deep learning frameworks) do not have a direct dependency on the CUDA runtime, compiler or driver. In such cases, users or developers can still benefit from not having to upgrade the entire CUDA Toolkit or driver to use these libraries or frameworks.
Upgrading dependencies is error-prone and time consuming, and in some corner cases, can even change the semantics of a program. Constantly recompiling with the latest CUDA Toolkit means forcing upgrades on the end-customers of an application product. Package managers facilitate this process but unexpected issues can still arise and if a bug is found, it necessitates a repeat of the above upgrade process.
CUDA supports several compatibility choices:
First introduced in CUDA 10, the CUDA Forward Compatible Upgrade is designed to allow users to get access to new CUDA features and run applications built with new CUDA releases on systems with older installations of the NVIDIA datacenter driver.
First introduced in CUDA 11.1, CUDA Enhanced Compatibility provides two benefits:
By leveraging semantic versioning across components in the CUDA Toolkit, an application can be built for one CUDA minor release (for example 11.1) and work across all future minor releases within the major family (i.e. 11.x).
The CUDA runtime has relaxed the minimum driver version check and thus no longer requires a driver upgrade when moving to a new minor release.
The CUDA driver ensures backward Binary Compatibility is maintained for compiled CUDA applications. Applications compiled with CUDA toolkit versions as old as 3.2 will run on newer drivers.
15.2. Source Compatibility
We define source compatibility as a set of guarantees provided by the library, where a well-formed application built against a specific version of the library (using the SDK) will continue to build and run without errors when a newer version of the SDK is installed.
Both the CUDA driver and the CUDA runtime are not source compatible across the different SDK releases. APIs can be deprecated and removed. Therefore, an application that compiled successfully on an older version of the toolkit may require changes in order to compile against a newer version of the toolkit.
Developers are notified through deprecation and documentation mechanisms of any current or upcoming changes. This does not mean that application binaries compiled using an older toolkit will not be supported anymore. Application binaries rely on CUDA Driver API interface and even though the CUDA Driver API itself may also have changed across toolkit versions, CUDA guarantees Binary Compatibility of the CUDA Driver API interface.
15.3. Binary Compatibility
We define binary compatibility as a set of guarantees provided by the library, where an application targeting the said library will continue to work when dynamically linked against a different version of the library.
The CUDA Driver API has a versioned C-style ABI, which guarantees that applications that were running against an older driver (for example CUDA 3.2) will still run and function correctly against a modern driver (for example one shipped with CUDA 11.0). This means that even though an application source might need to be changed if it has to be recompiled against a newer CUDA Toolkit in order to use the newer features, replacing the driver components installed in a system with a newer version will always support existing applications and its functions.
The CUDA Driver API thus is binary-compatible (the OS loader can pick up a newer version and the application continues to work) but not source-compatible (rebuilding your application against a newer SDK might require source changes).
Figure 18 CUDA Toolkit and Minimum Driver Versions
Before we proceed further on this topic, it's important for developers to understand the concept of Minimum Driver Version and how that may affect them.
Each version of the CUDA Toolkit (and runtime) requires a minimum version of the NVIDIA driver. Applications compiled against a CUDA Toolkit version will only run on systems with the specified minimum driver version for that toolkit version. Prior to CUDA 11.0, the minimum driver version for a toolkit was the same as the driver shipped with that version of the CUDA Toolkit.
So, when an application is built with CUDA 11.0, it can only run on a system with an R450 or later driver. If such an application is run on a system with the R418 driver installed, CUDA initialization will return an error as can be seen in the example below.
In this example, the deviceQuery sample is compiled with CUDA 11.1 and is run on a system with R418. In this scenario, CUDA initialization returns an error due to the minimum driver requirement.
ubuntu@:~/samples/1_Utilities/deviceQuery
$ make
/usr/local/cuda-11.1/bin/nvcc -ccbin g++ -I../../common/inc -m64 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -gencode arch=compute_86,code=compute_86 -o deviceQuery.o -c deviceQuery.cpp
/usr/local/cuda-11.1/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -gencode arch=compute_86,code=compute_86 -o deviceQuery deviceQuery.o
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.165.02 Driver Version: 418.165.02 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 42C P0 28W / 70W | 0MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
$ samples/bin/x86_64/linux/release/deviceQuery
samples/bin/x86_64/linux/release/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 3
-> initialization error
Result = FAIL
Refer to the CUDA Toolkit Release Notes for details for the minimum driver version and the version of the driver shipped with the toolkit.
15.3.1. CUDA Binary (cubin) Compatibility
A slightly related but important topic is one of application binary compatibility across GPU architectures in CUDA.
CUDA C++ provides a simple path for users familiar with the C++ programming language to easily write programs for execution by the device. Kernels can be written using the CUDA instruction set architecture, called PTX, which is described in the PTX reference manual. It is however usually more effective to use a high-level programming language such as C++. In both cases, kernels must be compiled into binary code by nvcc (called cubins) to execute on the device.
The cubins are architecture-specific. Binary compatibility for cubins is guaranteed from one compute capability minor revision to the next one, but not from one compute capability minor revision to the previous one or across major compute capability revisions. In other words, a cubin object generated for compute capability X.y will only execute on devices of compute capability X.z where z ≥ y.
To execute code on devices of specific compute capability, an application must load binary or PTX code that is compatible with this compute capability. For portability, that is, to be able to execute code on future GPU architectures with higher compute capability (for which no binary code can be generated yet), an application must load PTX code that will be just-in-time compiled by the NVIDIA driver for these future devices.
More information on cubins, PTX and application compatibility can be found in the CUDA C++ Programming Guide.
15.4. CUDA Compatibility Across Minor Releases
By leveraging the semantic versioning, starting with CUDA 11, components in the CUDA Toolkit will remain binary compatible across the minor versions of the toolkit. In order to maintain binary compatibility across minor versions, the CUDA runtime no longer bumps up the minimum driver version required for every minor release - this only happens when a major release is shipped.
One of the main reasons a new toolchain requires a new minimum driver is to handle the JIT compilation of PTX code and the JIT linking of binary code.
In this section, we will review the usage patterns that may require new user workflows when taking advantage of the compatibility features of the CUDA platform.
15.4.1. Existing CUDA Applications within Minor Versions of CUDA
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 39C P8 9W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
When our CUDA 11.1 application (i.e. cudart 11.1 is statically linked) is run on the system, we see that it runs successfully even when the driver reports an 11.0 version - that is, without requiring the driver or other toolkit components to be updated on the system.
$ samples/bin/x86_64/linux/release/deviceQuery
samples/bin/x86_64/linux/release/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "Tesla T4"
CUDA Driver Version / Runtime Version 11.0 / 11.1
CUDA Capability Major/Minor version number: 7.5
...<snip>...
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.0, CUDA Runtime Version = 11.1, NumDevs = 1
Result = PASS
By using new CUDA versions, users can benefit from new CUDA programming model APIs, compiler optimizations and math library features.
The following sections discuss some caveats and considerations.
15.4.1.1. Handling New CUDA Features and Driver APIs
A subset of CUDA APIs don't need a new driver and they can all be used without any driver dependencies. For example, cuMemMap APIs or any of the APIs introduced prior to CUDA 11.0, such as cudaDeviceSynchronize, do not require a driver upgrade. To use other CUDA APIs introduced in a minor release (that require a new driver), one would have to implement fallbacks or fail gracefully. This situation is not different from what is available today where developers use macros to compile out features based on CUDA versions. Users should refer to the CUDA headers and documentation for new CUDA APIs introduced in a release.
When working with a feature exposed in a minor version of the toolkit, the feature might not be available at runtime if the application is running against an older CUDA driver. Users wishing to take advantage of such a feature should query its availability with a dynamic check in the code:
static int hostRegisterFeatureSupported = 0;
static int hostRegisterIsDeviceAddress = 0;
static cudaError_t cuFooFunction(int *ptr, size_t size)
{
    int *dptr = NULL;
    if (hostRegisterFeatureSupported) {
        cudaHostRegister(ptr, size, cudaHostRegisterDefault);
        if (hostRegisterIsDeviceAddress) {
            dptr = ptr;   // the registered host pointer can be used directly on the device
        }
        else {
            cudaHostGetDevicePointer((void **)&dptr, ptr, 0);
        }
    }
    else {
        // Fallback for drivers without cudaHostRegister support:
        // cudaMalloc(&dptr, size);
        // cudaMemcpy(dptr, ptr, size, cudaMemcpyHostToDevice);
    }
    gemm<<<1,1>>>(dptr);   // gemm is an application kernel defined elsewhere
    return cudaDeviceSynchronize();
}
int main()
{
    // rest of code here
    cudaDeviceGetAttribute(
        &hostRegisterFeatureSupported,
        cudaDevAttrHostRegisterSupported,
        0);
    cudaDeviceGetAttribute(
        &hostRegisterIsDeviceAddress,
        cudaDevAttrCanUseHostPointerForRegisteredMem,
        0);
    cuFooFunction(/* malloced pointer */, /* size */);
}
Alternatively, the application's interface might not work at all without a new CUDA driver, and then it's best to return an error right away:
#define MIN_VERSION 11010
cudaError_t foo()
{
    int version = 0;
    cudaDriverGetVersion(&version);
    if (version < MIN_VERSION) {
        return cudaErrorInsufficientDriver;
    }
    // proceed as normal
    return cudaSuccess;
}
A new error code is added to indicate that the functionality is missing from the driver you are running against: cudaErrorCallRequiresNewerDriver.
15.4.1.2. Using PTX
PTX defines a virtual machine and ISA for general purpose parallel thread execution. PTX programs are translated at load time to the target hardware instruction set via the JIT Compiler which is part of the CUDA driver. As PTX is compiled by the CUDA driver, new toolchains will generate PTX that is not compatible with the older CUDA driver. This is not a problem when PTX is used for future device compatibility (the most common case), but can lead to issues when used for runtime compilation.
For codes continuing to make use of PTX, in order to support compiling on an older driver, your code must be first transformed into device code via the static ptxjitcompiler library or NVRTC with the option of generating code for a specific architecture (e.g. sm_80) rather than a virtual architecture (e.g. compute_80). For this workflow, a new nvptxcompiler_static library is shipped with the CUDA Toolkit.
We can see this usage in the following example:
char* compilePTXToNVElf()
{
nvPTXCompilerHandle compiler = NULL;
nvPTXCompileResult status;
size_t elfSize, infoSize, errorSize;
char *elf, *infoLog, *errorLog;
int minorVer, majorVer;
const char* compile_options[] = { "--gpu-name=sm_80",
"--device-debug"
};
nvPTXCompilerGetVersion(&majorVer, &minorVer);
nvPTXCompilerCreate(&compiler, (size_t)strlen(ptxCode), ptxCode);
status = nvPTXCompilerCompile(compiler, 2, compile_options);
if (status != NVPTXCOMPILE_SUCCESS) {
nvPTXCompilerGetErrorLogSize(compiler, (void*)&errorSize);
if (errorSize != 0) {
errorLog = (char*)malloc(errorSize+1);
nvPTXCompilerGetErrorLog(compiler, (void*)errorLog);
printf("Error log: %s\n", errorLog);
free(errorLog);
}
exit(1);
}
nvPTXCompilerGetCompiledProgramSize(compiler, &elfSize);
elf = (char*)malloc(elfSize);
nvPTXCompilerGetCompiledProgram(compiler, (void*)elf);
nvPTXCompilerGetInfoLogSize(compiler, (void*)&infoSize);
if (infoSize != 0) {
infoLog = (char*)malloc(infoSize+1);
nvPTXCompilerGetInfoLog(compiler, (void*)infoLog);
printf("Info log: %s\n", infoLog);
free(infoLog);
}
nvPTXCompilerDestroy(&compiler);
return elf;
}
15.4.1.3. Dynamic Code Generation
NVRTC is a runtime compilation library for CUDA C++. It accepts CUDA C++ source code in character string form and creates handles that can be used to obtain the PTX. The PTX string generated by NVRTC can be loaded by cuModuleLoadData and cuModuleLoadDataEx.
Dealing with relocatable objects is not yet supported, therefore the cuLink* set of APIs in the CUDA driver will not work with enhanced compatibility. An upgraded driver matching the CUDA runtime version is currently required for those APIs.
As mentioned in the PTX section, the compilation of PTX to device code lives along with the CUDA driver, hence the generated PTX might be newer than what is supported by the driver on the deployment system. When using NVRTC, it is recommended that the resulting PTX code is first transformed to the final device code via the steps outlined by the PTX user workflow. This ensures your code is compatible. Alternatively, NVRTC can generate cubins directly starting with CUDA 11.1. Applications using the new API can load the final device code directly using driver APIs cuModuleLoadData and cuModuleLoadDataEx.
NVRTC used to support only virtual architectures through the option -arch, since it was only emitting PTX. It will now support actual architectures as well to emit SASS. The interface is augmented to retrieve either the PTX or cubin if an actual architecture is specified.
The example below shows how an existing example can be adapted to use the new features, guarded by the USE_CUBIN macro in this case:
#include <nvrtc.h>
#include <cuda.h>
#include <iostream>
void NVRTC_SAFE_CALL(nvrtcResult result) {
if (result != NVRTC_SUCCESS) {
std::cerr << "\nnvrtc error: " << nvrtcGetErrorString(result) << '\n';
std::exit(1);
}
}
void CUDA_SAFE_CALL(CUresult result) {
if (result != CUDA_SUCCESS) {
const char *msg;
cuGetErrorName(result, &msg);
std::cerr << "\ncuda error: " << msg << '\n';
std::exit(1);
}
}
const char *hello = " \n\
extern \"C\" __global__ void hello() { \n\
printf(\"hello world\\n\"); \n\
} \n";
int main()
{
nvrtcProgram prog;
NVRTC_SAFE_CALL(nvrtcCreateProgram(&prog, hello, "hello.cu", 0, NULL, NULL));
#ifdef USE_CUBIN
const char *opts[] = {"-arch=sm_70"};
#else
const char *opts[] = {"-arch=compute_70"};
#endif
nvrtcResult compileResult = nvrtcCompileProgram(prog, 1, opts);
size_t logSize;
NVRTC_SAFE_CALL(nvrtcGetProgramLogSize(prog, &logSize));
char *log = new char[logSize];
NVRTC_SAFE_CALL(nvrtcGetProgramLog(prog, log));
std::cout << log << '\n';
delete[] log;
if (compileResult != NVRTC_SUCCESS)
exit(1);
size_t codeSize;
#ifdef USE_CUBIN
NVRTC_SAFE_CALL(nvrtcGetCUBINSize(prog, &codeSize));
char *code = new char[codeSize];
NVRTC_SAFE_CALL(nvrtcGetCUBIN(prog, code));
#else
NVRTC_SAFE_CALL(nvrtcGetPTXSize(prog, &codeSize));
char *code = new char[codeSize];
NVRTC_SAFE_CALL(nvrtcGetPTX(prog, code));
#endif
NVRTC_SAFE_CALL(nvrtcDestroyProgram(&prog));
CUdevice cuDevice;
CUcontext context;
CUmodule module;
CUfunction kernel;
CUDA_SAFE_CALL(cuInit(0));
CUDA_SAFE_CALL(cuDeviceGet(&cuDevice, 0));
CUDA_SAFE_CALL(cuCtxCreate(&context, 0, cuDevice));
CUDA_SAFE_CALL(cuModuleLoadDataEx(&module, code, 0, 0, 0));
CUDA_SAFE_CALL(cuModuleGetFunction(&kernel, module, "hello"));
CUDA_SAFE_CALL(cuLaunchKernel(kernel, 1, 1, 1, 1, 1, 1, 0, NULL, NULL, 0));
CUDA_SAFE_CALL(cuCtxSynchronize());
CUDA_SAFE_CALL(cuModuleUnload(module));
CUDA_SAFE_CALL(cuCtxDestroy(context));
delete[] code;
}
15.4.1.4. Recommendations for building a minor-version compatible library
We recommend that the CUDA runtime be statically linked to minimize dependencies. Verify that your library doesn't leak dependencies, breakages, namespaces, etc. outside your established ABI contract.
Follow semantic versioning for your library's soname. Having a semantically versioned ABI means the interfaces need to be maintained and versioned. The library should follow semantic rules and increment the version number when a change is made that affects this ABI contract. Missing dependencies are also a binary compatibility break, hence you should provide fallbacks or guards for functionality that depends on those interfaces. Increment major versions when there are ABI breaking changes such as API deprecation and modifications. New APIs can be added in minor versions.
Conditionally use features to remain compatible against older drivers. If no new features are used (or if they are used conditionally with fallbacks provided) youâll be able to remain compatible.
Don't expose ABI structures that can change. A pointer to a structure with a size embedded is a better solution.
When linking with dynamic libraries from the toolkit, the library must be equal to or newer than what is needed by any one of the components involved in the linking of your application. For example, if you link against the CUDA 11.1 dynamic runtime, and use functionality from 11.1, as well as a separate shared library that was linked against the CUDA 11.2 dynamic runtime that requires 11.2 functionality, the final link step must include a CUDA 11.2 or newer dynamic runtime.
15.4.1.5. Recommendations for taking advantage of minor version compatibility in your application
Certain functionality might not be available, so you should query where applicable. This is common for building applications that are GPU architecture, platform and compiler agnostic. However, we now add "the underlying driver" to that mix.
As with the previous section on library building recommendations, if using the CUDA runtime, we recommend linking to the CUDA runtime statically when building your application. When using the driver APIs directly, we recommend using the new driver entry point access API (cuGetProcAddress) documented here: CUDA Driver API :: CUDA Toolkit Documentation.
When using a shared or static library, follow the release notes of said library to determine if the library supports minor version compatibility.
16. Preparing for Deployment
16.1. Testing for CUDA Availability
When deploying a CUDA application, it is often desirable to ensure that the application will continue to function properly even if the target machine does not have a CUDA-capable GPU and/or a sufficient version of the NVIDIA Driver installed. (Developers targeting a single machine with known configuration may choose to skip this section.)
Detecting a CUDA-Capable GPU
When an application will be deployed to target machines of arbitrary/unknown configuration, the application should explicitly test for the existence of a CUDA-capable GPU in order to take appropriate action when no such device is available. The cudaGetDeviceCount() function can be used to query for the number of available devices. Like all CUDA Runtime API functions, this function will fail gracefully and return cudaErrorNoDevice to the application if there is no CUDA-capable GPU or cudaErrorInsufficientDriver if there is not an appropriate version of the NVIDIA Driver installed. If cudaGetDeviceCount() reports an error, the application should fall back to an alternative code path.
A system with multiple GPUs may contain GPUs of different hardware versions and capabilities. When using multiple GPUs from the same application, it is recommended to use GPUs of the same type, rather than mixing hardware generations. The cudaChooseDevice() function can be used to select the device that most closely matches a desired set of features.
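A minimal sketch of such an availability check (the function name is hypothetical):
#include <cstdio>
#include <cuda_runtime.h>

bool cudaIsAvailable()
{
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err == cudaErrorNoDevice) {
        printf("No CUDA-capable GPU found; using CPU fallback.\n");
        return false;
    }
    if (err == cudaErrorInsufficientDriver) {
        printf("NVIDIA driver is too old for this CUDA runtime; using CPU fallback.\n");
        return false;
    }
    return err == cudaSuccess && deviceCount > 0;
}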
Detecting Hardware and Software Configuration
When an application depends on the availability of certain hardware or software capabilities to enable certain functionality, the CUDA API can be queried for details about the configuration of the available device and for the installed software versions.
The cudaGetDeviceProperties() function reports various features of the available devices, including the CUDA Compute Capability of the device (see also the Compute Capabilities section of the CUDA C++ Programming Guide). See Version Management for details on how to query the available CUDA software API versions.
16.2. Error Handling
All CUDA Runtime API calls return an error code of type cudaError_t; the return value will be equal to cudaSuccess if no errors have occurred. (The exceptions to this are kernel launches, which return void, and cudaGetErrorString(), which returns a character string describing the cudaError_t code that was passed into it.) The CUDA Toolkit libraries (cuBLAS, cuFFT, etc.) likewise return their own sets of error codes.
Since some CUDA API calls and all kernel launches are asynchronous with respect to the host code, errors may be reported to the host asynchronously as well; often this occurs the next time the host and device synchronize with each other, such as during a call to cudaMemcpy() or to cudaDeviceSynchronize().
Always check the error return values on all CUDA API functions, even for functions that are not expected to fail, as this will allow the application to detect and recover from errors as soon as possible should they occur. To check for errors occurring during kernel launches using the <<<...>>> syntax, which does not return any error code, the return code of cudaGetLastError() should be checked immediately after the kernel launch. Applications that do not check for CUDA API errors could at times run to completion without having noticed that the data calculated by the GPU is incomplete, invalid, or uninitialized.
Note
The CUDA Toolkit Samples provide several helper functions for error checking with the various CUDA APIs; these helper functions are located in the samples/common/inc/helper_cuda.h file in the CUDA Toolkit.
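A minimal sketch of this error-checking pattern (the macro name is hypothetical), checking both a kernel launch and the subsequent synchronization:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical helper: report and abort on any CUDA Runtime error.
#define CHECK_CUDA(call)                                                     \
    do {                                                                     \
        cudaError_t err_ = (call);                                           \
        if (err_ != cudaSuccess) {                                           \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                      \
                    cudaGetErrorString(err_), __FILE__, __LINE__);           \
            exit(EXIT_FAILURE);                                              \
        }                                                                    \
    } while (0)

__global__ void dummyKernel() {}

int main()
{
    dummyKernel<<<1, 1>>>();
    CHECK_CUDA(cudaGetLastError());        // catches launch-configuration errors
    CHECK_CUDA(cudaDeviceSynchronize());   // catches asynchronous execution errors
    return 0;
}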
16.3. Building for Maximum Compatibility
Each generation of CUDA-capable device has an associated compute capability version that indicates the feature set supported by the device (see CUDA Compute Capability). One or more compute capability versions can be specified to the nvcc compiler while building a file; compiling for the native compute capability for the target GPU(s) of the application is important to ensure that application kernels achieve the best possible performance and are able to use the features that are available on a given generation of GPU.
When an application is built for multiple compute capabilities simultaneously (using several instances of the -gencode flag to nvcc), the binaries for the specified compute capabilities are combined into the executable, and the CUDA Driver selects the most appropriate binary at runtime according to the compute capability of the present device. If an appropriate native binary (cubin) is not available, but the intermediate PTX code (which targets an abstract virtual instruction set and is used for forward-compatibility) is available, then the kernel will be compiled Just In Time (JIT) (see Compiler JIT Cache Management Tools) from the PTX to the native cubin for the device. If the PTX is also not available, then the kernel launch will fail.
Windows
nvcc.exe -ccbin "C:\vs2008\VC\bin"
-Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT"
-gencode=arch=compute_30,code=sm_30
-gencode=arch=compute_35,code=sm_35
-gencode=arch=compute_50,code=sm_50
-gencode=arch=compute_60,code=sm_60
-gencode=arch=compute_70,code=sm_70
-gencode=arch=compute_75,code=sm_75
-gencode=arch=compute_75,code=compute_75
--compile -o "Release\mykernel.cu.obj" "mykernel.cu"
Mac/Linux
/usr/local/cuda/bin/nvcc
-gencode=arch=compute_30,code=sm_30
-gencode=arch=compute_35,code=sm_35
-gencode=arch=compute_50,code=sm_50
-gencode=arch=compute_60,code=sm_60
-gencode=arch=compute_70,code=sm_70
-gencode=arch=compute_75,code=sm_75
-gencode=arch=compute_75,code=compute_75
-O2 -o mykernel.o -c mykernel.cu
Alternatively, the nvcc command-line option -arch=sm_XX can be used as a shorthand equivalent to the following more explicit -gencode= command-line options described above:
-gencode=arch=compute_XX,code=sm_XX
-gencode=arch=compute_XX,code=compute_XX
However, while the -arch=sm_XX command-line option does result in inclusion of a PTX back-end target by default (due to the code=compute_XX target it implies), it can only specify a single target cubin architecture at a time, and it is not possible to use multiple -arch= options on the same nvcc command line, which is why the examples above use -gencode= explicitly.
16.4. Distributing the CUDA Runtime and Libraries
CUDA applications are built against the CUDA Runtime library, which handles device, memory, and kernel management. Unlike the CUDA Driver, the CUDA Runtime guarantees neither forward nor backward binary compatibility across versions. It is therefore best to redistribute the CUDA Runtime library with the application when using dynamic linking or else to statically link against the CUDA Runtime. This will ensure that the executable will be able to run even if the user does not have the same CUDA Toolkit installed that the application was built against.
Note
When statically linking to the CUDA Runtime, multiple versions of the runtime can peaceably coexist in the same application process simultaneously; for example, if an application uses one version of the CUDA Runtime, and a plugin to that application is statically linked to a different version, that is perfectly acceptable, as long as the installed NVIDIA Driver is sufficient for both.
Statically-linked CUDA Runtime
The easiest option is to statically link against the CUDA Runtime. This is the default if using nvcc to link in CUDA 5.5 and later. Static linking makes the executable slightly larger, but it ensures that the correct version of runtime library functions are included in the application binary without requiring separate redistribution of the CUDA Runtime library.
Dynamically-linked CUDA Runtime
If static linking against the CUDA Runtime is impractical for some reason, then a dynamically-linked version of the CUDA Runtime library is also available. (This was the default and only option provided in CUDA versions 5.0 and earlier.)
To use dynamic linking with the CUDA Runtime when using the nvcc from CUDA 5.5 or later to link the application, add the --cudart=shared flag to the link command line; otherwise the statically-linked CUDA Runtime library is used by default.
After the application is dynamically linked against the CUDA Runtime, this version of the runtime library should be bundled with the application. It can be copied into the same directory as the application executable or into a subdirectory of that installation path.
Other CUDA Libraries
Although the CUDA Runtime provides the option of static linking, some libraries included in the CUDA Toolkit are available only in dynamically-linked form. As with the dynamically-linked version of the CUDA Runtime library, these libraries should be bundled with the application executable when distributing that application.
16.4.1. CUDA Toolkit Library Redistribution
The CUDA Toolkit's End-User License Agreement (EULA) allows for redistribution of many of the CUDA libraries under certain terms and conditions. This allows applications that depend on these libraries to redistribute the exact versions of the libraries against which they were built and tested, thereby avoiding any trouble for end users who might have a different version of the CUDA Toolkit (or perhaps none at all) installed on their machines. Please refer to the EULA for details.
Note
This does not apply to the NVIDIA Driver; the end user must still download and install an NVIDIA Driver appropriate to their GPU(s) and operating system.
16.4.1.1. Which Files to Redistribute
When redistributing the dynamically-linked versions of one or more CUDA libraries, it is important to identify the exact files that need to be redistributed. The following examples use the cuBLAS library from CUDA Toolkit 5.5 as an illustration:
Linux
In a shared library on Linux, there is a string field called the SONAME that indicates the binary compatibility level of the library. The SONAME of the library against which the application was built must match the filename of the library that is redistributed with the application.
For example, in the standard CUDA Toolkit installation, the files libcublas.so and libcublas.so.5.5 are both symlinks pointing to a specific build of cuBLAS, which is named like libcublas.so.5.5.x, where x is the build number (e.g., libcublas.so.5.5.17). However, the SONAME of this library is given as "libcublas.so.5.5":
$ objdump -p /usr/local/cuda/lib64/libcublas.so | grep SONAME
SONAME libcublas.so.5.5
Because of this, even if -lcublas (with no version number specified) is used when linking the application, the SONAME found at link time implies that "libcublas.so.5.5" is the name of the file that the dynamic loader will look for when loading the application and therefore must be the name of the file (or a symlink to the same) that is redistributed with the application.
The ldd tool is useful for identifying the exact filenames of the libraries that the application expects to find at runtime as well as the path, if any, of the copy of that library that the dynamic loader would select when loading the application given the current library search path:
$ ldd a.out | grep libcublas
libcublas.so.5.5 => /usr/local/cuda/lib64/libcublas.so.5.5
Mac
In a shared library on Mac OS X, there is a field called the install name that indicates the expected installation path and filename of the library; the CUDA libraries also use this filename to indicate binary compatibility. The value of this field is propagated into an application built against the library and is used to locate the library of the correct version at runtime.
For example, if the install name of the cuBLAS library is given as @rpath/libcublas.5.5.dylib, then the library is version 5.5 and the copy of this library redistributed with the application must be named libcublas.5.5.dylib, even though only -lcublas (with no version number specified) is used at link time. Furthermore, this file should be installed into the @rpath of the application; see Where to Install Redistributed CUDA Libraries.
To view a library's install name, use the otool -L command:
$ otool -L a.out
a.out:
@rpath/libcublas.5.5.dylib (...)
Windows
The binary compatibility version of the CUDA libraries on Windows is indicated as part of the filename.
For example, a 64-bit application linked to cuBLAS 5.5 will look for cublas64_55.dll at runtime, so this is the file that should be redistributed with that application, even though cublas.lib is the file that the application is linked against. For 32-bit applications, the file would be cublas32_55.dll.
To verify the exact DLL filename that the application expects to find at runtime, use the dumpbin tool from the Visual Studio command prompt:
$ dumpbin /IMPORTS a.exe
Microsoft (R) COFF/PE Dumper Version 10.00.40219.01
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file a.exe
File Type: EXECUTABLE IMAGE
Section contains the following imports:
...
cublas64_55.dll
...
16.4.1.2. Where to Install Redistributed CUDA Libraries
Once the correct library files are identified for redistribution, they must be configured for installation into a location where the application will be able to find them.
On Windows, if the CUDA Runtime or other dynamically-linked CUDA Toolkit library is placed in the same directory as the executable, Windows will locate it automatically. On Linux and Mac, the -rpath linker option should be used to instruct the executable to search its local path for these libraries before searching the system paths:
Linux/Mac
nvcc -I $(CUDA_HOME)/include
-Xlinker "-rpath '$ORIGIN'" --cudart=shared
-o myprogram myprogram.cu
Windows
nvcc.exe -ccbin "C:\vs2008\VC\bin"
-Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT" --cudart=shared
-o "Release\myprogram.exe" "myprogram.cu"
Note
It may be necessary to adjust the value of -ccbin to reflect the location of your Visual Studio installation.
To specify an alternate path where the libraries will be distributed, use linker options similar to those below:
Linux/Mac
nvcc -I $(CUDA_HOME)/include
-Xlinker "-rpath '$ORIGIN/lib'" --cudart=shared
-o myprogram myprogram.cu
Windows
nvcc.exe -ccbin "C:\vs2008\VC\bin"
-Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT /DELAY" --cudart=shared
-o "Release\myprogram.exe" "myprogram.cu"
For Linux and Mac, the -rpath option is used as before. For Windows, the /DELAY option is used; this requires that the application call SetDllDirectory() before the first call to any CUDA API function in order to specify the directory containing the CUDA DLLs.
Note
For Windows 8, SetDefaultDLLDirectories() and AddDllDirectory() should be used instead of SetDllDirectory(). Please see the MSDN documentation for these routines for more information.
17. Deployment Infrastructure Tools
17.1. Nvidia-SMI
The NVIDIA System Management Interface (nvidia-smi) is a command line utility that aids in the management and monitoring of NVIDIA GPU devices. This utility allows administrators to query GPU device state and, with the appropriate privileges, permits administrators to modify GPU device state. nvidia-smi is targeted at Tesla and certain Quadro GPUs, though limited support is also available on other NVIDIA GPUs. nvidia-smi ships with NVIDIA GPU display drivers on Linux, and with 64-bit Windows Server 2008 R2 and Windows 7. nvidia-smi can output queried information as XML or as human-readable plain text either to standard output or to a file. See the nvidia-smi documentation for details. Please note that new versions of nvidia-smi are not guaranteed to be backward-compatible with previous versions.
17.1.1. Queryable stateï
Both correctable single-bit and detectable double-bit errors are reported. Error counts are provided for both the current boot cycle and the lifetime of the GPU.
Current utilization rates are reported for both the compute resources of the GPU and the memory interface.
The list of active processes running on the GPU is reported, along with the corresponding process name/ID and allocated GPU memory.
Max and current clock rates are reported for several important clock domains, as well as the current GPU performance state (pstate).
The current GPU core temperature is reported, along with fan speeds for products with active cooling.
The current board power draw and power limits are reported for products that report these measurements.
Various dynamic and static information is reported, including board serial numbers, PCI device IDs, VBIOS/Inforom version numbers and product names.
17.1.2. Modifiable state
Enable and disable ECC reporting.
Clear single-bit and double-bit ECC error counts.
Indicate whether compute processes can run on the GPU and whether they run exclusively or concurrently with other compute processes.
Indicate whether the NVIDIA driver stays loaded when no applications are connected to the GPU. It is best to enable this option in most circumstances.
Reinitialize the GPU hardware and software state via a secondary bus reset.
17.2. NVML
The NVIDIA Management Library (NVML) is a C-based interface that provides direct access to the queries and commands exposed via nvidia-smi, intended as a platform for building 3rd-party system management applications. The NVML API is shipped with the CUDA Toolkit (since version 8.0) and is also available standalone on the NVIDIA developer website as part of the GPU Deployment Kit through a single header file accompanied by PDF documentation, stub libraries, and sample applications; see https://developer.nvidia.com/gpu-deployment-kit. Each new version of NVML is backward-compatible.
An additional set of Perl and Python bindings are provided for the NVML API. These bindings expose the same features as the C-based interface and also provide backwards compatibility. The Perl bindings are provided via CPAN and the Python bindings via PyPI.
All of these products (nvidia-smi, NVML, and the NVML language bindings) are updated with each new CUDA release and provide roughly the same functionality.
See https://developer.nvidia.com/nvidia-management-library-nvml for additional information.
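As an illustration of the kind of state NVML exposes, below is a minimal sketch using the Python bindings (the pynvml package from PyPI); the exact symbols can differ between binding versions, so treat this as illustrative rather than authoritative.

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Query a few of the fields also reported by nvidia-smi
name = pynvml.nvmlDeviceGetName(handle)
util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # .gpu and .memory utilization in percent
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # .total, .used, .free in bytes
temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)

print(f"{name}: util {util.gpu}%, memory {mem.used}/{mem.total} bytes, temp {temp} C")
pynvml.nvmlShutdown()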
17.3. Cluster Management Tools
Managing your GPU cluster will help achieve maximum GPU utilization and help you and your users extract the best possible performance. Many of the industry's most popular cluster management tools support CUDA GPUs via NVML. For a listing of some of these tools, see https://developer.nvidia.com/cluster-management.
17.4. Compiler JIT Cache Management Tools
Any PTX device code loaded by an application at runtime is compiled further to binary code by the device driver. This is called just-in-time compilation (JIT). Just-in-time compilation increases application load time but allows applications to benefit from the latest compiler improvements. It is also the only way for applications to run on devices that did not exist at the time the application was compiled.
When JIT compilation of PTX device code is used, the NVIDIA driver caches the resulting binary code on disk. Some aspects of this behavior such as cache location and maximum cache size can be controlled via the use of environment variables; see Just in Time Compilation of the CUDA C++ Programming Guide.
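As a small illustration, the cache location and size limit can be adjusted through environment variables before the CUDA driver is initialized; the variable names below come from the CUDA documentation, while the values shown are only examples.

import os

# Must be set before any CUDA library initializes the driver in this process.
os.environ["CUDA_CACHE_PATH"] = "/var/tmp/my_cuda_cache"  # where JIT-compiled binaries are cached
os.environ["CUDA_CACHE_MAXSIZE"] = str(4 * 1024**3)       # raise the cache size limit to 4 GiB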
17.5. CUDA_VISIBLE_DEVICES
It is possible to rearrange the collection of installed CUDA devices that will be visible to and enumerated by a CUDA application prior to the start of that application by way of the CUDA_VISIBLE_DEVICES environment variable.
Devices to be made visible to the application should be included as a comma-separated list in terms of the system-wide list of enumerable devices. For example, to use only devices 0 and 2 from the system-wide list of devices, set CUDA_VISIBLE_DEVICES=0,2 before launching the application. The application will then enumerate these devices as device 0 and device 1, respectively.
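For example, a PyTorch application (used here purely as an illustration; any CUDA application behaves the same way) restricted to devices 0 and 2 would see them re-enumerated as devices 0 and 1:

import os

# Must be set before the CUDA runtime is initialized, so do it before importing torch
# (or export CUDA_VISIBLE_DEVICES=0,2 in the shell instead).
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

import torch

print(torch.cuda.device_count())      # 2
print(torch.cuda.get_device_name(0))  # physical device 0
print(torch.cuda.get_device_name(1))  # physical device 2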
18. Recommendations and Best Practices
This chapter contains a summary of the recommendations for optimization that are explained in this document.
18.1. Overall Performance Optimization Strategies
Performance optimization revolves around three basic strategies:
Maximizing parallel execution
Optimizing memory usage to achieve maximum memory bandwidth
Optimizing instruction usage to achieve maximum instruction throughput
Maximizing parallel execution starts with structuring the algorithm in a way that exposes as much parallelism as possible. Once the parallelism of the algorithm has been exposed, it needs to be mapped to the hardware as efficiently as possible. This is done by carefully choosing the execution configuration of each kernel launch. The application should also maximize parallel execution at a higher level by explicitly exposing concurrent execution on the device through streams, as well as maximizing concurrent execution between the host and the device.
Optimizing memory usage starts with minimizing data transfers between the host and the device because those transfers have much lower bandwidth than internal device data transfers. Kernel access to global memory also should be minimized by maximizing the use of shared memory on the device. Sometimes, the best optimization might even be to avoid any data transfer in the first place by simply recomputing the data whenever it is needed.
The effective bandwidth can vary by an order of magnitude depending on the access pattern for each type of memory. The next step in optimizing memory usage is therefore to organize memory accesses according to the optimal memory access patterns. This optimization is especially important for global memory accesses, because latency of access costs hundreds of clock cycles. Shared memory accesses, in counterpoint, are usually worth optimizing only when there exists a high degree of bank conflicts.
As for optimizing instruction usage, the use of arithmetic instructions that have low throughput should be avoided. This suggests trading precision for speed when it does not affect the end result, such as using intrinsics instead of regular functions or single precision instead of double precision. Finally, particular attention must be paid to control flow instructions due to the SIMT (single instruction multiple thread) nature of the device.
19. nvcc Compiler Switches
19.1. nvcc
The NVIDIA nvcc compiler driver converts .cu files into C++ for the host system and CUDA assembly or binary instructions for the device. It supports a number of command-line parameters, of which the following are especially useful for optimization and related best practices:
-maxrregcount=N specifies the maximum number of registers kernels can use at a per-file level. See Register Pressure. (See also the __launch_bounds__ qualifier discussed in Execution Configuration of the CUDA C++ Programming Guide to control the number of registers used on a per-kernel basis.)
--ptxas-options=-v or -Xptxas=-v lists per-kernel register, shared, and constant memory usage.
-ftz=true (denormalized numbers are flushed to zero)
-prec-div=false (less precise division)
-prec-sqrt=false (less precise square root)
-use_fast_math compiler option of nvcc coerces every functionName() call to the equivalent __functionName() call. This makes the code run faster at the cost of diminished precision and accuracy. See Math Libraries.
20. Notices
20.1. Notice
This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation ("NVIDIA") makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer ("Terms of Sale"). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.
NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer's own risk.
NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer's sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer's product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA's aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.
20.2. OpenCL
OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.
20.3. Trademarks
NVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.
|
The AI CUDA Engineer: Agentic CUDA Kernel Discovery, Optimization and Composition
Note: Updated on February 21, 2025.
At Sakana AI, we believe the path to develop much stronger AI systems is to automate the development of AI using AI. We aim to develop AI systems that can create even more capable and efficient AI systems.
In the past year, we introduced an AI system that can automate the creation of new AI foundation models, at a fraction of the cost. We showed that LLMs can invent more efficient methods to train LLMs. Recently, we proposed the first comprehensive agentic framework for fully automating the entire AI research process in The AI Scientist. This led us to the question: If AI can be used to conduct AI research, can we use AI to research ways to make AI run faster?
Introduction
Just like the human brain, modern AI systems also rely heavily on parallel processing, enabled by hardware accelerators such as GPUs. But unlike the human brain, which has evolved (biologically and culturally) to operate efficiently under resource constraints, recent advances in AI foundation models have led to large-scale deployment and ever-growing inference time and energy demand, leading to exponentially increasing resource requirements to train and deploy AI models.
We believe that fundamentally, modern AI systems can and should be as efficient as the human brain, and that the best path to achieve this efficiency is to use AI to make AI more efficient! Inspired by our earlier work on The AI Scientist, we are proud to announce The AI CUDA Engineer, the first comprehensive agentic framework for fully automatic CUDA kernel discovery and optimization.
CUDA is a low-level software layer that gives direct access to the NVIDIA GPU’s hardware instruction set for parallel computation. CUDA kernels are functions written in the CUDA language that run on GPUs. By writing instructions directly at the CUDA kernel level, we can achieve much higher performance for AI algorithms. However, working with CUDA requires quite a bit of GPU knowledge, and in practice, most machine learning algorithms are written in a higher level abstraction layer such as PyTorch or JAX.
The AI CUDA Engineer is an agentic framework that leverages frontier LLMs with the goal of automating the conversion of standard PyTorch code into highly optimized CUDA kernels. Through the use of evolutionary optimization, leveraging concepts from evolutionary computation such as ‘crossover’ operations and an ‘innovation archive’ to discover promising ‘stepping stone’ kernels, our framework not only automates the process of converting PyTorch modules to CUDA kernels, but also produces highly optimized kernels that often achieve significantly faster runtimes.
We believe this technology can enable speedups that will accelerate both the training and running (inference) of foundation models like LLMs or other generative AI models, eventually making AI models run much faster on NVIDIA hardware.
The AI CUDA Engineer is able to generate CUDA Kernels with speedups of 10—100x over common PyTorch operations. Our framework is also able to produce highly optimized CUDA Kernels that are much faster than existing CUDA Kernels that are already commonly used in production (up to 5x speedups).
Stages 1 and 2 (Conversion and Translation): The AI CUDA Engineer first translates PyTorch code into functioning CUDA kernels. We already observe initial runtime improvements without explicitly targeting these.
Stage 3 (Evolutionary Optimization): Inspired by biological evolution, our framework utilizes evolutionary optimization (‘survival of the fittest’) to ensure only the best CUDA kernels are produced. Furthermore, we introduce a novel kernel crossover prompting strategy to combine multiple optimized kernels in a complementary fashion.
Stage 4 (Innovation Archive): Just as cultural evolution shaped human intelligence with know-how passed down from our ancestors through millennia of civilization, The AI CUDA Engineer also takes advantage of what it learned from its past innovations and discoveries, building an Innovation Archive from the ancestry of known high-performing CUDA kernels, and using these previous stepping stones to achieve further translation and performance gains.
Kernel Runtime Speedups Discovered by the AI CUDA Engineer
The AI CUDA Engineer robustly discovered CUDA kernels used for common machine learning operations, with speedups of 10-100x over native and compiled kernels in PyTorch. Our approach is also able to convert entire machine learning architectures into optimized CUDA kernels. Here we highlight a couple of significant speedup discoveries made completely autonomously:
Our approach finds more efficient CUDA kernels for operations ranging from fundamental matrix multiplications to common deep learning operations, and as of writing, our discovered CUDA kernels achieve state-of-the-art performance on KernelBench.
Technical Report and Dataset Release
We believe that this is just the beginning of the great optimization of AI!
We’re excited to release our new paper, The AI CUDA Engineer: Agentic CUDA Kernel Discovery and Optimization.
In our report:
We introduce an end-to-end agentic workflow capable of translating PyTorch code to working CUDA kernels, optimizing CUDA runtime performance, and automatically fusing multiple kernels.
Furthermore, we construct various techniques for enhancing the consistency and performance of the pipeline including LLM ensembling, an iterative profiling feedback loop, local kernel code-editing, and crossover kernel optimization.
We show that The AI CUDA Engineer robustly translates more than 230 out of 250 considered torch operations and achieves strong runtime performance improvements for the majority of kernels. Furthermore, our approach is capable of efficiently fusing various kernel operations and can outperform several existing accelerated operations.
We release a dataset of over 17,000 verified kernels for operations covering a wide range of PyTorch operations.
We highlight some notable examples of discovered CUDA kernels that achieved significant speedups on key computation operations in AI models.
Highlighted AI CUDA Engineer-Discovered Kernels
Leveraging our novel LLM-driven evolutionary kernel optimization procedure, we robustly obtain speedups for a diverse range of tasks. More specifically, we outperform PyTorch native runtimes on 81% of the 229 considered tasks. Furthermore, 20% of all discovered CUDA kernels are at least twice as fast as their PyTorch implementations.
Below we show a subset of kernels. They highlight the diversity of different operations for which the AI CUDA Engineer can successfully be deployed. This includes normalization methods, loss functions, special matrix multiplications and even entire neural network architectures:
The AI CUDA Engineer Archive: A Dataset of 17,000+ Verified CUDA Kernels
A Text Embedding visualization of the AI CUDA Engineer Archive shows that the discovered kernels group into tasks (e.g. MatMul, Pooling, Convolution) and implementation strategies (unrolling, fusing, vectorization). The Archive is openly accessible and can be used for downstream fine-tuning of LLMs.
Along with this paper, we release The AI CUDA Engineer archive, a dataset consisting of more than 30,000 CUDA kernels generated by The AI CUDA Engineer. It is released under the CC-By-4.0 license and is accessible via HuggingFace. The dataset includes a torch reference implementation, torch, NCU and Clang-tidy profiling data, multiple kernels per task, error messages and speedup scores against torch native and compile runtimes.
Summary statistics of the AI CUDA Engineer Archive consisting of more than 30,000 kernels and more than 17,000 correct verified implementations. Approximately, 50% of all kernels improve over the torch native runtime.
We envision that this dataset can enable post-training of open-source models to perform better CUDA-enabling modules. This includes offline Reinforcement Learning, preference optimization, and standard supervised fine-tuning.
Explore 17,000+ Kernels in The AI CUDA Engineer Archive
We also published an interactive website for interactively inspecting more than 17,000 verified kernels and their profiles including torch, NCU and Clang-Tidy data. You can access our interactive website here.
The website allows you to explore various high-performing kernels across 230 tasks. It comes with a custom leaderboard that can be used to inspect related kernels across experiments and LLMs.
Furthermore, you can visualize the kernel, retrieve related kernels, download code to verify the implementation and speedup as well as view the obtained profiling data. Finally, you can take an in-depth look at the optimization experiment.
Detailed view of an optimized kernel including profiling data, downloading of evaluation scripts, related kernels and discovery experiment details.
Limitations and Bloopers
Combining evolutionary optimization with LLMs is powerful but can also find ways to trick the verification sandbox. We are fortunate to have readers, like @main_horse, test our CUDA kernels and identify that the system had found a way to “cheat”. For example, the system had found a memory exploit in the evaluation code which, in a number of cases, allowed it to avoid checking for correctness.
We have since made the evaluation harness more robust to eliminate this loophole and have updated our results.
Furthermore, we find the system could also find other novel exploits in the benchmark’s tasks. We are in the process of revising our paper and updating results, with further improvements to the evaluation and runtime profiling harness, to reflect and discuss the effects and mitigation of LLM reward hacking for CUDA kernel optimization.
In addition, we observed limitations in frontier LLMs’ ability to effectively utilize TensorCore WMMA capabilities. While LLMs could generate basic CUDA code, they often struggled to implement the specialized matrix multiplication acceleration features offered by modern GPU architectures. This suggests a potential gap in the training data or the models’ understanding of advanced hardware-specific optimizations.
As frontier LLMs, especially those with advanced coding reasoning capabilities become more capable, we expect code-optimization systems, such as ours, will continue to face these challenges. We envision a future where it is the role of human engineers to work with code optimization systems as tools, to produce the best and most reliable results.
Future Implications of The AI CUDA Engineer
The AI revolution is just getting started, and we are just at the very beginning of the transformation cycle. It is our view that today’s LLMs are our generation’s “Mainframe Computers”. We are still in the very early stages of AI, and it is inevitable, due to market competition and global innovation (especially from those innovating with resource constraints), that this technology will become a million times more efficient.
Currently, our AI systems consume immense resources, and if the technology continues to scale without thought for efficiency and energy consumption, the result will not be sustainable. There is no fundamental reason why our AI systems can’t be as efficient (or even more efficient) than human intelligence. We believe that the best path to achieve this greater efficiency is to use AI to make AI more efficient.
This is the direction that Sakana AI is pursuing, and this project is an important step towards making AI a million times faster. Just like the evolution of early clunky mainframe computers to modern computing, how we use AI today will look very different in a few years, compared to today’s ‘clunky’, inefficient LLMs.
Sakana AI
Want to make the AI that improves AI? Please see our Careers page for more information.
|
Optimizing with Composable Kernel
The AMD ROCm Composable Kernel (CK) library provides a programming model for writing performance-critical kernels for machine learning workloads. It generates a general-purpose kernel during the compilation phase through a C++ template, enabling developers to achieve operation fusions on different data precisions.
This article gives a high-level overview of the CK General Matrix Multiplication (GEMM) kernel based on the design example 03_gemm_bias_relu. It also outlines the steps to construct the kernel and run it. Moreover, the article provides a detailed implementation of running SmoothQuant quantized INT8 models on AMD Instinct MI300X accelerators using CK.
High-level overview: a CK GEMM instance
GEMM is a fundamental block in linear algebra, machine learning, and deep neural networks. It is defined as the operation:
E = α × (A × B) + β × D, with A and B as matrix inputs, α and β as scalar inputs, and D as a pre-existing matrix.
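As a plain-Python reference of the same operation (an illustration only, not CK code):

import torch

def gemm_reference(A, B, D, alpha, beta):
    # E = alpha * (A x B) + beta * D
    return alpha * (A @ B) + beta * D

M, K, N = 64, 128, 32
A, B, D = torch.randn(M, K), torch.randn(K, N), torch.randn(M, N)
E = gemm_reference(A, B, D, alpha=1.0, beta=1.0)  # shape (M, N)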
Take the commonly used linear transformation in a fully connected layer as an example. These terms correspond to input activation (A), weight (B), bias (D), and output (E), respectively. The example employs a DeviceGemmMultipleD_Xdl_CShuffle struct from CK library as the fundamental instance to explore the compute capability of AMD Instinct accelerators for the computation of GEMM. The implementation of the instance contains two phases:
Template parameter definition
Instantiating and running the templated kernel
Template parameter definition
The template parameters of the instance are grouped into four parameter types:
Parameters for determining matrix data precision
Parameters for determining matrix data layout
Parameters for determining extra operations on matrix elements
Performance-oriented tunable parameters
The template parameters of the selected GEMM kernel are classified into four groups. These template parameter groups should be defined properly before running the instance.
Matrix data precision
A, B, D, and E are defined as half-precision floating-point datatypes. The multiply-add results of matrix A and B are added with a pre-existing matrix D (half-precision), and the final GEMM results are also half-precision floating-points.
using ADataType = F16;
using BDataType = F16;
using AccDataType = F32;
using CShuffleDataType = F16;
using DDataType = F16;
using EDataType = F16;
ADataType and BDataType denote the data precision of the A and B input matrices. AccDataType determines the data precision used for representing the multiply-add results of A and B elements. These results are stored in a CShuffle module in the local data share (LDS), a low-latency, high-bandwidth, explicitly addressed memory used for synchronization within a workgroup, for later use.
CShuffleDataType denotes the data precision of CShuffle in LDS.
DDataType denotes the data precision of the pre-existing D matrix stored in GPU global memory, while EDataType denotes the data precision of the final output. The CK kernel supports a fusion strategy so that CShuffle can be added with a single pre-existing matrix in the same GPU kernel for better performance.
Matrix data layout
using ALayout = Row;
using BLayout = Col;
using DLayout = Row;
using ELayout = Row;
Following the convention of various linear algebra libraries, CK assumes that the input matrix A is an M x K matrix, meaning the matrix has M rows and K columns. Similarly, matrix B is assumed to be K x N, meaning it has K rows and N columns. In computing, row-major order and column-major order are commonly used ways to store matrices in linear storage. Once the storage ordering of these matrices is known, an optimized memory access pattern can be applied to achieve better performance.
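To make the distinction concrete, the sketch below (an illustration only) shows how an element (i, j) of an M x N matrix maps to an offset in linear storage under each ordering:

def row_major_offset(i, j, M, N):
    return i * N + j  # consecutive elements of a row are adjacent in memory (CK "Row")

def col_major_offset(i, j, M, N):
    return j * M + i  # consecutive elements of a column are adjacent in memory (CK "Col")

# Element (2, 3) of a 4 x 5 matrix
assert row_major_offset(2, 3, 4, 5) == 13
assert col_major_offset(2, 3, 4, 5) == 14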
Matrix element operation
using AElementOp = PassThrough;
using BElementOp = PassThrough;
using CDEElementOp = AddRelu;
CK supports pre-processing of the matrices before calculating GEMM, that is, C = AElementOp(A) * BElementOp(B). It similarly supports post-processing of the GEMM results, that is, E = CDEElementOp(C, D).
AElementOp and BElementOp determine the operation applied to matrix A and B separately before GEMM, which is achieved by binding the operation with a C++ struct function.
The above PassThrough denotes that no operations are performed on the target matrix. CDEElementOp determines the operations applied to the CShuffle output and matrix D. The following binding struct AddRelu shows an example of adding the CShuffle output and matrix D, and applying a ReLU (Rectified Linear Unit) operation to the addition result. It then passes the results to matrix E.
struct AddRelu
{
__host__ __device__ void operator()(ck::half_t& e, const ck::half_t& c, const ck::half_t& d) const
{
const ck::half_t x = c + d;
e = x > 0 ? x : 0;
}
};
Tunable parameters
The CK instance includes a series of tunable template parameters to control the parallel granularity of the workload to achieve load balancing on different hardware platforms.
These parameters include Block Size, M/N/K Per Block, M/N per XDL, AK1, BK1, etc.
Block Size determines the number of threads in the thread block.
M/N/K Per Block determines the size of tile that each thread block is responsible for calculating.
M/N Per XDL refers to M/N size for Instinct accelerator Matrix Fused Multiply Add (MFMA) instructions operating on a per-wavefront basis.
A/B K1 is related to the data type. It can be any value ranging from 1 to K Per Block. To achieve optimal load/store performance, 128 bits per load is suggested. In addition, the A/B loading parameters must be changed accordingly to match the A/B K1 value; otherwise, compilation errors will result.
Conditions for achieving computational load balancing on different hardware platforms can vary.
Instantiating and running the templated kernel
After determining the template parameters, we instantiate the kernel with actual arguments. Do one of the following:
Use GetDeviceBuffer from CK’s custom struct DeviceMem to pass the element values of the matrices that need to be calculated.
Allocate device buffer via hipMalloc. Ensure the device buffer size can fit the matrix size.
Pass matrix elements through the data_ptr method in the Tensor object if the matrix to be calculated is of Tensor type.
The row, column, and stride information of the input matrices is also passed to the instance. For batched GEMM, you must pass in additional batch count and batch stride values. The extra operations for pre- and post-processing are also passed as actual arguments; for example, α and β for GEMM scaling operations. Afterward, the instantiated kernel is launched by the invoker, as illustrated in Figure 3.
Templated kernel launching consists of kernel instantiation, making arguments by passing in actual application parameters, creating an invoker, and running the instance through the invoker.
Developing fused INT8 kernels for SmoothQuant models
SmoothQuant (SQ) is a quantization algorithm that enables INT8 quantization of both weights and activations for all the matrix multiplications in LLMs. The required GPU kernel functionalities used to accelerate the inference of SQ models on Instinct accelerators are shown in the following table.
Functionality description | Corresponding wrapper
E = α × (A × B) + β × D, where A, B, D, and E are INT8 2-D tensors | E = Linear_ABDE_I8(A, B, D, α, β)
E = ReLU(α × (A × B) + β × D), where A, B, D, and E are INT8 2-D tensors | E = Linear_ReLU_ABDE_I8(A, B, D, α, β)
E = α × (A × B) + β × D, where A and B are INT8 2-D tensors, D and E are FP32 2-D tensors | E = Linear_AB_I8_DE_F32(A, B, D, α, β)
E = α × (A × B), where A, B, and E are INT8 3-D tensors | E = BMM_ABE_I8(A, B, α)
E = α × (A × B), where A and B are INT8 3-D tensors, E is an FP32 3-D tensor | E = BMM_AB_I8_E_F32(A, B, α)
Operation flow analysis
The following section discusses the analysis of the operation flow of Linear_ReLU_ABDE_I8. The rest of the wrappers in Table 1 can be analyzed similarly.
The first operation in the process is to perform the multiplication of input matrices A and B. The resulting matrix C is then scaled with α to obtain T1. At the same time, the process performs a scaling operation on D elements to obtain T2. Afterward, the process performs matrix addition between T1 and T2, element activation calculation using ReLU, and element rounding sequentially. The operations to generate E1, E2, and E are encapsulated and completed by a user-defined template function in CK (given in the next sub-section). This template function is integrated into the fundamental instance directly during the compilation phase so that all these steps can be fused in a single GPU kernel.
Operation flow.
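To make the flow concrete, here is a hedged PyTorch reference of the same sequence of steps (an illustration only; the actual fusion happens inside the single CK-generated GPU kernel described below):

import torch

def linear_relu_abde_i8_reference(A, B, D, alpha, beta):
    # A: (M, K) int8, B: (K, N) int8, D: (N,) int8 bias broadcast over rows
    C = A.to(torch.int32) @ B.to(torch.int32)   # multiply-add in a wider type (integer matmul on CPU)
    T1 = alpha * C.to(torch.float32)            # scale the AxB result with alpha
    T2 = beta * D.to(torch.float32)             # scale D with beta
    E = torch.relu(T1 + T2)                     # elementwise addition followed by ReLU
    E = torch.round(E).clamp(max=127)           # round and saturate to the INT8 range
    return E.to(torch.int8)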
The CK library contains many fundamental instances that implement different functions. First, familiarize yourself with the names of the various CK instances and determine whether they meet the target functional requirements.
Second, consider whether the format of input data meets your actual calculation needs. For SQ models, the 8-bit integer data format (INT8) is applied for matrix calculations.
Third, consider the platform for implementing CK instances. The instances suffixed with xdl only run on AMD Instinct accelerators after being compiled and cannot run on Radeon-series GPUs. This is due to the underlying device-specific instruction sets for implementing these basic instances.
Here, we use DeviceBatchedGemmMultiD_Xdl as the fundamental instance to implement the functionalities in the previous table.
Use the ‘DeviceBatchedGemmMultiD_Xdl’ instance as a root.
The DeviceBatchedGemmMultiD_Xdl instance realizes the batched GEMM BMM_ABE_I8 and BMM_AB_I8_E_F32 kernels directly by using the proper input and output data precision types.
Based on the two batched GEMM kernels, the GEMM kernels Linear_ABDE_I8 and Linear_AB_I8_DE_F32 can be implemented by expanding their input 2-D tensors to 3-D tensors. The 3-D output tensors produced by the root instance are then squeezed back to 2-D output tensors before being returned.
For example, unsqueeze A (M, K) to A (1, M, K) before assigning it to the root instance, and squeeze E (1, M, N) to (M, N) after the calculations of the root instance return. Linear_ReLU_ABDE_I8 is implemented by adding a ReLU operation on the output of Linear_ABDE_I8.
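The shape bookkeeping looks like this in plain PyTorch (an illustration only, not CK code):

import torch

M, K, N = 64, 128, 256
A = torch.randn(M, K)
A3 = A.unsqueeze(0)         # (1, M, K): add a batch dimension for the batched root instance
E3 = torch.randn(1, M, N)   # (1, M, N): batched output produced by the root instance
E = E3.squeeze(0)           # (M, N): squeeze back to an ordinary GEMM result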
Developing the complete function
Since the inference of SQ quantized models relies on the PyTorch and Transformers libraries, and a tensor type is used to represent matrices and vectors in torch, the C++ data types in CK need to be replaced with the torch::Tensor type. The data types of the input and output matrices should be a tensor type.
In GEMM, the A and B inputs are two-dimensional matrices, and the required input matrices of the selected fundamental CK instance are three-dimensional matrices. Therefore, we must convert the input 2-D tensors to 3-D tensors, by using tensor’s unsqueeze() method before passing these matrices to the instance. For batched GEMM in the preceding table, ignore this step.
// Function input and output
torch::Tensor linear_relu_abde_i8(
torch::Tensor A_,
torch::Tensor B_,
torch::Tensor D_,
float alpha,
float beta)
{
// Convert torch::Tensor A_ (M, K) to torch::Tensor A (1, M, K)
auto A = A_.unsqueeze(0);
// Convert torch::Tensor B_ (K, N) to torch::Tensor B (1, K, N)
auto B = B_.unsqueeze(0);
...
As shown in the following code block, we obtain the M, N, and K values from the input tensor sizes. This size information is used to reshape the input vector D and allocate the storage space of tensor E. Stride reflects the number of elements between consecutive rows (or batches) in linear memory, and the strides are passed as important parameters to the fundamental instance for GPU kernel use.
// Return the batch count from the size of dimension 0
int batch_count = A.size(0);
// Return the M, N, K from the size of dimension 1 & 2
int M = A.size(1);
int N = B.size(1);
int K = A.size(2);
// Initialize the stride size for A, B, D and E
int stride_A = K;
int stride_B = K;
int stride_D0 = N;
int stride_E = N;
// Initialize the stride size for batched A, B, D and E
long long int batch_stride_A = M * K;
long long int batch_stride_B = K * N;
long long int batch_stride_D0 = M * N;
long long int batch_stride_E = M * N;
// Convert the tensor of 2-D to 3-D
auto D = D_.view({1,-1}).repeat({M, 1});
// Allocate memory for E
auto E = torch::empty({batch_count, M, N},
torch::dtype(torch::kInt8).device(A.device()));
In the following code block, ADataType, BDataType and D0DataType are used to denote the data precision of the input tensors A, B and D, respectively. EDataType is used to denote the data precision of the output tensor E. These parameters are specified as the I8 data format (8-bit integer data format) to meet the kernel’s design requirements.
AccDataType determines the data precision used to represent the multiply-add results of A and B elements. Generally, a larger range data type is applied to store the multiply-add results of A and B to avoid result overflow; I32 is applied in this case. The CShuffleDataType I32 data type indicates that the multiply-add results continue to be stored in LDS as an I32 data format. All of this is implemented through the following code block.
// Data precision
using ADataType = I8;
using BDataType = I8;
using AccDataType = I32;
using CShuffleDataType = I32;
using D0DataType = I8;
using DsDataType = ck::Tuple<D0DataType>;
using EDataType = I8;
Following the convention of various linear algebra libraries, row-major and column-major orders are used to denote the ways of storing matrices in linear storage. The advantage of specifying matrix B as column major is that all the relevant matrix elements are stored continuously in GPU global memory when a row in A is multiplied by a column in B, which can help GPU achieve data consistency access to improve access performance.
// Specify tensor order
using ALayout = RowMajor;
using BLayout = ColumnMajor;
using D0Layout = RowMajor;
using DsLayout = ck::Tuple<D0Layout>;
using ELayout = RowMajor;
In CK, PassThrough is a struct denoting that no operation is applied to the tensor it binds to. To fuse the operations between E1, E2, and E introduced in the Operation flow analysis section, we define a custom C++ struct, ScaleScaleAddRelu, and bind it to CDEElementOp. It determines the operations applied to CShuffle (the A×B results), tensor D, α, and β.
// No operations bound to the elements of A and B
using AElementOp = PassThrough;
using BElementOp = PassThrough;
// Operations bound to the elements of C, D and E
using CDEElementOp = ScaleScaleAddRelu;
In the binding struct, operator() performs an addition operation between CShuffle and matrix D, a ReLU operation on the addition results, and a rounding operation on the output elements. It then returns the results to E.
struct ScaleScaleAddRelu {
template <>
__host__ __device__ constexpr void
operator()<I8, I32, I8>(I8& e, const I32& c, const I8& d) const
{
// Scale AxB result with alpha
const F32 c_scale = ck::type_convert<F32>(c) * alpha;
// Scale D with beta
const F32 d_scale = ck::type_convert<F32>(d) * beta;
// Perform addition operation
F32 temp = c_scale + d_scale;
// Perform RELU operation
temp = temp > 0 ? temp : 0;
// Perform rounding operation
temp = temp > 127 ? 127 : temp;
// Return to E
e = ck::type_convert<I8>(temp);
}
F32 alpha;
F32 beta;
};
The original input tensors need to be padded to meet GPU tile-based parallelism.
static constexpr auto GemmDefault = ck::tensor_operation::device::GemmSpecialization::MNKPadding;
The template parameters of the target fundamental instance are initialized with the above parameters and include default tunable parameters. For specific tuning methods, see Tunable parameters.
using DeviceOpInstance = ck::tensor_operation::device::DeviceBatchedGemmMultiD_Xdl<
// Tensor layout
ALayout, BLayout, DsLayout, ELayout,
// Tensor data type
ADataType, BDataType, AccDataType, CShuffleDataType, DsDataType, EDataType,
// Tensor operation
AElementOp, BElementOp, CDEElementOp,
// Padding strategy
GemmDefault,
// Tunable parameters
tunable parameters>;
Return the address of the first element of tensors:
auto A_ref = A.data_ptr<ADataType>();
auto B_ref = B.data_ptr<BDataType>();
auto D0_ref = D.data_ptr<D0DataType>();
auto E_ref = E.data_ptr<EDataType>();
The fundamental instance is then initialized and run with actual arguments:
auto device_op = DeviceOpInstance{};
auto invoker = device_op.MakeInvoker();
auto argument = device_op.MakeArgument(
A_ref, B_ref, {D0_ref}, E_ref,
M, N, K,
batch_count,
stride_A, stride_B, {stride_D0}, stride_E,
batch_stride_A, batch_stride_B, {batch_stride_D0}, batch_stride_E,
AElementOp{}, BElementOp{}, CDEElementOp{alpha, beta});
invoker.Run(argument, StreamConfig{nullptr, 0});
The output of the fundamental instance is a calculated batched matrix E (batch, M, N). Before the return, it needs to be converted to a 2-D matrix if a normal GEMM result is required.
// Convert (1, M, N) to (M, N)
return E.squeeze(0);
Binding to Python
Since these functions are written in C++ using torch::Tensor, you can use pybind11 to bind them and import them as Python modules. For this example, the necessary binding code for exposing the functions in the table spans only a few lines.
#include <torch/extension.h>
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m){
m.def("linear_ab_i8_de_f32", &linear_ab_i8_de_f32);
m.def("linear_relu_abde_i8", &linear_relu_abde_i8);
m.def("linear_abde_i8", &linear_abde_i8);
m.def("bmm_abe_i8", &bmm_abe_i8);
m.def("bmm_ab_i8_e_f32", &bmm_ab_i8_e_f32);
}
Build the C++ extension by writing a setup.py script that uses setuptools to compile the C++ code. A reference implementation of the setup.py script is as follows.
import os
from setuptools import setup, find_packages
from torch.utils import cpp_extension
from torch.utils.cpp_extension import BuildExtension
os.environ["CC"] = "hipcc"
os.environ["CXX"] = "hipcc"
sources = [
'torch_int/kernels/linear.cpp',
'torch_int/kernels/bmm.cpp',
'torch_int/kernels/pybind.cpp',
]
include_dirs = ['torch_int/kernels/include']
extra_link_args = ['libutility.a']
extra_compile_args = ['-O3','-DNDEBUG', '-std=c++17', '--offload-arch=gfx942', '-DCK_ENABLE_INT8', '-D__HIP_PLATFORM_AMD__=1']
setup(
name='torch_int',
ext_modules=[
cpp_extension.CUDAExtension(
name='torch_int.rocm',
sources=sources,
include_dirs=include_dirs,
extra_link_args=extra_link_args,
extra_compile_args=extra_compile_args
),
],
cmdclass={
'build_ext': BuildExtension.with_options(use_ninja=False)
},
packages=find_packages(
exclude=['notebook', 'scripts', 'tests']),
)
Run python setup.py install to build and install the extension. It should look something like Figure 6:
Compilation and installation of the INT8 kernels.
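Once installed, the bound functions can be called from Python roughly as follows (a hypothetical usage sketch; the module path, tensor shapes, and scaling factors are assumptions, not part of the original article):

import torch
import torch_int.rocm as ck_ops  # hypothetical import path for the extension built above

M, N, K = 128, 256, 512
A = torch.randint(-16, 16, (M, K), dtype=torch.int8, device="cuda")  # activations
B = torch.randint(-16, 16, (N, K), dtype=torch.int8, device="cuda")  # weights
D = torch.randint(-16, 16, (N,), dtype=torch.int8, device="cuda")    # per-output-channel bias

E = ck_ops.linear_relu_abde_i8(A, B, D, 0.01, 1.0)  # INT8 output of shape (M, N)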
INT8 model inference and performance
The implementation architecture of running SmoothQuant models on MI300X accelerators is illustrated in Figure 7, where (a) shows the decoder layer components of the target model, (b) shows the major implementation classes for the decoder layer components, and (c) denotes the underlying GPU kernels implemented by CK instances.
The implementation architecture of running SmoothQuant models on AMD MI300X accelerators.
For the target SQ quantized model, each decoder layer contains three major components: attention calculation, layer normalization, and linear transformation in fully connected layers. The corresponding implementation classes for these components are:
Int8OPTAttention
W8A8B8O8LinearReLU
W8A8BF32OF32Linear
These classes’ underlying implementation logic harnesses the functions in the previous table. Note that for this example, the LayerNormQ module is implemented by the torch native module.
Testing environment:
The hardware platform used for testing is equipped with 256 AMD EPYC 9534 CPU cores, 8 AMD Instinct MI300X accelerators, and 1.5 TB of memory. The testing was done in a publicly available Docker image from Docker Hub:
rocm/pytorch:rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2
The tested models are OPT-1.3B, 2.7B, 6.7B and 13B FP16 models and the corresponding SmoothQuant INT8 OPT models were obtained from Hugging Face.
Note that since the default values were used for the tunable parameters of the fundamental instance, the performance of the INT8 kernel is suboptimal.
Figure 8 shows the performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X accelerator. The GPU memory footprints of SmoothQuant-quantized models are significantly reduced. It also indicates the per-sample inference latency is significantly reduced for all SmoothQuant-quantized OPT models (illustrated in (b)). Notably, the performance of the CK instance-based INT8 kernel steadily improves with an increase in model size.
Performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X accelerator.
For accuracy comparisons between the original FP16 and INT8 models, the evaluation is done by using the first 1,000 samples from the LAMBADA dataset’s validation set. We employ the same Last Token Prediction Accuracy method introduced in SmoothQuant Real-INT8 Inference for PyTorch as our evaluation metric. The comparison results are shown in Table 2.
Models | Hugging Face FP16 model accuracy | SmoothQuant quantized INT8 model accuracy
opt-1.3B | 0.72 | 0.70
opt-2.7B | 0.76 | 0.75
opt-6.7B | 0.80 | 0.79
opt-13B | 0.79 | 0.77
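For reference, a hedged sketch of the Last Token Prediction Accuracy evaluation described above (the model and dataset identifiers below are placeholders; the actual evaluation follows the SmoothQuant reference implementation):

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").cuda().eval()
samples = load_dataset("lambada", split="validation[:1000]")  # placeholder dataset identifier

correct = 0
with torch.no_grad():
    for sample in samples:
        input_ids = tokenizer(sample["text"], return_tensors="pt").input_ids.cuda()
        logits = model(input_ids).logits
        # The logits at the second-to-last position predict the last token.
        pred = logits[0, -2, :].argmax()
        correct += int(pred.item() == input_ids[0, -1].item())

print("last-token accuracy:", correct / len(samples))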
Conclusion
CK provides a rich set of template parameters for generating flexible accelerated computing kernels for different application scenarios.
CK supports multiple instruction sets of AMD Instinct GPUs, operator fusion, and different data precisions. Its composability helps users quickly construct operators and verify their performance.
With CK, you can build more effective AI applications with higher flexibility and better performance on different AMD accelerator platforms.
|
Unleashing the Power of Triton: Mastering GPU Kernel Optimization in Python
Accelerating AI/ML Model Training with Custom Operators – Part 2
According to Greek mythology, Triton, a god of the sea, would calm or stir the sea waters by using his conch shell to control its tides and waves. In one story, in particular, Triton is depicted as having used his powers to guide the Argonauts through particularly dangerous sea waters. In this post, we similarly call upon Triton for navigation through complex journeys, although this time we refer to the Triton language and compiler for writing Deep Learning (DL) kernels and to our journeys through the world of AI/ML development.
This is a sequel to a previous post on the topic of accelerating AI/ML applications with custom operators in which we demonstrated the potential for performance optimization by developing custom CUDA kernels. One of our intentions was to emphasize the accessibility of custom kernel development and the opportunities it provides even for non-expert CUDA developers. However, there are challenges to CUDA development that may prove insurmountable for some. For one, while many modern-day AI/ML developers are well-versed in Python, they may not feel comfortable developing in C++. Furthermore, tuning a CUDA kernel to take full advantage of the GPU’s capabilities requires an intimate understanding of the underlying HW architecture and could take a non-trivial amount of work. This is particularly true if you want your kernel to run optimally on a variety of GPU architectures. Much of the complexity results from CUDA’s "thread-based" development model in which the developer is responsible for designing and optimizing all elements of the GPU kernel threads, including all details related to the use of GPU memory, thread-concurrency, TensorCore scheduling, and much more.
The Power of Triton
The Triton library aims to democratize and simplify GPU kernel development in two primary ways. First, it provides an API for building custom operators in Python (rather than C++). Second, it enables kernel development at the block level (rather than the thread level) thereby abstracting away and automating all issues related to optimizing performance within CUDA thread blocks. Rather than taking the laborious steps of programming the details of the thread invocation, including the intricacies related to memory management, scheduling of on-chip acceleration engines, thread-synchronization, etc., kernel developers can rely on Triton to do it all for them. One important byproduct of the high-level API abstraction of Triton’s programming model is that it reduces the burden of needing to tune the kernel for multiple different GPU types and architectures.
Of course, as is usually the case when up-leveling an API, the Triton programming model does have its disadvantages. Some kernels might benefit from the thread-level control enabled by CUDA (e.g., they might benefit from the conditional execution flow discussed in our previous post). Other kernels might require very specialized and delicate treatment to reach peak performance and may suffer from the automated result of the Triton compiler. But even in cases such as these, where the development of a CUDA kernel may ultimately be required, the ability to quickly and easily create a temporary Triton kernel could greatly facilitate development and boost productivity.
For more on the motivations behind Triton and on the details of its programming model, see the Triton announcement, the official Triton documentation, and the original Triton white-paper.
Disclaimers
Similar to our [previous post](https://chaimrand.medium.com/accelerating-ai-ml-model-training-with-custom-operators-163ef2a04b12), our intention is to provide a simple demonstration of the opportunity offered by Triton. Please do not view this post as a replacement for the official Triton documentation or its associated tutorials. We will use the same face-detection model as in our previous post as a basis for our demonstration and perform our experiments in the same Google Cloud environment – a g2-standard-16 VM (with a single L4 GPU) with a dedicated deep learning VM image and Pytorch 2.4.0. As before, we make no effort to optimize our examples and/or verify their robustness, durability, or accuracy. It should be noted that although we will perform our experiments on a PyTorch model and on an NVIDIA GPU, Triton kernel development is supported by additional frameworks and underlying HWs.
Triton as a Component of Torch Compilation
In previous posts (e.g., here) we demonstrated the use of PyTorch compilation and its potential impact on runtime performance. The default compiler used by [torch.compile](https://pytorch.org/docs/stable/generated/torch.compile.html) is TorchInductor, which relies heavily on Triton kernels for its GPU acceleration. Thus, it seems only appropriate that we begin our Triton exploration by assessing the automatic Triton-backed optimization afforded by torch.compile. The code block below includes the same forward pass of the face detection model we introduced in our previous post along with the compiled GIOU loss function. For the sake of brevity, we have omitted some of the supporting code. Please refer to our previous post for the full implementation.
def loss_with_padding(pred, targets):
mask = (targets[...,3] > 0).to(pred.dtype)
total_boxes = mask.sum()
loss = generalized_box_iou(targets, pred)
masked_loss = loss*mask
loss_sum = masked_loss.sum()
return loss_sum/torch.clamp(total_boxes, 1)
device = torch.device("cuda:0")
model = torch.compile(Net()).to(device).train()
loss_fn = torch.compile(loss_with_padding)
# forward portion of training loop wrapped with profiler object
with torch.profiler.profile(
schedule=torch.profiler.schedule(wait=5, warmup=5, active=10, repeat=1)
) as prof:
for step, data in enumerate(train_loader):
with torch.profiler.record_function('copy data'):
images, boxes = data_to_device(data, device)
torch.cuda.synchronize(device)
with torch.profiler.record_function('forward'):
with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
outputs = model(images)
torch.cuda.synchronize(device)
with torch.profiler.record_function('calc loss'):
loss = loss_fn(outputs, boxes)
torch.cuda.synchronize(device)
prof.step()
if step > 30:
break
# filter and print profiler results
event_list = prof.key_averages()
for i in range(len(event_list) - 1, -1, -1):
if event_list[i].key not in ['forward', 'calc loss', 'copy data']:
del event_list[i]
print(event_list.table())
The performance results (averaged over multiple runs) are captured below:
------------- ------------ ------------
Name CPU total CPU time avg
------------- ------------ ------------
copy data 56.868ms 5.687ms
forward 1.329s 132.878ms
calc loss 8.282ms 828.159us
------------- ------------ ------------
Recall that the average time of the original loss function (on padded input) was 1.844ms. Thus the performance boost resulting from torch compilation is greater than 2X(!!).
The Triton kernels automatically generated by torch.compile can actually be viewed by setting the TORCH_LOGS environment variable, as explained in this PyTorch tutorial. In fact, some have proposed the use of these kernels as a starting point for Triton development (e.g., see here). However, in our experience these kernels can be somewhat difficult to decipher.
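One way to do this (a sketch; the available logging options can differ between PyTorch versions) is to set the variable before the compiled function runs, or to use the logging API:

import os
os.environ["TORCH_LOGS"] = "output_code"  # ask TorchInductor to print the generated Triton code

# Equivalent logging-API form:
import torch._logging
torch._logging.set_logs(output_code=True)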
In the next section we will attempt to further improve on the results of PyTorch compilation by implementing a GIOU Triton kernel.
Creating a Custom Triton Kernel
A great place to start your Triton development journey is with the official Triton tutorials. The tutorials are introduced in incremental order of complexity, with each one expanding on one or more of Triton’s unique features. Our GIOU Triton kernel most closely resembles the most basic vector addition example. As in our CUDA implementation, we assign a block to each sample in the input batch, and program it to operate on all of the bounding boxes in the sample. Note the use of tl.load and tl.store for reading and writing data from and to memory, as well as the block program’s use of vectorized arithmetic.
import torch
import triton
import triton.language as tl
@triton.jit
def giou_kernel(preds_ptr,
targets_ptr,
output_ptr,
valid_ptr,
BLOCK_SIZE: tl.constexpr):
pid = tl.program_id(axis=0)
box_id = tl.arange(0, BLOCK_SIZE)
box_offsets = pid * BLOCK_SIZE + box_id
preds_left = tl.load(preds_ptr + 0 + 4 * box_offsets)
preds_top = tl.load(preds_ptr + 1 + 4 * box_offsets)
preds_right = tl.load(preds_ptr + 2 + 4 * box_offsets)
preds_bottom = tl.load(preds_ptr + 3 + 4 * box_offsets)
gt_left = tl.load(targets_ptr + 0 + 4 * box_offsets)
gt_top = tl.load(targets_ptr + 1 + 4 * box_offsets)
gt_right = tl.load(targets_ptr + 2 + 4 * box_offsets)
gt_bottom = tl.load(targets_ptr + 3 + 4 * box_offsets)
epsilon = 1e-5
# Compute the area of each box
area1 = (preds_right - preds_left) * (preds_bottom - preds_top)
area2 = (gt_right - gt_left) * (gt_bottom - gt_top)
# Compute the intersection
left = tl.maximum(preds_left, gt_left)
top = tl.maximum(preds_top, gt_top)
right = tl.minimum(preds_right, gt_right)
bottom = tl.minimum(preds_bottom, gt_bottom)
inter_w = tl.maximum(right - left, 0)
inter_h = tl.maximum(bottom - top, 0)
inter_area = inter_w * inter_h
union_area = area1 + area2 - inter_area
iou_val = inter_area / tl.maximum(union_area, epsilon)
# Compute the smallest enclosing box
enclose_left = tl.minimum(preds_left, gt_left)
enclose_top = tl.minimum(preds_top, gt_top)
enclose_right = tl.maximum(preds_right, gt_right)
enclose_bottom = tl.maximum(preds_bottom, gt_bottom)
enclose_w = tl.maximum(enclose_right - enclose_left, 0)
enclose_h = tl.maximum(enclose_bottom - enclose_top, 0)
enclose_area = enclose_w * enclose_h
# Compute GIOU
delta_area = (enclose_area - union_area)
enclose_area = tl.maximum(enclose_area, epsilon)
giou = iou_val - delta_area / enclose_area
# Store results
tl.store(output_ptr + (box_offsets),
tl.where(gt_bottom > 0, giou, 0))
tl.store(valid_ptr + (box_offsets), gt_bottom > 0)
def loss_with_triton(pred, targets):
batch_size = pred.shape[0]
n_boxes = pred.shape[1]
# convert to float32 (remove to keep original dtypes)
pred = pred.to(torch.float32)
targets = targets.to(torch.float32)
# allocate output tensors
output = torch.empty_strided(pred.shape[0:2],
stride=(n_boxes,1),
dtype = pred.dtype,
device = pred.device)
valid = torch.empty_strided(pred.shape[0:2],
stride=(n_boxes,1),
dtype = torch.bool,
device = pred.device)
# call Triton kernel
giou_kernel[(batch_size,)](pred, targets, output, valid,
BLOCK_SIZE=n_boxes)
total_valid = valid.sum()
loss_sum = output.sum()
return loss_sum/total_valid.clamp(1)
The results of running with our Triton kernel are captured below. They are somewhat worse than in our previous experiment, which could be a result of the additional optimizations performed by torch.compile.
------------- ------------ ------------
Name CPU total CPU time avg
------------- ------------ ------------
copy data 57.089ms 5.709ms
forward 1.338s 133.771ms
calc loss 8.908ms 890.772us
------------- ------------ ------------
Following the recommendation of PyTorch’s documentation on the use of Triton kernels, we further assess the performance of our kernel, this time in combination with PyTorch compilation. The results (averaged over multiple runs) are slightly better than the auto-compiled loss of our first experiment.
------------- ------------ ------------
Name CPU total CPU time avg
------------- ------------ ------------
copy data 57.008ms 5.701ms
forward 1.330s 132.951ms
calc loss 7.189ms 718.869us
------------- ------------ ------------
When developing our custom GIOU CUDA kernel, we noted the overhead of converting the input tensors to float32, and the need to enhance our kernel to support various input types in order to avoid this conversion. In the case of our Triton kernel this can be accomplished quite easily by simply removing the conversion operations. The custom kernel will be auto-generated (JIT-compiled) with the original types.
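A sketch of the modified wrapper (identical to loss_with_triton above, just without the float32 casts; the output tensors now inherit the input dtype and the kernel is JIT-compiled for it):
def loss_with_triton_no_cast(pred, targets):
    batch_size = pred.shape[0]
    n_boxes = pred.shape[1]
    # allocate output tensors in the original input dtype
    output = torch.empty_strided(pred.shape[0:2],
                                 stride=(n_boxes, 1),
                                 dtype=pred.dtype,
                                 device=pred.device)
    valid = torch.empty_strided(pred.shape[0:2],
                                stride=(n_boxes, 1),
                                dtype=torch.bool,
                                device=pred.device)
    # call Triton kernel (compiled for the incoming dtype)
    giou_kernel[(batch_size,)](pred, targets, output, valid,
                               BLOCK_SIZE=n_boxes)
    total_valid = valid.sum()
    loss_sum = output.sum()
    return loss_sum / total_valid.clamp(1)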
------------- ------------ ------------
Name CPU total CPU time avg
------------- ------------ ------------
copy data 57.034ms 5.703ms
forward 1.325s 132.456ms
calc loss 6.219ms 621.950us
------------- ------------ ------------
Our final results are on par with the CUDA kernel results that we saw in our previous post.
Results
The following table summarizes the results of our experimentation. The results were averaged over multiple runs due to some variance that we observed. We have included the results of our custom CUDA kernel from our previous post, for reference. Keep in mind that the comparative results are likely to vary greatly based on the details of the kernel and the runtime environment.
While our first Triton kernel experiment performed worse than our custom CUDA operator, applying compilation and removing the data type conversions allowed us to match its speed.
These findings are in line with what one might expect from Triton: On the one hand, its high-level API abstraction implies a certain loss of control over the low-level flow which could result in reduced runtime performance. On the other hand, the (relative) simplicity and power of its APIs enable users to close the performance gap by implementing features with much greater ease than in CUDA.
One could make a strong argument that the Triton kernel we chose to evaluate is what the documentation would refer to as "embarrassingly parallel", i.e., composed of element-wise operations, and that, as such, it is a poor kernel on which to demonstrate the value of Triton. Indeed, a more complex program, requiring more sophisticated memory management, scheduling, synchronization, etc., may be required to showcase the full power of Triton.
Next Steps
Several additional steps are required to complete our task. These include tuning our custom kernel and implementing the backward function.
1. Kernel Optimization
Although Triton abstracts away a lot of the low-level kernel optimization, there remain many controls that can greatly impact runtime performance. These include the size of each block, the number of thread warps to use (as demonstrated in the softmax tutorial), and how L2 memory is accessed (see the [matrix multiplication tutorial](https://triton-lang.org/main/getting-started/tutorials/03-matrix-multiplication.html) for an example of swizzling). Triton includes an autotuning feature for optimizing the choice of these hyper-parameters (as demonstrated in the matrix multiplication tutorial and in the PyTorch Triton example). Although we have omitted autotuning from our example, it is an essential step of Triton kernel development.
2. Backward Pass Implementation
We have limited our example to just the forward pass of the GIOU loss function. A full solution would require creating a kernel for the backward pass, as well (as demonstrated in the layer normalization tutorial). This is usually a bit more complicated than the forward pass. One may wonder why the high-level kernel development API exposed by Triton does not address this challenge by supporting automatic differentiation. As it turns out, for reasons that are beyond the scope of this post (e.g., see here), automatic differentiation of custom kernels is extremely difficult to implement. Nonetheless, this would be an absolute killer of a feature for Triton and we can only hope that this will be supported at some point in the future.
Summary
Triton is easily one of the most important and impactful AI/ML libraries of the past few years. While it is difficult to assess the amount of innovation and progress it has enabled in the field of AI, its footprints can be found everywhere – from the core implementation of PyTorch 2 and its dependencies, to the specialized attention layers within the advanced LLM models that are slowly permeating our everyday lives.
Triton’s popularity is owed to its innovative programming model for kernel development. Once limited to the domain of CUDA experts, creating customized DL primitives is now accessible to every Python developer.
In this post we have only touched the surface of Triton and its capabilities. Be sure to check out Triton's online documentation and other resources to learn more.
|
Scaling Intelligence Lab
KernelBench: Can LLMs Write GPU Kernels?
Anne Ouyang*
Stanford
Simon Guo*
Stanford
Azalia Mirhoseini
Stanford
A benchmark designed to evaluate the ability of LLMs to generate efficient GPU kernels for optimizing neural network performance
TL;DR
We introduce KernelBench, a benchmark designed to evaluate the ability of large language models (LLMs) to generate efficient GPU kernels for optimizing neural network performance. With 250 well-defined neural network tasks spanning foundational operators, simple fusion patterns, and full ML architectures, the benchmark tasks LLMs to replace PyTorch implementations with custom kernels that are correct and performant. KernelBench highlights the potential for agentic optimization for computer systems with dense feedback signal, where systems iteratively refine kernel designs using profiling tools and tight feedback loops to achieve near-peak hardware utilization. As models scale, well-optimized kernels have far-reaching implications, from reducing the massive energy demands of AI systems to enabling fair and efficient comparisons of novel architectures. By providing aspirational tasks and focusing on agentic approaches, KernelBench envisions a future where LLMs can autonomously drive innovation in GPU programming and ML system optimization.
Kernels are the kernel of deep learning.
…but writing kernels sucks.
Consider a machine learning researcher with a promising new attention mechanism that could improve LLM efficiency by 30%. To actually test out this idea, they need to:
In an ideal world, you could:
This future (hopefully) isn’t science fiction: we think it’s possible. To measure progress, we’re introducing KernelBench, a dataset of 250 well-defined neural network operations with reference implementations given in Pytorch. KernelBench measures the ability of LLMs to write custom GPU kernels that implement and accelerate these operations. Beyond the 250 core tasks in KernelBench, we also introduce 20 aspirational tasks from HuggingFace models to benchmark the ability of LLM systems in not only just writing GPU kernels but also working on integrating GPU code optimizations in a software library setting.
Why Are Kernels Important?
As models grow larger and become more embedded into our daily lives, having fine-grained control over hardware resources to extract the most performance out of GPUs directly translates to significant energy and cost reductions. For example, ChatGPT alone is estimated to consume over half a million kilowatt-hours daily — roughly equivalent to the power usage of 180,000 U.S. households. At this scale, a 5% speedup isn’t just a number on a benchmark, it’s real energy and money saved. Beyond savings, optimized GPU kernels also allow machine learning researchers to fairly evaluate and compare new model architectures, and efficiency often means unlocking new capabilities that push the field of AI forward.
Big O is not all you need.
In algorithm classes we are taught to view Big O as the gold standard for measuring the efficiency of algorithms. In ML research, new model architectures may have better theoretical complexity, implying they should outperform traditional architectures in speed or efficiency, but when it comes down to real-world performance, these newer models can struggle to keep up with established architectures.
(Meme credit to Michael Zhang)
Why doesn’t Big O analysis match actual performance?
Established architectures benefit from years of optimization in their underlying kernels. These kernels are tailored to run efficiently on specific hardware, exploiting all the features of the hardware to maximize performance. On the flip side, newer models often lack this level of optimization and lack adequate hardware utilization, which can result in disappointing performance despite their appealing theoretical claims.
Optimized GPU kernels are important for designing ML architectures. The lack of well-written GPU kernels makes it difficult to do apples-to-apples model architecture comparisons given a fixed compute budget and a fixed hardware platform, so we cannot reliably determine how effective an architecture actually is. Consider the following example scenarios:
Can we use LLMs to generate correct and performant GPU kernels?
Unlike many other coding tasks, writing efficient GPU kernels is challenging due to the need for parallelization scheme design, memory management, and hardware-specific optimizations. GPU programming isn’t just about writing syntactically correct code; it requires a deep understanding of GPU architecture to ensure that code is both correct (produces the right output) and performant (fully utilizes the GPU’s capabilities). These factors make GPU programming a rich problem for LLMs, as it involves a bigger optimization search space beyond basic syntax or logic generation.
Recent work on inference scaling laws shows that when you have automatic verifiers, throwing more compute at generation can dramatically improve success rates. Our lab’s recent work (Large Language Monkeys) showed that with coding tasks, going from 1 to 250 samples boosted the solve rate from 15.9% to 56% on SWE-Bench Lite with DeepSeek-Coder-V2-Instruct. GPU programming is a task with strong verification mechanisms and clear feedback signals. For correctness, ground truth is determined by running the generated code on random inputs and comparing the outputs with those of the baseline to check if they match. Performance is measured by wallclock time, comparing the speedup of the generated code over the reference baseline.
GPU programming is also great for agentic and RL approaches, as the system can iteratively refine kernel designs with reliable, measurable outcomes. Profiling tools like NVIDIA’s Nsight Compute (NCU) provide in-depth feedback on performance bottlenecks, memory usage, and thread utilization, which gives the agent a lot of data to adjust optimizations and improve efficiency. Together, these qualities create a structured environment and tight feedback loop where an agent has verifiable correctness and performance metrics to iterate toward increasingly optimized and correct kernel code.
KernelBench
We introduce KernelBench, a collection of 250 PyTorch neural network operations that we think systems should be able to automatically write optimized kernels for. KernelBench provides the reference implementation in PyTorch, and the task for an LLM is to replace torch layers with custom implementations. We currently only focus on the forward pass as a first step. The core tasks in KernelBench are divided into three levels, with an additional level 4 of 20 aspirational tasks:
We only provide baseline evaluations for the first three levels. Level 4 is currently a far-reaching aim and we do not provide baseline evaluations for this level; however, we believe that this level could ultimately play a significant role in advancing the capabilities of LLMs to interact with complex, real-world codebases, where they not only assist with code generation but can also drive architectural improvements and optimizations in widely-used frameworks.
The tasks in KernelBench are a mix of manually written tasks, tasks generated by an LLM or a script, and tasks collected from GitHub. All tasks are manually cleaned up and verified. Each problem has a class named Model to denote which torch-based architecture we want optimized. The torch reference implementations are cleaned up to be self-contained in one file, with the modules containing only the init and forward functions (and helper functions called from init and forward). In addition to the torch module, we also provide the functions get_inputs() and get_init_inputs() for generating random parameters for the forward pass and the initialization, respectively. The shapes of the random inputs used for testing are manually chosen. We also manually modify the architectures to eliminate operations such as dropout, to make the results deterministic (within a generous tolerance threshold).
While the tasks (architecture implementations) are given in Pytorch, KernelBench is language agnostic, allowing the solutions to use any libraries and DSLs (including Triton, ThunderKittens, CUTLASS, …) such that different levels of abstractions for GPU programming can be explored. It is also fully flexible for the LLMs to determine the optimizations to apply (e.g. making decisions such as kernel fusion).
Here’s a simple example of vector addition to illustrate our task format and a CUDA-based solution:
import torch
import torch.nn as nn
import torch.nn.functional as F
class Model(nn.Module):
def __init__(self) -> None:
super().__init__()
def forward(self, a, b):
return a + b
def get_inputs():
# randomly generate input tensors based on the model architecture
a = torch.randn(1, 128).cuda()
b = torch.randn(1, 128).cuda()
return [a, b]
def get_init_inputs():
# randomly generate tensors required for initialization based on the model architecture
return []
Here’s an example of a CUDA based solution using custom CUDA C++ operators in torch via load_inline(). This entire file is LLM-generated. The custom CUDA code is supplied as a string and JIT compiled. In this example, the torch addition expression for two vectors is swapped out with the custom elementwise_add_kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.cpp_extension import load_inline
# Define the custom CUDA kernel for element-wise addition
elementwise_add_source = """
#include <torch/extension.h>
#include <cuda_runtime.h>
__global__ void elementwise_add_kernel(const float* a, const float* b, float* out, int size) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
if (idx < size) {
out[idx] = a[idx] + b[idx];
}
}
torch::Tensor elementwise_add_cuda(torch::Tensor a, torch::Tensor b) {
auto size = a.numel();
auto out = torch::zeros_like(a);
const int block_size = 256;
const int num_blocks = (size + block_size - 1) / block_size;
elementwise_add_kernel<<<num_blocks, block_size>>>(a.data_ptr<float>(), b.data_ptr<float>(), out.data_ptr<float>(), size);
return out;
}
"""
elementwise_add_cpp_source = "torch::Tensor elementwise_add_cuda(torch::Tensor a, torch::Tensor b);"
# Compile the inline CUDA code for element-wise addition
elementwise_add = load_inline(
name='elementwise_add',
cpp_sources=elementwise_add_cpp_source,
cuda_sources=elementwise_add_source,
functions=['elementwise_add_cuda'],
verbose=True,
extra_cflags=[''],
extra_ldflags=['']
)
class ModelNew(nn.Module):
def __init__(self) -> None:
super().__init__()
self.elementwise_add = elementwise_add
def forward(self, a, b):
return self.elementwise_add.elementwise_add_cuda(a, b)
Evaluation
When evaluating GPU kernels, we focus on three criteria with each building upon the previous:
In our context, correctness specifically means that, given randomized input values for a predefined set of shapes, the optimized kernel should yield outputs that are numerically equivalent (within an acceptable margin of error, where necessary for floating-point operations) to those produced by the baseline implementation. We choose our numerical equivalence threshold to be absolute and relative tolerances of 1e-02, a generous threshold that permits precision changes and alternative algorithms.

A common tradeoff in GPU kernel design is specialization versus generality. Specialized kernels, tuned for particular input shapes or patterns, can often achieve significant performance gains; general-purpose kernels, by contrast, aim for broader compatibility but may sacrifice peak performance. For our purpose, since the aim of our project is to cheaply and quickly generate specialized kernels, we choose to constrain our correctness checks to the specified input shapes without requiring broad generalization across all possible shapes. We generate 5 sets of random inputs with fixed shapes, and the kernel is considered correct if it produces numerically equivalent outputs to the unoptimized baseline for all 5 inputs. It is possible to have a stricter measurement of correctness by using more random inputs (the number of correctness trials is a customizable parameter), but we capped it at 5 due to evaluation speed.
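As a rough sketch (not the actual KernelBench harness; ref_model, new_model, and get_inputs follow the task format described above), the check boils down to something like:
import torch

def check_correctness(ref_model, new_model, get_inputs, n_trials=5, tol=1e-2):
    # compare the candidate kernel against the torch reference on several random inputs
    for _ in range(n_trials):
        inputs = get_inputs()
        with torch.no_grad():
            ref_out = ref_model(*inputs)
            new_out = new_model(*inputs)
        if not torch.allclose(ref_out, new_out, atol=tol, rtol=tol):
            return False
    return True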
Caption: Illustration of KernelBench design
In KernelBench, we made the decision not to provide a predefined train/validation/test set split; however, users are welcome to create their own splits based on their specific needs and goals. Our benchmark doesn’t include additional information to distinguish between training and testing examples, as the focus is on real-world challenges that demand open-ended, high-performance solutions—writing custom GPU kernels and optimizing them to their absolute limits (you’re only constrained by the speed of light!). These tasks, which revolve around foundational operators for machine learning, are designed to have a meaningful impact. Improving these kernels, in any way, can potentially lead to substantial real-world benefits.
Initial Evaluation
Baseline Greedy Evaluation
We evaluate the 250 problems from Levels 1 to 3 from KernelBench on various frontier models, with greedy decoding parameters (temperature = 0).
Compilation and Correctness
Across the 3 levels of the problems, most models do generate compilable CUDA code. However, maintaining correctness (same output as torch reference code) becomes increasingly challenging as the reference torch code gets more complex (simple operators in level 1 to fused operators in level 2 to whole model architecture in level 3).
Comparing across various models, we note that while some models do well on Level 1 tasks, correctness quickly drops off for Level 2 and 3 tasks. Larger models of the same family also seem to produce more correct solutions. It is also particularly interesting that the o1 model does significantly better than gpt-4o on correctness for the more challenging Level 2 and Level 3 problems, suggesting that scaling inference-time compute might have played a role here.
Caption: Percent of Correct Samples across 3 Levels of problems across models
Pass@k
Beyond greedy decoding, we are also interested in pass@k, having at least 1 correct (and successfully compiled) solution given k attempts, as introduced in the HumanEval paper. We sample models with high decoding temperature (deepseek-coder with temp=1.6, and Llama 3.1 70b-Instruct with temp=0.8) for more diverse samples, compute pass@1,3,5,10 with N=100 samples.
Pass@k is defined as
\(\text{pass@$k$} := \mathop{\mathbb{E}}_{\text{problems}} \left[ 1 - \frac{\binom{n - c}{k}}{\binom{n}{k}} \right]\)
where $n$ is the total number of samples and $c$ is the number of correct samples.
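In code, this is the standard unbiased estimator from the HumanEval paper (a small illustrative helper, not the benchmark's own evaluation script):
import numpy as np

def pass_at_k(n, c, k):
    # probability that at least one of k samples drawn from the n generations is among the c correct ones
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))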
Caption: Pass@k performance for Deepseek-coder and Llama 3.1 70B Instruct
As we increase k, correctness improves, suggesting it might be easier to solve such tasks with more parallel samples (as introduced in the Large Language Monkeys paper). However, we see a stark difference between deepseek and llama 3.1 70b performance, highlighting the importance of base model capability even when conducting inference time scaling.
Tradeoff between Correctness and Performance
We only analyzed correctness in the section above. However, in the case of kernel engineering, we care deeply about performance. Looking at the generated kernels, we found there is a tradeoff between correctness and performance, two objectives that are often at odds with each other. Code with more aggressive optimization could give a better performance gain, but also risks making more errors and is hence more likely to fail the correctness checks. Optimizing for performant code while guaranteeing correctness opens a new direction for code generation, whereas most existing benchmarks and methodologies focus only on correctness; we are excited to keep exploring it.
Performance: Percentiles of Speedups
When evaluating performance, we prioritize correctness, as incorrect but fast code is not useful. Therefore, speedups are calculated using only the correct samples. To present a comprehensive view of performance, we report speedups in percentiles. The count of correct samples for each model is indicated in parentheses after the model name in the table below.
In addition to the baseline PyTorch implementation, we also compare speedups against torch.compile() using its default mode. The speedup is defined as
\(\frac{t_{\text{baseline}}}{t_{\text{generated}}}\)
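where the timings are wallclock measurements. A minimal sketch of how such a speedup might be measured with CUDA events (model and input names are placeholders, and warm-up iterations are omitted for brevity):
import torch

def time_ms(fn, *args, iters=100):
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # average milliseconds per call

speedup = time_ms(baseline_model, *inputs) / time_ms(generated_model, *inputs)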
Caption: Percentile of Speedups vs. Baseline for both Torch and Torch Compile across 3 levels
Among the samples that are correct, we see that most generated kernels exhibit relatively modest speedups over the torch and torch.compile baselines, but a few are notably faster outliers! This piqued our interest and led us to the following investigations.
“Kernelsseum” –– A Per Problem Leaderboard
To better understand the LLM-generated kernels, we also present a leaderboard to inspect the kernels generated by greedy evaluation on KernelBench. This shows the top 5 LLM-generated kernels per problem, and some problems might lack any correct solutions. Note the performance result is hardware-dependent and currently evaluated on the Nvidia L40S GPU.
You can click on entries to see the generated code for each kernel.
Right now, the leaderboard only features solutions generated through greedy evaluation. In the future, we aim to make it an open submission leaderboard to allow contributions from the broader community.
Interesting Kernels
Diagonal Matrix Multiplication
Problem 13 in level 1 involves multiplying a matrix by another diagonal matrix:
torch.diag(A) @ B
torch.diag() takes in a vector of the diagonal elements of a matrix and returns a 2-D square tensor with the elements of the input as its diagonal. The full expression is then a matrix-matrix multiplication.
Mathematically, multiplying a matrix by a diagonal matrix is equivalent to scaling each row (or column, if the diagonal matrix is on the right side) of the original matrix by the corresponding diagonal element. As a result, the diagonal matrix doesn’t need to be explicitly constructed, reducing both memory usage and computational overhead.
This is the problem that gets the >12x speedup over torch and torch.compile() in level 1 for multiple models; one example of these generated CUDA kernels is below:
__global__ void diag_matmul_kernel(
const float* diag,
const float* mat,
float* out,
const int N,
const int M) {
const int row = blockIdx.y * blockDim.y + threadIdx.y;
const int col = blockIdx.x * blockDim.x + threadIdx.x;
if (row < N && col < M) {
out[row * M + col] = diag[row] * mat[row * M + col];
}
}
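For reference (a hypothetical PyTorch-level rewrite, not part of the benchmark task), the same row-scaling can be expressed without ever materializing the N x N diagonal matrix:
# A: (N,) vector of diagonal entries, B: (N, M) matrix
out = A.unsqueeze(1) * B   # equivalent to torch.diag(A) @ B, scaling row i of B by A[i]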
Kernel Fusion
Problem 14 in level 2 performs a matrix multiplication, division, summation, and then scaling:
x = torch.matmul(x, self.weight.T) # Gemm
x = x / 2 # Divide
x = torch.sum(x, dim=1, keepdim=True) # Sum
x = x * self.scaling_factor # Scaling
There’s a solution generated by claude-3.5-sonnet that has an approximately 3x speedup over both torch and torch.compile:
// Fused kernel for matmul + divide + sum + scale
__global__ void fused_ops_kernel(
const float* input,
const float* weight,
float* output,
const float scaling_factor,
const int batch_size,
const int input_size,
const int hidden_size
) {
// Each thread handles one element in the batch
const int batch_idx = blockIdx.x * blockDim.x + threadIdx.x;
if (batch_idx < batch_size) {
float sum = 0.0f;
// Compute matmul and divide for this batch element
for(int h = 0; h < hidden_size; h++) {
float elem = 0.0f;
for(int i = 0; i < input_size; i++) {
elem += input[batch_idx * input_size + i] *
weight[h * input_size + i];
}
// Divide by 2 as we go
sum += (elem / 2.0f);
}
// Scale and store final result
output[batch_idx] = sum * scaling_factor;
}
}
The solution fuses all four operations into a single GPU kernel, eliminating the overhead of writing intermediate results to memory and reading them back for subsequent operations. Additionally, combining the matrix multiplication with the dimension-wise summation reduces the size of the final output, minimizing memory bandwidth usage.
Next steps and public involvement
In the Scaling Intelligence Lab, we plan to continue extending this work to enable LLMs to write efficient GPU kernels. The initial results show significant room for improvement, and we are optimistic about the potential for significant advancements in future iterations.
There is a lot of interest in the community in GPU programming and LLMs for GPU code generation. In particular, Project Popcorn of the GPU Mode Discord aims to build “an LLM that can actually write good GPU code”. There is also interest in running GPU programming competitions for humans as a way to collect high quality training tokens for the LLM. We look forward to seeing how KernelBench can contribute to these initiatives.
Our vision for the longer term future is to simplify the generation of high-performance kernels that seamlessly adapt to diverse hardware architectures, enabling developers to achieve optimal performance with minimal effort. By accelerating the iteration cycles for machine learning model architecture design, we aim to empower researchers and practitioners to explore, prototype, and optimize ideas faster than ever.
In addition, the ability to generate kernels quickly is very important for adapting to new hardware architectures, which is often a barrier to the adoption of new computing platforms. We think KernelBench and related techniques could enable faster development cycles for new hardware by lowering the amount of human engineering effort needed to write new kernels for an architecture.
Let’s make writing high-performance kernels far more accessible and convenient!
FAQ
Why not a compiler?
The current development cycle—from efficient implementations to generalizations to compiler integration—is lengthy. Efficient compilers often lag behind new GPU architectures by over two years: approximately one year for CUDA experts to develop optimized implementations and another year to generalize these optimizations into compilers. Traditional compilers excel in generating provably-correct, robust, and general-purpose solutions, making them indispensable for a wide range of applications. However, developing compilers remains a labor-intensive and time-consuming process.
Many design patterns and optimizations are reusable across GPU kernels –– fundamental principles such as overlapping, fusion, efficient memory access, and maximizing occupancy. Our approach seeks to complement traditional compilers by focusing on a different objective. Rather than striving for general-purpose, provably-correct compiler solutions, we aim to distill human intuition directly into specialized, high-performance code with correctness tested empirically. This enables the generation of code highly optimized for specific input shapes and computational patterns, a level of specialization that would otherwise require extensive pattern-matching rules and manual engineering in traditional compilers.
Acknowledgements
We would like to thank Aaryan Singhal, AJ Root, Allen Nie, Anjiang Wei, Benjamin Spector, Bilal Khan, Bradley Brown, Dylan Patel, Genghan Zhang, Hieu Pham, Hugh Leather, John Yang, Jon Saad-Falcon, Jordan Juravsky, Mark Saroufim, Michael Zhang, Ryan Ehrlich, Sahan Paliskara, Sahil Jain, Shicheng (George) Liu, Simran Arora, Suhas Kotha, Vikram Sharma Mailthody, and Yangjun Ruan for insightful discussions and constructive feedback in shaping this work. We would also like to thank SWEBench for its inspiration and reference, which greatly contributed to the development of this work.
Citing
@misc{ouyang2024kernelbench,
title={KernelBench: Can LLMs Write GPU Kernels?},
author={Anne Ouyang and Simon Guo and Azalia Mirhoseini},
year={2024},
url={https://scalingintelligence.stanford.edu/blogs/kernelbench/},
}
Materials
|
ROCm blogs
GEMM Kernel Optimization For AMD GPUs
Contents
GEMM Kernel Optimization For AMD GPUs#
Matrix multiplication underlies critical computational pathways in AI, with General Matrix Multiplication (GEMM) operations serving as performance-critical kernels in neural network architectures. From fully connected layers to convolutions and transformer attention mechanisms, GEMMs consume substantial computational and memory resources in large language models (LLMs). This blog explores GEMM optimization techniques for AMD GPUs, demonstrating methodologies to significantly enhance computational efficiency and performance scaling.
ROCm software tools for GEMM Tuning#
To assist AMD GPU developers in efficiently discovering the best GEMM solutions, the ROCm software suite offers multiple tools designed to tune GEMM operation performance. Developers can select the appropriate tool based on their specific use case, as illustrated in the diagram below.
Let’s dive into the various GEMM tuning tools available for AMD GPU developers to use.
GEMM Tuning Techniques on AMD Instinct GPUs#
Technique 1: Optimizing Performance with Pre-Tuned GEMM Operations#
AMD provides an optimized ROCm Docker image, the rocm/vllm Docker, for an out-of-the-box experience that includes pre-tuned GEMMs. This Docker has already integrated the pre-tuned GEMM solutions with the BLAS libraries, supporting most GEMM shapes for LLM inference. We highly recommend that GPU developers try the AMD optimized Docker first.
To get started, follow the detail steps below:
1). Pull the optimized docker image from the ROCm/vLLM Docker Hub website.
docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
2). Run the LLM performance benchmark using the vLLM benchmarking tool. Since the pre-tuned GEMM configuration files (.csv) are integrated into the optimized Docker, the vLLM benchmarking tool automatically utilizes the pre-tuned GEMMs for optimal performance. We use the vLLM latency benchmarking tool as the example; detailed info about the vLLM benchmarking tools can be found in the vLLM benchmark documentation.
python /app/vllm/benchmarks/benchmark_latency.py \
--model ${model_path} \
--trust-remote-code \
--num-iters-warmup 3 \
--num-iters 5 \
--dtype float16 \
--input-len ${in_len} \
--output-len ${out_len} \
--batch-size ${bs} \
--tensor-parallel-size ${tp_nums} \
--num-scheduler-steps 10
Technique 2: Optimizing Performance with PyTorch TunableOp (Framework Level GEMM Tuning)#
PyTorch TunableOp provides a GEMM tuning wrapper for both rocBLAS and hipBLASLt. Instead of relying on default GEMMs, TunableOp automatically searches for the optimal solution by querying the underlying BLAS library for all available solutions for a given GEMM, benchmarking each one, and selecting the fastest. The chosen solution is then stored on disk for use in subsequent runs.
For applications built on popular frameworks like PyTorch and vLLM, users can leverage PyTorch TunableOp online tuning. This process allows tuning to occur seamlessly while running training or inference workloads, requiring only a few environment setting adjustments. Detailed information about these environment variables can be found in the PyTorch TunableOp documentation.
To optimize performance with tuned GEMM operations at the framework level, follow the steps below:
1). Configure the related settings to enable PyTorch TunableOp
export PYTORCH_TUNABLEOP_ENABLED=1
export PYTORCH_TUNABLEOP_TUNING=1
export PYTORCH_TUNABLEOP_VERBOSE=1
export PYTORCH_TUNABLEOP_FILENAME=/dockerx/tunableop-config.csv
2). The GEMM tuning results will be saved to the tunableop-config.csv file specified above. The tuning described in the CSV file is specific to the GEMM shapes used by your application's workload.
3). Now, turn off tuning before running your application.
export PYTORCH_TUNABLEOP_ENABLED=1
export PYTORCH_TUNABLEOP_TUNING=0
export PYTORCH_TUNABLEOP_VERBOSE=1
export PYTORCH_TUNABLEOP_FILENAME=/dockerx/tunableop-config.csv
4). Run your application. The tuning results will be picked up automatically. With native PyTorch support for AMD ROCm, developers can seamlessly leverage the PyTorch TunableOp flow. In our experiments, this approach has yielded over 20% performance improvement in GEMM operations. Developers can check the details in the TunableOp blog. If developers have questions or issues about TunableOp GEMM tuning, please submit them to PyTorch issues.
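As a minimal illustration (a hypothetical standalone script, not from this blog), online tuning can be exercised with nothing more than a few matmuls once the environment variables are set:
import os
# Set before importing torch so TunableOp picks the settings up
os.environ["PYTORCH_TUNABLEOP_ENABLED"] = "1"
os.environ["PYTORCH_TUNABLEOP_TUNING"] = "1"
os.environ["PYTORCH_TUNABLEOP_FILENAME"] = "/dockerx/tunableop-config.csv"

import torch

a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
c = a @ b   # the first GEMM of this shape/dtype triggers a solution search; the winner is recorded in the CSV for reuse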
Technique 3: Optimizing Performance with Tuned GEMM Operations at Ops/Library Level#
AMD offers rocBLAS, the AMD library for Basic Linear Algebra Subprograms (BLAS), which internally uses Tensile to supply the high-performance implementation of GEMM. Additionally, hipBLASLt is a library that provides general matrix-matrix operations.
Based on their preference, developers can choose either of the two ops/library-level GEMM tuning tools: the rocBLAS tuning tool (rocblas-gemm-tune) or the hipBLASLt tuning tool (hipblaslt-bench).
First, use the logging scheme of either rocBLAS or hipBLASLt (depending on the library in use) to capture the required GEMM shape information. Then, apply the respective GEMM tuning tools (rocblas-gemm-tune or hipblaslt-bench) to optimize performance.
GEMM Tuning with rocblas-gemm-tune#
The rocblas-gemm-tune tool works by using Tensile to heuristically search through various kernel parameters in order to find the optimal configuration that provides high GPU performance for GEMM operations.
The detailed steps are as follows:
1). Installing/Setting up rocBLAS: In the ROCm Docker image, the rocBLAS library is pre-installed, but if the rocBLAS client executables (rocblas-bench and rocblas-gemm-tune) are not included, you may need to build them from source.
2). Generating GEMM Problem Sizes: rocBLAS provides a logging scheme to dump GEMM shape info for further performance tuning, which is enabled via rocBLAS environment variables.
- Environment variable `ROCBLAS_LAYER=4` turns on log_profile, and outputs a YAML description of each rocBLAS function called, along with its arguments and number of times it is called. This list of entries can be used directly as input to `rocblas-gemm-tune` utility to do performance tuning.
- Use environment variable `ROCBLAS_LOG_PATH` to set the full path name for all logs, and store the grabbed GEMM shapes information into a YAML file, `ROCBLAS_LOG_PATH=~/dir/rocblas_gemms.YAML`
By using the two settings described above, developers can obtain the GEMM shape information:
ROCBLAS_LAYER=4 ROCBLAS_LOG_PATH=./rocblas_gemm.YAML ./gemm-app
3). GEMM Tuning with rocblas-gemm-tune: At this stage, use the dumped YAML file to run GEMM tuning with rocblas-gemm-tune. A sample command:
```bash
/opt/rocm/bin/rocblas-gemm-tune --YAML /home/rocblas_gemms.YAML
```
Running this will output the fastest solution for each GEMM in the YAML file. Each solution is identified by a unique solution index. The tool generates a CSV file by aggregating the output solution indices; the CSV file looks like:
transA,transB,M,N,batch_count,K,alpha,beta,lda,ldb,ldc,input_type,output_type,comput_type,solution_index
N, N, 320,588,1,4096,1,0,320,6144,320,f32_r,f32_r,f32_r,3788
N, N, 512,3096,1,512,1,0,512,512,512,f16_r,f16_r,f16_r,4566
4). Integration: Now that we have a list of faster solutions for all the GEMM problems, users can integrate them into the application and have rocBLAS pick these faster implementations by setting an environment variable. Use the example command below:
export ROCBLAS_TENSILE_GEMM_OVERRIDE_PATH=csv_file_path
If developers have questions or issues about rocblas-gemm-tune, please submit them to rocBLAS issues.
GEMM Tuning with hipblaslt-bench#
hipblaslt-bench is another GEMM tuning tool, part of the hipBLASLt library, that can be used to search for the best-performing GEMM kernel for a given set of GEMM problems.
To use hipBLASLt, follow the steps below:
1). Installing hipBLASLt: In the ROCm Docker image, the hipBLASLt library is pre-installed; however, the hipBLASLt client executables, such as hipblaslt-bench, may not be included by default, and you may need to build these executables from source.
2). Generating GEMM Problem Sizes: Similar to rocBLAS, hipBLASLt can also dump the required GEMM problem/shape sizes via its own logging scheme. Detailed info about the hipBLASLt logging scheme can be found in logging-heuristics. Use the sample command below to generate the log file of GEMM problem sizes:
HIPBLASLT_LOG_MASK=32 HIPBLASLT_LOG_FILE=log_file_name.log ./application_bin
To organize the output logs further, you can extract the unique calls with their call counts using a shell command like the one below:
cat log_file_name.log | sort | uniq -c > unique_log_file.log
3). GEMM Tuning with hipblaslt-bench: Set the environment variable HIPBLASLT_TUNING_FILE=<file_name> to tune and store the tuning results, i.e. the best solution indices for the GEMM problems; <file_name> points to the tuning file. GEMM tuning is then performed by launching hipblaslt-bench, whose input parameters can be set according to the log file from step 2.
A sample command that saves the tuning file, under a user-defined name, in the current working directory:
export HIPBLASLT_TUNING_FILE=tuning.txt
/opt/rocm/bin/hipblaslt-bench --api_method c -m 28672 -n 8192 -k 8192 --lda 8192 --ldb 8192 --ldc 28672 --ldd 28672 --stride_a 0 --stride_b 0 --stride_c 0 --stride_d 0 --alpha 1.000000 --beta 0.000000 --transA T --transB N --batch_count 1 --scaleA 1 --scaleB 1 --a_type f8_r --b_type bf8_r --c_type bf16_r --d_type bf16_r --scale_type f32_r --bias_type f32_r --compute_type f32_r --initialization trig_float -i 100 -j 100 --flush --rotating 512 --algo_method all
4). Integration:
Unset tuning file name once tuning is complete:
unset HIPBLASLT_TUNING_FILE
Override the hipBLASLt library with the tuned file info:
export HIPBLASLT_TUNING_OVERRIDE_FILE=tuning.txt
Now we can replace the default GEMM kernel with the tuned GEMM kernel. If developers have questions or issues about hipblaslt-bench GEMM tuning, please submit them to hipBLASLt issues.
Summary#
Given the pivotal role of GEMM operations in AI workloads, particularly for LLM applications, AMD offers a suite of powerful tuning tools, including rocblas-gemm-tune, hipblaslt-bench, and PyTorch TuneableOps. These tools provide GPU developers with the flexibility to optimize GEMM performance, allowing precise fine-tuning for maximum efficiency on AMD GPUs. By leveraging these resources, developers can enhance workload performance, ensuring optimal execution and superior results in AI-driven tasks.
Additional Resources#
Optimized docker hub: https://hub.docker.com/r/rocm/vllm/tags
Optimized docker image: rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
The optimized Docker blog: https://www.amd.com/en/developer/resources/technical-articles/how-to-use-prebuilt-amd-rocm-vllm-docker-image-with-amd-instinct-mi300x-accelerators.html
PyTorch TunableOp: https://pytorch.org/docs/stable/cuda.tunable.html
Improve Performance: Accelerating models on ROCm using PyTorch TunableOp — ROCm Blogs
rocBLAS: https://rocm.docs.amd.com/projects/rocBLAS/en/latest/index.html
hipBLASLt: https://rocm.docs.amd.com/projects/hipBLASLt/en/latest/
|
Reasoning about performance from first principles
When trying to optimize performance on a computer, we can simplify the hardware into two things [1]: a compute engine with a peak rate of operations per second, and a memory system with a peak bandwidth.
For the RTX 3090 this is around 35.5 FP32 TFLOP/s of compute with a max global memory bandwidth of 936 GB/s. We can idealize the program running on the computer as loading K bytes and performing N operations on them, giving an arithmetic intensity of N/K operations per byte. To understand the upper bound on the work that can be performed per second, we can use the equation below.
$$P = \min\left(\frac{\text{flop}}{s}, \frac{N}{K} \times \frac{\text{bytes}}{s}\right)$$
By multiplying the memory bandwidth by the arithmetic intensity we account for the fact that each loaded byte supports N/K operations. The resulting number P is the minimum of the memory bandwidth (adjusted for arithmetic intensity) and the theoretical FLOP/s. This tells us the upper bound on performance, as well as whether the program will be compute-bound or memory-bound. This beautiful abstraction is true for any machine based on the Von Neumann architecture, not just GPUs.
The plot above shows a blue line, which is AI * memory bandwidth, as well as a horizontal red line at the peak FLOP/s. The point at which the blue line crosses the red line shows what the arithmetic intensity of a program would need to be to fully saturate the 3090's compute units. We need to perform ~38 operations per loaded byte to get to this point! Let's look at the arithmetic intensity of vector addition and matrix multiplication, and quantify their AI.
FP32 Vector Addition (4 bytes per element)
1) Load \(4 N\) bytes for Vector A.
2) Load \(4 N\) bytes for Vector B.
3) Perform ~\(N\) FP32 add operations to add Vectors A & B.
4) Store \(4 N\) bytes.
Arithmetic Intensity (AI) is \(\frac{N}{12N}\) operations/byte, or 0.0833.
Vector addition is heavily memory bound, since the arithmetic intensity is so low.
FP32 Matrix Multiplication (all matrix dimensions `N` for simplicity)
1) Load \(4 N^2\) bytes for Matrix A.
2) Load \(4 N^2\) bytes for Matrix B.
3) Perform \(2 N^3\) operations (there are \(N^2\) outputs, each output requires a dot-product of vectors with size \(N\), each dot-product requires ~\(2 N\) additions & multiplies).
4) Store \(4 N^2\) bytes.
Arithmetic Intensity (AI) is \(\frac{2 N^3}{12 N^2} = 0.167 N\) ops/byte.
Matrix multiplication is interesting because its AI is a linear function of the sizes of the input matrices. As a result, the program is memory bound at smaller sizes but becomes compute bound at larger ones.
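To make the crossover concrete, here is a small sketch in Python using the RTX 3090 numbers quoted above (purely illustrative):
PEAK_FLOPS = 35.5e12   # FP32 FLOP/s
PEAK_BW = 936e9        # bytes/s

def attainable_ops(ai):
    # ai = arithmetic intensity in operations per byte
    return min(PEAK_FLOPS, ai * PEAK_BW)

print(attainable_ops(0.0833))         # vector add: ~7.8e10 ops/s, heavily memory bound
print(attainable_ops(0.167 * 4096))   # 4096^3 matmul: capped at 35.5e12 ops/s, compute bound
print(PEAK_FLOPS / PEAK_BW)           # crossover intensity: ~38 ops/byte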
This analysis gives us the "light speed" for a given program. In reality, we won't load at theoretical bandwidth, we won't compute at the theoretical max FLOP/s, we may perform more than the ideal number of operations during program execution, there will be wind-up/wind-down effects, etc. By comparing an actual kernel's effective ops/s to the idealized number, we can invest our efforts in tackling the right bottleneck and better understand how far we are from theoretical limits.
Memory Optimizations
The optimizations below are some of the most important for getting good performance. If the compute units aren't getting a high-throughput stream of bytes to crunch on, the fact that the GPU has an absurd number of compute units won't matter. [2]
Coalesced and Aligned Global Memory Access [3]
When accessing global memory, always do your best to have each thread access sequential addresses in an array that is aligned, ideally to 128B (cudaMalloc will by default align to 256B). When an RTX 3090 performs a cached GMEM load, it will pull in cache lines of 128 bytes. If a warp (32 threads) accesses 32 consecutive 32-bit values, the entire load can be processed in a single transaction. Deviating from the sequential nature of the access, or misaligning memory in GMEM during kernel launch, will reduce effective GMEM utilization.
As we learned in the intro post, unaligned access can either cause
a) excessive pre-charging of row-buffers due to having to pull in multiple DRAM rows
b) excessive memory controller overhead from having to pull multiple cache-lines within the same DRAM row
Both lead to unnecessary overhead. If strided access is necessary, use shared memory as an intermediary to allow for coalescing.
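As a minimal illustration (not from the original post), the two kernels below copy the same array: in the first, each warp touches one contiguous 128-byte span per load; in the second, a warp's accesses are spread across many cache lines.
__global__ void coalesced_copy(const float* in, float* out, int N) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // adjacent threads read adjacent floats
    if (i < N) out[i] = in[i];
}

__global__ void strided_copy(const float* in, float* out, int N, int stride) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;   // adjacent threads are `stride` floats apart
    if (i < N) out[i] = in[i];
}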
Use Vectorized Memory Access Instructions [4]
Lines issuing memory loads in CUDA typically compile to a 32-bit (one word) load using the LD.E/ST.E instructions in SASS. If you know the thread will require multiple sequential words of memory, and have a piece of data that is aligned in memory to that multiple, you can issue vectorized instructions to load multiple words in a single transaction. Loading in this way reduces instruction overhead and can be combined with coalescing to improve memory throughput/latency. Vectorized loads are accomplished by using vector data types (float4, float2, int4, int2, etc.) and typecasting. An example of a kernel that vectorizes the loading of two consecutive 32-bit integers is shown below.
__global__ void device_copy_vector2_kernel(int* d_in, int* d_out, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
if (idx < N / 2) {
reinterpret_cast<int2*>(d_out)[idx] = reinterpret_cast<int2*>(d_in)[idx];
}
// Only one thread processes the final element, if N is odd
if (idx == 0 && N % 2 == 1) {
d_out[N - 1] = d_in[N - 1];
}
}
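A hypothetical host-side launch for this kernel (d_in and d_out are assumed to be device allocations from cudaMalloc, which guarantees the 8-byte alignment an int2 access needs):
int threads = 256;
int blocks = (N / 2 + threads - 1) / threads;   // one thread per int2 (pair of ints); assumes N >= 2
device_copy_vector2_kernel<<<blocks, threads>>>(d_in, d_out, N);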
Avoid Shared Memory Bank Conflicts
Shared memory is divided into 32 banks of SRAM cells, each with a controller that can serve 4 bytes (one 32-bit word) per clock cycle. When multiple threads in a warp simultaneously access different addresses that fall in the same bank, these accesses are serialized. The easiest way to avoid shared memory bank conflicts is to have the threads within a warp access sequential shared memory addresses (similar to coalesced GMEM access).
If assigning each thread to a separate bank isn't feasible, memory padding can be used to introduce an offset that eliminates the conflict. The padding comes at the cost of higher shared memory utilization per block, which may impact occupancy.
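For example (an illustrative sketch, not from the post), a shared-memory tile used for a 32x32 transpose is commonly padded by one extra column so that reading down a column no longer maps every access in a warp to the same bank:
__global__ void transpose32(const float* in, float* out) {
    __shared__ float tile[32][33];        // 33 columns instead of 32: one column spans 32 different banks
    int x = threadIdx.x, y = threadIdx.y; // launched with a 32x32 thread block
    tile[y][x] = in[y * 32 + x];          // coalesced read, conflict-free write
    __syncthreads();
    out[y * 32 + x] = tile[x][y];         // column read is conflict-free thanks to the padding
}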
Keep Re-Used Data in the fastest Memory Stores
Many operations involve data re-use. When performing a matrix multiplication, for example, we load 2N^2 elements but perform 2N^3 operations, meaning each element has N operations associated with it. It would be extremely wasteful to go back to global memory for each of the N operations. When performing operations with the same data multiple times, consider keeping it in shared memory or thread registers. Simon Boehm's post on optimizing GEMM makes heavy use of this method and I would highly recommend giving it a thorough read. NVIDIA GPUs also have constant and texture memory, which to be totally honest I have not used. But from what I understand, these stores are read-only and can provide efficient access if many threads will need to access the same memory address many times [7]. When initially thinking about kernel architecture, spend a fair amount of time understanding the graph of memory dependencies for each output element that your kernel produces. Visualizing this graph can provide a clearer picture of what bytes should be put where.
Avoid Register Spilling [8]
GPUs have a maximum number of registers that can be allocated to each thread. When a thread violates this limit (255 for the RTX 3090), the data will spill to "local memory", which is actually just a section of global memory set aside for that thread! If we are lucky, the L2 cache will intercept the reads/writes and we won't have to pay the full latency associated with GMEM. But if we are not careful, we can inadvertently perform tons of slow GMEM accesses and delude ourselves into thinking we are using fast thread registers. When compiling a CUDA kernel, you can add "-Xptxas -v" to your nvcc command to see a printout of register use per thread and make sure you aren't close to any limits.
Compute Optimizations
Getting bytes to the compute units efficiently is important, and so is making sure the compute units themselves are adequately saturated with arithmetic instructions that operate on the incoming bytes. [2]
Maximize Number of Active Warps (Occupancy) [9]
I mentioned in the previous post that it's up to us to make sure all of the compute units are executing useful operations during the course of the kernel execution. This can be done by thinking carefully about the memory resources and thread count of each block, since these are the fundamental limiters on how many warps can be active at the same time. On an RTX 3090, each SM can support at most 1536 active threads (48 warps), along with a fixed register file and shared memory budget.
Notice that while the maximum number of active threads is 1536, there are only 128 CUDA cores on an SM. What's happening here is that the warp scheduler tries to make as many warps "active" as possible by assigning them the registers and shared memory they request. When warps are stalled due to memory latency, the scheduler can swap active warps in and out of the CUDA cores to make sure the hardware is fully utilized. This is what occupancy measures: how many of the maximum possible active warps can be added to the pool of warps ready to run on a core? In this way the GPU hides memory latency by oversubscribing the hardware. The GPU can only oversubscribe if it's fed kernels whose blocks have an appropriate number of threads/warps and don't hog all the registers/shared memory. While low occupancy will certainly hurt performance, high occupancy doesn't guarantee compute unit saturation. According to this post on the NVIDIA forums, 50% - 75% occupancy is usually acceptable. Also note that while per-SM occupancy is important, we also want to saturate all SMs with work. There are 82 SMs on an RTX 3090, so we need at least 82 blocks to make sure each has something to do.
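A sketch of how this can be queried at runtime with the CUDA occupancy API (my_kernel is a placeholder, and the 48-warp limit follows from the 1536-thread maximum above):
#include <cstdio>
#include <cuda_runtime.h>

__global__ void my_kernel(float* out) { if (out) out[0] = 0.0f; }  // placeholder kernel

int main() {
    int block_size = 256;
    int max_blocks_per_sm = 0;
    // How many blocks of my_kernel can be resident on one SM at this block size?
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&max_blocks_per_sm, my_kernel, block_size, 0);
    float occupancy = (max_blocks_per_sm * block_size / 32.0f) / 48.0f;  // resident warps / max warps
    printf("resident blocks per SM: %d, theoretical occupancy: %.0f%%\n",
           max_blocks_per_sm, occupancy * 100.0f);
    return 0;
}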
Tangent
I haven't tried this out myself, but I do think going forward it would be best to write kernels that are as agnostic to register/shared-mem use and block dimensions as possible, in order to let an auto-tuner figure out what allocation of resources is optimal for performance. Sometimes it can be better to have fewer threads per block and more work per thread. This opens up more thread-level ILP (instruction-level parallelism) and can enable more registers per thread. This is particularly true when performing block-level reductions, as fewer threads means less thread-to-thread communication. This could lead to lower occupancy but better overall performance.
Use Tensor Cores & FMA Units [14] [15]
Tensor cores are designed specifically to accelerate matrix-multiply-accumulate (MMA) operations on GPUs. Use them whenever an operation can be represented as an MMA. On a similar note, scalar multiply-accumulates are also hardware accelerated and can be invoked using the fmaf() function in CUDA. The NVCC compiler typically optimizes operations of the form "a = a + (b*c)" into FMA instructions anyway, but using the function call makes this explicit. One thing to keep in mind, though, is that they don't benefit memory-bound workloads. For example, a convolution can be performed as an implicit GEMM in order to utilize tensor cores, but the memory overhead of the transforms needed to achieve this may far outweigh the efficiency gains from tensor core utilization for low arithmetic intensity workloads. Don't worry if this statement is confusing for now; future posts go into roofline models and their implications.
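A trivial sketch of the explicit form (acc, a, and b are placeholder values):
__device__ float mac(float acc, float a, float b) {
    // acc + a*b computed in a single FMA instruction; nvcc usually emits this for acc += a*b as well
    return fmaf(a, b, acc);
}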
Minimize Warp Divergence [10]
When a warp of 32 threads executes, control flow overhead is minimized when every thread performs the exact same operation at the same time. Certain types of data-dependent control flow can cause this to no longer be the case. In these situations, the threads effectively split into multiple diverged execution paths, with each chunk executing independently of the others. Obviously this hurts hardware utilization, as we are running fewer than 32 active threads per warp. Try to minimize warp divergence to the extent possible by making sure all threads in a warp will follow the same execution path.
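As a sketch (the math is made up purely for illustration), the first kernel below branches on each thread's own data and can diverge, while the second computes both cheap paths and selects, which the compiler can turn into predicated or select instructions:
__global__ void divergent(const float* x, float* y, int N) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        if (x[i] > 0.0f) y[i] = sqrtf(x[i]);     // threads in one warp may take
        else             y[i] = -sqrtf(-x[i]);   // different paths here
    }
}

__global__ void selected(const float* x, float* y, int N) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N) {
        float pos = sqrtf(fabsf(x[i]));
        y[i] = (x[i] > 0.0f) ? pos : -pos;       // both sides are cheap, so just compute and select
    }
}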
Unroll Loops [11]
When CUDA code is compiled to PTX, loops with counts that are known at compile time can get unrolled. Unrolling eliminates the overhead associated with checking the loop conditional and incrementing the loop counter, and enables instruction-level parallelism in the case of loops that unroll to multiple independent instructions. Let's take a look at an example with a simple for-loop. We can hint to the compiler to unroll a loop by using "#pragma unroll". Appending an integer after #pragma unroll tells the compiler how many iterations to unroll. By putting a 1 after #pragma unroll we can effectively prevent the compiler from unrolling the loop.
Standard Loop CUDA
float temp = data[idx];
#pragma unroll 1
for (int i = 0; i < 10; ++i) {
temp += i;
}
data[idx] = temp;
Standard Loop PTX
$L__BB0_1:
    cvt.rn.f32.s32  %f4, %r5;        // convert the value in r5 to float and move it to f4
    add.f32         %f5, %f5, %f4;   // add f4 to f5 and store the result in f5
    add.s32         %r5, %r5, 1;     // increment the loop counter in r5 by 1
    setp.ne.s32     %p1, %r5, 10;    // set predicate p1 to true if the loop counter (r5) is not equal to 10
    @%p1 bra        $L__BB0_1;       // branch back to the loop start while p1 is true
    st.global.f32   [%rd1], %f5;
    ret;
Unrolled Loop PTX (CUDA code uses #pragma unroll instead of #pragma unroll 1)
add.f32         %f2, %f1, 0f00000000;
add.f32         %f3, %f2, 0f3F800000;
add.f32         %f4, %f3, 0f40000000;
add.f32         %f5, %f4, 0f40400000;
add.f32         %f6, %f5, 0f40800000;
add.f32         %f7, %f6, 0f40A00000;
add.f32         %f8, %f7, 0f40C00000;
add.f32         %f9, %f8, 0f40E00000;
add.f32         %f10, %f9, 0f41000000;
add.f32         %f11, %f10, 0f41100000;
st.global.f32   [%rd4], %f11;
ret;
Note that in the unrolled case the compiler turned the loop into 10 distinct add.f32 instructions and derived the constant values from the loop count at compile time. This change eliminates loop-related overhead. In this case there is a dependency between each instruction, but in some cases the independent instructions resulting from unrolling can also allow the thread to exploit greater instruction-level parallelism. The CUDA compiler is quite good at spotting loops that can be unrolled, but throw in a "#pragma unroll" for loops you think could benefit from unrolling.
Use Signed-Ints for Loop Counters [12]
Unsigned integer overflow is defined behavior: the value is required to wrap around. Signed integer overflow, by contrast, is undefined behavior. Having to guarantee wraparound semantics reduces the compiler's ability to optimize loop execution, so you may see a small perf improvement in hot loops by using signed ints for loop counters.
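As a hedged example (mine, not the author's), the change is just the counter's type; the helper functions below are made up:

__device__ float sum_unsigned(const float* data, unsigned int n)
{
    float sum = 0.0f;
    // Unsigned counter: wraparound on overflow is well-defined, which can limit how
    // aggressively the compiler transforms the loop and its index arithmetic.
    for (unsigned int i = 0; i < n; ++i)
        sum += data[i];
    return sum;
}

__device__ float sum_signed(const float* data, int n)
{
    float sum = 0.0f;
    // Signed counter: overflow is undefined, so the compiler may assume it never happens
    // and unroll or strength-reduce the loop more freely.
    for (int i = 0; i < n; ++i)
        sum += data[i];
    return sum;
}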
Use Fast Math Library (when precision isn't critical) [13]
CUDA provides a pretty extensive set of fast math intrinsics that execute on the special function units. Some examples of the corresponding standard functions include sin(x), cos(x), log(x), exp(x), etc. If you don't care as much about precision and can accept some rounding error, using the fast versions of these calls can improve performance. Examples: __sinf(x), __cosf(x) vs. sin(x), cos(x).
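A hedged sketch (made-up kernel, not from the article) of swapping in the intrinsics; note that compiling with nvcc's --use_fast_math flag performs this kind of substitution globally:

__global__ void attenuate(const float* x, float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Precise library versions:
    // y[i] = sinf(x[i]) * expf(-x[i]);

    // Faster intrinsics that run on the special function units, at reduced accuracy:
    y[i] = __sinf(x[i]) * __expf(-x[i]);
}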
Maximize Instruction Level Parallelism via Dual-Issue Instruction Dispatch
According to discussion in this thread, dual-instruction dispatch on NVIDIA GPUs isn't a huge driver of improved performance and not worth too much thought. But it is worth noting that the warp scheduler can issue up to two instructions per cycle if there are multiple instructions with no data or control-flow dependencies. Writing a kernel so that there are fewer dependencies and more diversity in the types of execution units being used (FP32 / tensor core / load-store units / etc.) may enable a higher instruction dispatch rate per cycle.
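A hedged sketch (my example, not the thread's) of giving the scheduler independent work: splitting a dot product into two accumulator chains so consecutive FMAs do not depend on each other. It assumes n is even.

__device__ float dot2(const float* a, const float* b, int n)
{
    float acc0 = 0.0f, acc1 = 0.0f;
    for (int i = 0; i < n; i += 2) {
        // The two chains are independent, so their FMAs can be in flight concurrently.
        acc0 = fmaf(a[i],     b[i],     acc0);
        acc1 = fmaf(a[i + 1], b[i + 1], acc1);
    }
    return acc0 + acc1;
}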
References
[1] https://people.eecs.berkeley.edu/~kubitron/cs252/handouts/papers/RooflineVyNoYellow.pdf
[2] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/contents.html
[3] https://developer.nvidia.com/blog/how-access-global-memory-efficiently-cuda-c-kernels/
[4] https://developer.nvidia.com/blog/cuda-pro-tip-increase-performance-with-vectorized-memory-access/
[5] http://homepages.math.uic.edu/~jan/mcs572f16/mcs572notes/lec35.html
[6] https://slideplayer.com/slide/12553635/
[7] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#constant-memory
[8] https://developer.download.nvidia.com/CUDA/training/register_spilling.pdf
[9] https://on-demand.gputechconf.com/gtc-express/2011/presentations/cuda_webinars_WarpsAndOccupancy.pdf
[10] https://people.maths.ox.ac.uk/gilesm/cuda/lecs/lec3-2x2.pdf
[11] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#branch-predication
[12] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#loop-counters-signed-vs-unsigned
[13] https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#math-libraries
[14] https://developer.nvidia.com/blog/programming-tensor-cores-cuda-9/
[15] https://forums.developer.nvidia.com/t/fma/32965
|
OpenCL Kernel Memory Optimization - Local vs. Global Memory
Hi,
I’m new to OpenCL and I consider using it for some graphics computation where using an OpenGL shader seems not to be natural. Before I actually do so I thought I’d try how much of a performance improvement I could get using OpenCL on my Nvidia GTX 460 over my CPU. For this reason, I implemented a simple skeleton skinning algorithm, once on the CPU, without multithreading but using the Eigen library, which provides SSE-optimized vector and matrix libraries, and once in an OpenCL kernel executing on the GPU. The vertices, bone matrices etc. are generated randomly on application start. I repeat the whole skinning several times so that it executes long enough to get meaningful timing results.
First I simply tried a kernel where I have as much work-items as I have vertices, each one generating one output vertex. I quickly saw that this is not a good idea because performance was even worse than on the CPU. I figured this was in essence a problem of too many memory accesses, mainly to the bone matrices, which are an array of float16-vectors that is addressed four times in each work-item. Then I changed the algorithm so that each work-item handles multiple output vertices, one after the other, so that I have less work-items. In each work-group I create a copy of the bone matrices in local space, and further accesses to these matrices come from local space. The interesting part of my C++ code looks like this:
#define NUM_BONES 30
#define NUM_VERTICES 30000
#define NUM_VERTICES_PER_WORK_ITEM 100
#define NUM_ANIM_REPEAT 1000
uint64_t PerformOpenCLSkeletalAnimation(Matrix4* boneMats, Vector4* vertices, float* weights, uint32_t* indices, Vector4* resVertices)
{
File kernelFile("/home/alemariusnexus/test/skelanim.cl");
char opts[256];
sprintf(opts, "-D NUM_VERTICES=%u -D NUM_REPEAT=%u -D NUM_BONES=%u -D NUM_VERTICES_PER_WORK_ITEM=%u", NUM_VERTICES, NUM_ANIM_REPEAT, NUM_BONES, NUM_VERTICES_PER_WORK_ITEM);
cl_program prog = BuildOpenCLProgram(kernelFile, opts);
cl_kernel kernel = clCreateKernel(prog, "skelanim", NULL);
cl_mem boneMatBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_BONES*sizeof(Matrix4), boneMats, NULL);
cl_mem vertexBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*sizeof(Vector4), vertices, NULL);
cl_mem weightBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*4*sizeof(float), weights, NULL);
cl_mem indexBuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, NUM_VERTICES*4*sizeof(uint32_t), indices, NULL);
cl_mem resVertexBuf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, NUM_VERTICES*sizeof(Vector4), NULL, NULL);
uint64_t s, e;
s = GetTickcount();
clSetKernelArg(kernel, 0, sizeof(cl_mem), &boneMatBuf);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &vertexBuf);
clSetKernelArg(kernel, 2, sizeof(cl_mem), &weightBuf);
clSetKernelArg(kernel, 3, sizeof(cl_mem), &indexBuf);
clSetKernelArg(kernel, 4, sizeof(cl_mem), &resVertexBuf);
size_t globalWorkSize[] = { NUM_VERTICES / NUM_VERTICES_PER_WORK_ITEM };
size_t localWorkSize[] = { NUM_BONES };
for (size_t i = 0 ; i < NUM_ANIM_REPEAT ; i++) {
clEnqueueNDRangeKernel(cq, kernel, 1, NULL, globalWorkSize, localWorkSize, 0, NULL, NULL);
}
clEnqueueReadBuffer(cq, resVertexBuf, CL_TRUE, 0, NUM_VERTICES*sizeof(Vector4), resVertices, 0, NULL, NULL);
e = GetTickcount();
return e-s;
}
The associated program/kernel looks like this:
inline float4 MultiplyMatrixVector(float16 m, float4 v)
{
return (float4) (
dot(m.s048C, v),
dot(m.s159D, v),
dot(m.s26AE, v),
dot(m.s37BF, v)
);
}
kernel void skelanim(global const float16* boneMats, global const float4* vertices, global const float4* weights, global const uint4* indices, global float4* resVertices)
{
int gid = get_global_id(0);
int lid = get_local_id(0);
local float16 lBoneMats[NUM_BONES];
lBoneMats[lid] = boneMats[lid];
barrier(CLK_LOCAL_MEM_FENCE);
for (int i = 0 ; i < NUM_VERTICES_PER_WORK_ITEM ; i++) {
int vidx = gid*NUM_VERTICES_PER_WORK_ITEM + i;
float4 vertex = vertices[vidx];
float4 w = weights[vidx];
uint4 idx = indices[vidx];
resVertices[vidx] = (MultiplyMatrixVector(lBoneMats[idx.x], vertex * w.x)
+ MultiplyMatrixVector(lBoneMats[idx.y], vertex * w.y)
+ MultiplyMatrixVector(lBoneMats[idx.z], vertex * w.z)
+ MultiplyMatrixVector(lBoneMats[idx.w], vertex * w.w));
}
}
Now, per work-item I have only one access to the global boneMats, when I create the local copy, and there are even far fewer work-items executing altogether. Then I have NUM_VERTICES_PER_WORK_ITEM*4 accesses to the local array afterwards. As I understand it, local memory should be way faster than global memory, so I thought this would greatly improve performance. Well, the opposite is the case: when I let lBoneMats alias the global boneMats instead, I actually get better performance than with the kernel listed above.
What did I get wrong here?
Thanks in advance!
I have still not found a solution for this problem. Does nobody have any idea, or anything I could try?
There is the Nvidia Visual Compute Profiler which can tell you some performance information.
Looking at your launch parameters, you are launching 300 work items arranged into 10 groups of 30 work items each. On Nvidia GPUs, threads are grouped into warps - a group of 32 threads. Each multi processor executes hundreds of threads. Such a large number of threads are needed to hide the latency involved in accessing either global or local memory (although local memory accesses are not as costly).
There are several multi-processors, hence you need thousands of threads to adequately use a GPU. That is why your global memory version is faster I think. I would suggest changing your launch settings so that there are more threads (keep the shared memory though, just do less work per work item). Also, use multiples of 32 threads.
a couple of things:
a) even if the gpu is slower than a cpu, not having to copy the data to/from the device could help. (i.e. go straight from opencl to opengl buffer or whatever)
b) looks like your loop is accessing memory (vidx) in pretty much worst-case access pattern. each work-item should access adjacent values where possible.
To me it looks like it would be best implemented as a one-work-item-per-output algorithm as you said you first tried. Or even use 4 work items per result (one for each of idx.xyzw). Assuming the indexing is correct (i.e. set vidx == get_global_id(0)), i would have expected that to be faster.
I think you’re also confusing what local work size is - it is just the modulo of the total work size which is allocated to a given work-unit (i.e. shares LDS and some other resources). It isn’t a separate dimension from global work size. It is only a ‘coincidence’ that your code is working and num_vertices/num_vertices_per_work_item is a multiple of num_bones.
LDS is way faster than uncached global memory, but if you're only accessing 30 'bones', then it should fit into the L1 cache, in which case LDS isn't that much of a boost (it depends on the hardware, not sure what it is on nvidia).
There is the Nvidia Visual Compute Profiler which can tell you some performance information.
I have tried it with CUDA 4.2, but couldn't really see what it was trying to tell me. With CUDA 5.0 I can't get OpenCL profiling to work at all. From what I've read, OpenCL profiling seems to be broken in 5.0, and the driver of 4.2 does not compile on my 3.5 kernel, so I guess I have to wait until Nvidia fixes that (I'm sceptical as to whether they will at all).
I would suggest changing your launch settings so that there are more threads
Seems like this was the main problem. I have no idea what I do now that I haven’t done with my first implementation, but now it’s about four times faster than on my CPU+SSE.
Also, use multiples of 32 threads.
I don’t really know how to do that with my algorithm, as I have no way of controlling the number of vertices, but when I launch one thread for every vertex as I do now, I guess the bit of processing time wasted is not really significant anymore.
even if the gpu is slower than a cpu, not having to copy the data to/from the device could help. (i.e. go straight from opencl to opengl buffer or whatever)
Maybe I’ll do this in my final implementation.
looks like your loop is accessing memory (vidx) in pretty much worst-case access pattern. each work-item should access adjacent values where possible.
I can’t see any nice way to do this. Maybe presorting the vertices by their bone matrix indices, but that would be quite costly (although it would have to be done only once) and I don’t like the idea of changing the vertex order.
To me it looks like it would be best implemented as a one-work-item-per-output algorithm as you said you first tried. Or even use 4 work items per result (one for each of idx.xyzw). Assuming the indexing is correct (i.e. set vidx == get_global_id(0)), i would have expected that to be faster.
As I mentioned before, I do this now, and for whatever reason it’s faster than what I first tried.
I think you’re also confusing what local work size is - it is just the modulo of the total work size which is allocated to a given work-unit (i.e. shares LDS and some other resources). It isn’t a separate dimension from global work size. It is only a ‘coincidence’ that your code is working and num_vertices/num_vertices_per_work_item is a multiple of num_bones.
That’s what I thought it is: The number of work-items (threads) per work-group (thread block). I know I have to choose execution parameters so that the total number of work-items is evenly dividable by it.
In my understanding, changing local work size should not affect performance, assuming shared memory is not used (otherwise the more work groups you have, the more global-to-shared memory copies have to be done, assuming every work group always copies the same amount of data) and it is still a multiple of the warp size (because otherwise the warps aren’t fully utilized).
One question I still have which I couldn’t guess from Nvidias docs is: Can a single warp be made up of threads from different work groups (thread blocks)?
Although there might be room for further improvement, at least I can see that the GPU is actually faster than the CPU, so I’m satisfied for now. The only thing I can’t quite guess is why the same program runs about 8 times slower than the CPU on my old GeForce 8200, even when I optimize the execution parameters. I guess that is because it’s an onboard GPU and global memory accesses are even slower than on a GPU with dedicated memory. The same is true when I execute the CL program on my CPU device, but it might just be too massively multithreaded for a CPU, I haven’t tested this enough yet.
Anyway, thanks for your help!
That’s what I thought it is: The number of work-items (threads) per work-group (thread block). I know I have to choose execution parameters so that the total number of work-items is evenly dividable by it.
In my understanding, changing local work size should not affect performance, assuming shared memory is not used (otherwise the more work groups you have, the more global-to-shared memory copies have to be done, assuming every work group always copies the same amount of data) and it is still a multiple of the warp size (because otherwise the warps aren’t fully utilized).
Changing local work size will affect performance outside of just using LDS for a bunch of reasons: everything in the workgroup executes in lock-step, which affects cache and branching stuff, it affects how many registers are required which affects how many workgroups can be executed concurrently, etc.
BTW use a worksize multiple of 64 if you also want it to work well on AMD hardware, as that is the minimum it requires.
One question I still have which I couldn’t guess from Nvidias docs is: Can a single warp be made up of threads from different work groups (thread blocks)?
A warp is just a hardware implementation thing specific to nvidia. But afaik, all threads in a warp are executing the same code at the same time: so they have to be part of the same opencl workgroup for it to make any sense.
i.e. i believe there is a 1:N mapping of opencl workgroup to nvidia warp.
Although there might be room for further improvement, at least I can see that the GPU is actually faster than the CPU, so I’m satisfied for now. The only thing I can’t quite guess is why the same program runs about 8 times slower than the CPU on my old GeForce 8200, even when I optimize the execution parameters. I guess that is because it’s an onboard GPU and global memory accesses are even slower than on a GPU with dedicated memory. The same is true when I execute the CL program on my CPU device, but it might just be too massively multithreaded for a CPU, I haven’t tested this enough yet.
Anyway, thanks for your help!
Well, there's a large variation in performance between GPU cards, so OpenCL can't magically speed up the slower ones.
And to get that performance you need to access memory properly - i.e. coalesced.
I have a follow up question to this. In my GPU there are 384 cores, 8 compute units (streaming multiprocessors), so there 384/8 = 48 streaming processors on each compute unit. Given that NVidia warp size is 32, which means 32 threads execute in step, doesn’t that mean 16 SPs are not doing anything on each cycle? That doesn’t seem to make sense to me. Can someone help to clarify?
Thanks,
J
|
How to optimize tail effect?
Hi Experts,
I have been optimizing a kernel function for several days now, and I believe there is no more room for optimization in terms of mathematics and algorithms. It’s time to focus on CUDA programming for further optimization.
According to Nsight Compute, the tail effect is the biggest bottleneck.
My device is the AGX Orin, with 16 SMs. Currently, gridDim = dim3(121, 3), blockDim = 128 (4 warps).
I understand that the tail effect is caused by an imbalance in the workload of the last wave, and I may need to fill the remaining blocks in the last wave.
I have also referred to CUDA Pro Tip: Minimize the Tail Effect | NVIDIA Technical Blog
I would like to know if my understanding is correct, and what would be the best approach for optimization.
Thanks!
Tail Effect: Est. Speedup: 50%
A wave of thread blocks is defined as the maximum number of blocks that can be executed in parallel on the target GPU. The number of blocks in a wave depends on the number of multiprocessors and the theoretical occupancy of the kernel. This kernel launch results in 1 full waves and a partial wave of 171 thread blocks. Under the assumption of a uniform execution duration of all thread blocks, the partial wave may account for up to 50.0% of the total kernel runtime with a lower occupancy of 32.6%. Try launching a grid with no partial wave. The overall impact of this tail effect also lessens with the number of full waves executed for a grid.
3 * 121 = 363 = 192 + 171
You have two waves, one with 192 blocks, one with 171 blocks. That is quite balanced.
Not sure, where the 32.6% is coming from.
I would doubt that you achieve a speed-up of 50% without getting more work to the GPU with one invocation (e.g. two iterative kernel launches put into one kernel launch).
Thank you for your reply!
This is a single kernel, so there’s no issue with iterative or consecutive launches.
In fact, I often doubt whether the optimization suggestions provided by Nsight Compute can actually be achieved…
This is a single kernel, so there’s no issue with iterative or consecutive launches.
I was more thinking of combining with other launches to make the invocation more efficient.
With two waves the amount of tail effect is higher than with more waves. Or with parallel kernel launches on other streams, etc.
If you want to explore it further, the usual suggestion (I think) would be to rewrite the kernel as a grid-stride loop, and choose a grid size to exactly fill your GPU. That will reduce your kernel to a single wave. The 50% number is based on the idea that that will be the most efficient launch (with respect to occupancy and the tail effect), and “could” result in a kernel duration equivalent to a single wave of your current realization.
That could be viewed as an “upper bound” or best case outcome. So if you wish, interpret the nsight compute suggestions that way. The percentage speedup is “the most you could possibly get from this change, if all other conditions were perfect for the effect.”
Since nsight compute can’t (currently) do the level of analysis needed to accurately predict the actual speedup from a complex refactoring, then it gives the output in that sense.
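(For readers unfamiliar with the pattern, a hedged sketch of a grid-stride kernel sized for a single wave might look like the following; the per-element work and the blocks-per-SM value are placeholders, not the poster's actual kernel.)

__global__ void process(float* data, int n)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        data[i] = data[i] * 2.0f;   // placeholder for the real per-element work
}

void launch(float* d_data, int n)
{
    // AGX Orin has 16 SMs; blocks per SM depends on this kernel's achievable occupancy.
    int blocksPerSM = 8;   // assumed value; check with the occupancy API or Nsight Compute
    process<<<16 * blocksPerSM, 128>>>(d_data, n);
}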
Thanks for the hints!
After switching to grid-stride and reusing threads, the duration dropped to 18 microseconds, showing 25% improvement. Additionally, the Tail Effect: Est. Speedup: 50% suggestion has disappeared from the optimization opportunities section.
I’m curious whether the 50% improvement mentioned in the NCU report can really be achieved.
Hi,
with just the original two waves, one should in general take special care when using grid-stride loops that the work is distributed evenly across the SMs and SM partitions, to avoid one SM partition finishing early while others still have several more blocks to do (which acts like an implicit tail effect). That is especially relevant if one SM runs more than 4 warps (i.e. one SM partition has more than 1 warp).
In your case, each SM Partition has one warp, so it is less relevant.
I want to mention briefly that the previous results had some bugs.
The final results, which used grid-stride loops, did not show significant gains. I might share the code later.
I wouldn’t really expect much gains from the grid-stride refactoring by itself, for a case like this (2 waves). As curefab points out, you may not be changing the work breakdown structure very much, and even though you have eliminated the “wave effect”, you probably haven’t done much to actually make the imbalance go away. Naturally this would depend to some degree on how much work you are actually doing per element or per thread. If the work per element or per thread is “large” then the refactoring is likely to make little difference.
To see something approaching a large difference in performance (like 50%) you would need a situation where the work per thread or per element was almost zero, such that the overhead of work scheduling (e.g. deposition of blocks, traversal of loops, etc.) was dominating your performance. Such characteristics are not a hallmark of good CUDA code anyway, although people sometimes wrestle with those cases as well.
|
Introducing Machete, a Mixed-Input GEMM Kernel Optimized for NVIDIA Hopper GPUs
Oct 14, 2024
Mixed-input quantization is a technique that processes weights and activations at different precisions in neural networks. The most common implementation is w4a16 quantization (e.g., GPTQ or AWQ), which uses 4-bit quantized weights and 16-bit activations (float16 or bfloat16). This approach primarily aims to reduce GPU memory requirements for model execution.
In most Large Language Model (LLM) workloads, model weights consume the majority of GPU memory. By quantizing weights from float16 to 4-bit, a remarkable ~4x reduction in memory needed for weight storage can be achieved. Additionally, smaller weights can lead to speedups when the linear layer is memory-bound (i.e., limited by weight loading), which occurs when the batch or sequence length is small, resulting in activations being much smaller than the weights.
LLM inference involves a mix of both compute-bound and memory-bound iterations. On modern NVIDIA Hopper GPUs, current state-of-the-art mixed-input linear kernels struggle in compute-bound scenarios (as illustrated in Figure 1).
We are excited to announce Machete, Neural Magic's latest advancement in mixed-input quantization performance. This kernel is the spiritual successor to the Marlin kernels created by Elias Frantar and integrated into vLLM by Neural Magic. While Marlin was specifically designed for Ampere generation GPUs and struggles on Hopper GPUs (namely H100), Machete was built on top of the work highlighted in CUTLASS 3.5.1 (see example #55 as our initial starting point). This allows it to efficiently target Hopper and beyond, performing well in both compute and memory-bound regimes. It optimizes the on-the-fly upconversion of weights required in mixed-input scenarios, and hides this latency by overlapping it with compute and data-movement.
Machete is now available in vLLM 0.6.2+ as a backend for w4a16 and w8a16 compressed-tensors models, for GPTQ models, and more to come. With Machete, you can now serve Llama 3.1 70B on a single H100 GPU with up to 5 user requests per second while maintaining a median time to first token (TTFT) of <250ms and a median time per output token (TPOT) of <100ms (using chunked prefill on the ShareGPT dataset).
With Machete you can now also hit those same serving targets for Llama 3.1 405B using 4 H100 GPUs with up to 3 user requests per second.
NOTE: Our use of the term "mixed-input" rather than "mixed-precision" is deliberate, as it more accurately describes the specific case we're addressing. The term "mixed-precision" has traditionally been used to describe a broader range of cases, namely including the case where activations and weights share the same type but are accumulated into a different type (i.e. w8a8).
Optimizing Mixed-Input Linear Operations: Weight Pre-Shuffling
While Neural Magic's previous blog post on Marlin covered many optimizations for mixed-input linear operations, this article focuses on an important previously undiscussed optimization used by both Marlin and Machete: weight pre-shuffling. To understand the benefits of weight pre-shuffling, we first need to examine how data is fed into the tensor cores in NVIDIA GPUs.
When performing a matrix multiplication on a GPU, the process begins by loading the data for a small subproblem from global memory into an SM's local shared memory. This data is then transferred to threads and passed to the tensor cores. Each thread is responsible for loading and holding a specific piece of data in its registers. The layout of this data in registers follows a fixed, complex pattern that varies depending on the instructions used (such as mma or wgmma).
In PyTorch, weights are typically stored in row-major or column-major format, which doesn't align with the intricate layout required by tensor cores. This mismatch creates a challenge: while we can load data into shared memory in row-major format, we must shuffle it when loading into registers to match the tensor core requirements.
For the purposes of illustration in the following animations, we're using a fictitious GPU that has 8 threads per warp and tensor cores that operate on 8x8 chunks of the weight matrix. While simplified, this closely matches the types of layouts used by NVIDIA tensor cores, albeit scaled down. In these diagrams, we only show the weight matrix (and not activations) as loading and up-converting the weight matrix is the main challenge in mixed-input linear layers.
For standard data types (16, and 32-bit), NVIDIA provides an efficient ldmatrix instruction to perform this shuffling in hardware; i.e. ensuring that the right data gets shuffled to the right thread.
However, this instruction isn't available for 4-bit types. When working with 4-bit elements and using float16 or bfloat16 as the compute type, we need to load 4-bit elements to match the thread layout for a 16-bit type. Without a 4-bit ldmatrix instruction, we would naively need to resort to performing four 8-bit shared memory loads per tensor core operation. In this case the data shuffling is being handled by software using multiple shared memory loads. These additional shared memory loads are detrimental to performance, as they add latency and the use of only 8-bit loads restricts the shared-memory to registers bandwidth.
To overcome this limitation, we can reorder the data ahead of time. By doing so, we can perform a single 32-bit load from shared memory instead of four 8-bit loads. This approach is much more efficient in terms of shared memory bandwidth and latency, ensuring we don't get bottlenecked waiting for shared memory. Importantly, all global memory reordering is done in advance, so it doesn't impact inference time. This pre-shuffling and its effect on how the data gets loaded into memory can be seen in Animation 3.
We can push this optimization even further. By interleaving data for four tensor operations together (e.g., interleaving four 8x8 tiles in the visualization), we can perform 128-bit loads—the widest shared load instruction currently available on CUDA devices.
After loading the weight parameters into the correct registers, they must be upconverted to 16-bit. Animation 5 demonstrates this process, highlighting how the interleaving of tiles can simplify upconversion and save instructions. By interleaving tiles in global memory, the data is arranged so that, once in registers, multiple nibbles can be efficiently extracted and up-converted in parallel. This is achieved by shifting the nibbles into the lower four bits of their destination registers using simple bit shifts and masking operations and then expanding in-place. If you're curious about these interleaved upconverts, you can find them here.
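As a greatly simplified, hedged illustration of the nibble-to-fp16 step described above, extracting two unsigned 4-bit weights from one packed byte could look like the snippet below. This is not Machete's actual code: the real kernels operate on whole 32-bit registers of interleaved nibbles and convert several values at once with the bit tricks linked above.

#include <cuda_fp16.h>

__device__ void unpack_two_nibbles(unsigned char packed, __half& lo, __half& hi)
{
    lo = __int2half_rn(packed & 0xF);         // lower nibble
    hi = __int2half_rn((packed >> 4) & 0xF);  // upper nibble
}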
What's New in Machete vs. Marlin?
These types of repackaging techniques have already been used in previous mixed-input kernels (namely Marlin and AWQ), so what does Machete do differently? The motivation for developing Machete mainly stems from the poor performance of current mixed-input linear kernels on the NVIDIA Hopper architecture when it comes to larger, more compute bound matrix-multiplications, as can be seen in Figure 1.
New Tensor Core Instructions (wgmma)
For Marlin, the poor performance on Hopper architecture primarily stems from the use of outdated 'mma' tensor core operations. To achieve peak FLOPs on NVIDIA Hopper, the new 'wgmma' instructions must be utilized. Using only 'mma' instructions results in a loss of approximately 37% of peak compute throughput [1, 2].
Marlin's weight pre-shuffling, being hand-derived and implemented, makes it challenging to easily adapt to the new 'wgmma' layouts. Machete circumvents this issue by employing CUTLASS CUTE layout algebra to construct a description of the repacked layout for a full weight matrix using instruction layout definitions available in CUTLASS. This approach should, in theory, facilitate easier adaptation to any future instructions as well as different types (w4a8 is already in progress).
A key challenge with 'wgmma' is that for a matrix multiplication C = AB, only A and C can be sourced from registers, while B must be sourced from shared memory. Since we upconvert the 4-bit weights to 16-bit floating-point values in registers, we can avoid storing them back into shared memory by computing Y^T = W^T * X^T instead of Y = XW. This ensures the weights act as 'A' (the left-hand input operand) with respect to the 'wgmma' instructions, allowing them to be sourced directly from registers. CUTLASS enables us to compute the transposed problem more easily by simply manipulating layouts.
Tensor Memory Accelerator (TMA)
The Tensor Memory Accelerator (TMA) represents a significant advancement in NVIDIA Hopper GPUs' memory handling capabilities. This new hardware feature is designed to asynchronously copy blocks of multidimensional data, known as subtensors, from global memory to shared memory. The introduction of TMA brings several important benefits to the table.
Primarily, TMA reduces register pressure by offloading data movement operations, thereby freeing up CUDA cores for other computational tasks. It also simplifies address calculations by handling these complex operations in hardware. Furthermore, TMA's ability to operate independently of compute operations allows for better overlap between memory transfers and computations. Machete takes advantage of this new hardware feature by leveraging CUTLASS's existing TMA infrastructure.
Warp-specialization
Warp-specialization, introduced in CUTLASS 3.0, divides warps into data movement (producer) and computation (consumer) roles. This technique aims to better overlap data movement and computation, improving memory and tensor core latency hiding. Machete incorporates this approach by leveraging existing infrastructure in CUTLASS. For a more detailed explanation of warp-specialization in CUTLASS, refer to this COLFAX Research blog.
Machete Performance
With all of the above optimizations in place, we can see that Machete outperforms the other mixed input linear kernels for batch size / prefill seq. len 128+. At batch sizes of 128 and above, the performance is competitive with FP16, meaning there is no longer a trade-off between prefill performance or high-batch size performance and improved low-batch and decode performance.
In Figure 6 we can see the end-to-end serving performance of these kernels on a 4-bit Llama 3.1 70B on a single H100. At the higher user request rates (3+ req/s), we see a geomean speedup of 29% for input token throughput and 32% for output token throughput.
In Figure 7 we can see the end-to-end serving performance of these kernels on a 4-bit Llama 3.1 405B on 4 H100s. At the higher user request rates (3+ req/s), we see a geomean speedup of 42% for both input and output token throughput.
Future Work
As we continue to develop and refine Machete, we have several exciting areas of focus for future improvements:
These initiatives underscore our commitment to pushing the boundaries of mixed-input quantization performance. By addressing these areas, we aim to make Machete an even more powerful and flexible tool for efficient LLM inference on NVIDIA Hopper GPUs and beyond. We're excited about the potential impact of these improvements and look forward to sharing updates as we progress. Subscribe to our blog, follow us on X, and join our bi-weekly vLLM office hours to stay tuned for more exciting AI developments.
About Neural Magic
Neural Magic is advancing the performance of AI inference by optimizing large language models (LLMs) for efficient and scalable deployments. As a leading contributor to the open-source vLLM project, we develop and implement key techniques like sparse architectures, mixed-precision quantization, and performance optimizations to enhance inference speed, reduce memory footprint, and maintain model accuracy. Neural Magic is also a member of NVIDIA Inception, a program designed to nurture startups, and is thankful to the CUTLASS team for their valuable work. Our goal is to empower developers to build and deploy high-performance LLMs across different hardware configurations without compromise. To learn more, visit neuralmagic.com or check out our GitHub to accelerate your AI workloads today.
|
Optimizing memory-bound kernel (memory dependency around 95% in NVVP)
I have a piece of code that, according to Nvidia Visual Profiler, is memory bound and so far I haven’t managed to improve it further after passing some arguments as constants.
If you copy/paste and compile the following code, NVVP shows that both kernels are memory bound, have around 89% occupancy even though the kernel configurations should fully saturate the device, and the SMs from 1 to 7 are around 88-90% utilization while the other ones are closer to 100%.
Error checking was omitted for easier reading, but cuda-memcheck reports no errors for any array length I use.
#include <iostream>
__global__ void init_array(float *array, size_t len)
{
for(size_t idx = blockDim.x * blockIdx.x + threadIdx.x; idx < len; idx += gridDim.x * blockDim.x)
array[idx] = idx;
}
__global__ void transform_array(float *in, float *out, const float scale_factor, size_t len)
{
for(size_t idx = blockDim.x * blockIdx.x + threadIdx.x; idx < len; idx += gridDim.x * blockDim.x)
out[idx] = in[idx] * scale_factor;
}
int main(void)
{
float *array_in, *array_out;
size_t length = 100000000;
const unsigned short block_Size = 256, grid_Size = 200;
const float factor = 0.5;
// Allocate and initialize memory
cudaMallocManaged(&array_in, length * sizeof(float));
cudaMallocManaged(&array_out, length * sizeof(float));
cudaMemset(array_in, 0, length * sizeof(float));
cudaMemset(array_out, 0, length * sizeof(float));
cudaDeviceSynchronize();
// Fill the input array
init_array <<< grid_Size, block_Size >>> (array_in, length);
cudaDeviceSynchronize();
// Transform input and write to output array
transform_array <<< grid_Size, block_Size >>> (array_in, array_out, factor, length);
cudaDeviceSynchronize();
cudaFree(array_in);
cudaFree(array_out);
return 0;
}
The first kernel just initializes the input array with some numbers using a strided loop, and the second kernel saves the product of each input element and some scaling factor (which I calculate with other functions, but here it is just an arbitrary value) to the output array, again using the same strided loop. Essentially doing a lot of work in global memory.
How do you normally get rid of/alleviate this bottleneck?
You won’t eliminate the memory bottleneck for a memory bound code. The operations you are doing here are so trivial they are going to be memory bound.
There is likely very little you can do to make them run substantially faster. At this point, if you want to improve things, you are in the realm of what I call “ninja methods”. Things like tuning kernel size (e.g. number of blocks - easily doable with your grid-stride loop method) for the number of SMs in your device to minimize the tail effect, attempting to see if larger vector loads will improve things (slightly), etc.
Ninja methods are referred to here:
[url]http://on-demand.gputechconf.com/gtc/2012/presentations/S0514-GTC2012-GPU-Performance-Analysis.pdf[/url]
These methods in my experience don’t usually provide more than a few percent improvement.
At a higher level of abstraction, programmers who have multiple operations like this to do will sometimes seek to fuse operations. This means combining multiple kernel calls to do more work in a single kernel call. The objective is to do as much work as possible per load and store operation in global memory. Your two operations could be trivially fused into a single kernel, for example. Fusing to reduce kernel calls also saves the overhead of additional kernel calls - another ninja topic (usually).
In any event, these trivial memory-bound kernels are “fully optimized” when the kernel runs at the rate of memory bandwidth. For example, determine the total number of loads and stores done by a kernel, in bytes, and divide by the kernel execution time, in seconds. This bytes/sec number is then compared to a proxy measurement of peak achievable bandwidth (e.g. such as the device-to-device memory bandwidth reported by bandwidthTest sample code). When your kernel is running at that rate, it probably cannot be optimized further. You are done, excepting higher-level “meta” work like algorithm redesign or fusing of operations/kernels.
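(As a hedged illustration of the fusion point, here is a sketch of the two example kernels above collapsed into one, so the intermediate values are never written to and re-read from global memory. This is only valid if array_in is not needed on its own afterwards.)

__global__ void init_and_transform(float *out, const float scale_factor, size_t len)
{
    for (size_t idx = blockDim.x * blockIdx.x + threadIdx.x; idx < len;
         idx += gridDim.x * blockDim.x)
        out[idx] = (float)idx * scale_factor;   // fuses init (value = idx) with the scaling step
}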
Thanks for these clarifications, txbob. So I think it is just what it is. And the Titan V with that ridiculous memory bus is probably laughing at it.
But while I was reading this document and trying some of the ninja techniques, like increasing the grid size to raise the occupancy (with 200 it had 89%, with 1000 it goes to 98%) and shaving some milliseconds here and there, I found by accident, by clicking the wrong kernel to profile, that the array reduction we worked on some weeks ago actually has some branch divergence, doesn't it?
It is exactly the last lines:
if (tid == 0)
array_out[blockIdx.x] = sdata[0];
Only a few threads will execute it, so I don’t think it is all that harmful, yes? NVVP shows an increase in divergence as the grid size increases.
There is probably an expression in English like, you aim at something but hit something else…
Many, many kernels will have some divergence. your grid-stride loops, for example, are prone to some small divergence as well. These sorts of things are usually insignificant, from a performance perspective.
|
Keeping data between kernels calls
Hi,
I have a quite long kernel that I want to optimize. My idea is to split this single kernel into multiple smaller ones that can be better optimized. The problem I see now is that the result of one kernel is the input data of the next. So far the only solution I have found to keep the data between different kernel calls is the use of global memory. If I understand correctly, the lifetime of shared memory does not allow it to be used across different kernels, so that is unfortunately not an option.
Is there any other way to keep the data?
Thanks.
My idea is to split this single kernel into multiple smaller ones that can be better optimized.
Better optimized in which sense? What factors will be driving the performance improvements that are being envisioned?
So far the only solution I found to keep the data between different kernel calls is the use of global memory.
Keeping data resident on the GPU and minimizing data transfers between host and device is the correct approach.
Hi,
thanks for your fast reply. Some part of the kernel can make use of a whole warp to solve a single problem, other parts may use just a single thread for a fast calculation. This is alternating through the kernel. So I want to separate the parts that can make use of more than a warp to solve a problem from the other that needs just a single thread.
Hope I could make my point clear?
I would hope that the kernel launch comprises more than one warp, because that would still be very inefficient. As for the single-threaded portions: I can see how this may be necessary if there is simply no inherent parallelism available, but I am not sure what kind of situation “just a single thread for a fast calculation” refers to.
Generally speaking, the kind of situation you describe, with varying degrees of parallelism available throughout a kernel, is not uncommon and may well be the highest-performing option. A thorough design process would prototype both this design approach as well as the split-kernel approach to assess which one is faster for this particular use case.
The other aspect that is likely worth investigating is whether there are algorithmic changes that could be used to increase the amount of available parallelism, so as to avoid single-threaded computation. Alternatively or additionally, since single-threaded execution is ideally suited to the CPU, one might alternatively look into how these portions of the computation could be moved to the host, maybe in the form of pre-computation.
A thorough search of the literature might turn up interesting ideas. How to best map work to execution resources on massively parallel computational platforms is still an area of active research. Sometimes a switch from traditional data structures can be the key to better parallelization. By now the applicability of GPUs to just about any kind of computational task has been looked into, so a literature search is highly likely to yield some useful information.
The process I like to use for best results is to first spend some quality time thinking up a design and prototyping it (usually with variants) by myself. I put serious effort into this. In a second stage I then search the literature extensively, going back multiple decades, looking for anything even remotely applicable.
Having made a serious first effort by myself usually provides me with enough insight that I can make a reasonable assessment whether some idea from the literature is (1) roughly along my own line(s) of though, (2) inferior to my own ideas (3) superior (at least in parts) to my best efforts. It is fairly rare (but happens), that I realize that an approach from group (3) is so vastly superior to my own ideas that I adopt it wholesale. Most of the time my final design is the synthesis of my original ideas and ideas from the literature: the “stand on the shoulder of giants” approach.
Thanks again for your detailed answer. Yes, I agree that prototype is one way to go (already started).
Just to clarify my approach since I think I didn’t explain very well what I try to do.
With ‘single threaded’ I do not mean that I start a kernel with a single grid and 32 threads. What I mean is that each thread gives me a single solution. I’m also happy with the performance of these parts.
In other parts of the kernel I can use 4 thread to improve performance, also giving me a single solution, and in other parts I can make use of a complete warp.
If all of these parts work in a single kernel, I need to configure the kernel in such a way that the part that needs the most threads determines how many solutions my kernel can calculate.
Sample:
If I start now a kernel like this:
grid <64, 1, 1> and block <128, 1, 1>
I use 64 * 128 threads, which gives me 256 solutions, since the last part needs 32 threads for a single result. On the other hand, I have most of the GPU doing nothing during parts 1 and 2.
My idea for splitting the kernel is to run part 1 in its own kernel with good occupancy and store all results in memory. Then run part 2, reading the results from part 1 and continuing the calculation, again with a configuration that makes good use of the GPU, and finally run the last part, again with its own kernel configuration.
I hope to get better performance because parts 1 and 2 can use their own configuration, resulting in better occupancy. I can also use the unused MC for something else in a different stream (if the used resources allow this). But this is something for later development, if splitting the kernel gives the expected results.
Thanks for your help and insights.
Hi TrailingStop,
Variant 1:
it can be done by e.g. using a single kernel and have each warp calculate 32 solutions. In your example: For the first part of the kernel, each thread of the warp calculates one solution; for the second part, you have a for loop from 0 to 3 and within the loop body 4 threads each cooperate for 8 independent calculations concurrently times 4 loop iterations; for the third part you have a for loop from 0 to 31 with only one solution calculated for each iteration.
The data can be exchanged by warp shuffle or by shared memory.
Variant 2:
A different approach would be to have whole warps doing nothing instead of some threads of a warp. Keeping warps idle (e.g. by blocking on a __syncthreads()) is free, because the blocked warps are not scheduled and do not use up compute resources, except that the effective occupancy lowers.
Your example would be a (too) extreme case. But basically it would go: Start 32x32 thread blocks. For part 1, one warp is active; for part 2, four warps are active; for part 3, thirty-two warps are active.
The data would be exchanged by shared memory.
Generally:
The thread indices of threads within a block can have different meanings throughout your kernel. You are not bound to map them 1:1 to your problem-space indices. Just see the threads as numbered resources. Between each __syncwarp() (or even __syncthreads()) you can redefine how the work is distributed onto threads, and you can add any for loop within the kernel on top of it.
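(A hedged sketch of this idea with made-up work in each phase: the same block of 128 threads is remapped between phases, synchronizing in between.)

__global__ void phased(const float* in, float* out, int n)
{
    __shared__ float buf[128];   // one slot per thread; launch with 128 threads per block
    int g = blockIdx.x * blockDim.x + threadIdx.x;

    // Phase 1: one thread per intermediate result.
    buf[threadIdx.x] = (g < n) ? in[g] * in[g] : 0.0f;
    __syncthreads();

    // Phase 2: remap the work: each warp now combines 32 intermediate values,
    // so the block produces blockDim.x / 32 results in this phase.
    if (threadIdx.x % 32 == 0) {
        float s = 0.0f;
        for (int i = 0; i < 32; ++i)
            s += buf[threadIdx.x + i];
        out[blockIdx.x * (blockDim.x / 32) + threadIdx.x / 32] = s;
    }
}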
Hi Curefab,
thanks a lot for this information. I will give them a try and check which one works best in my case.
Thanks!
|
Why performance is worse with CUBLAS- than with kernel-function
System:
CPU: Intel Core i5-4570
MSVS Community 2017 v15.9.7
Platform Toolset: Visual Studio 2017 (v141)
Build: Release x64
GPU: GeForce GT 640 (c.c. 3.0)
CUDA Compilation tools R10.1, V10.1.105
CUDA Driver Ver.: 10.1
I have been using CUDA for the last couple of months. The goal of my research is to develop a performance-optimized 2D DCT transform kernel function. The optimization targets short processing time. Since the transform is used for video processing, batches of data are processed. The transform can be described by the mathematical equation C = A * B * AT, where A and AT are predefined matrices. All matrices are of size 32 x 32.
My own kernel function was developed first, and to check the potential for improvement a variant using a cuBLAS function was developed as well. The function cublasgemmBatched() was used for this purpose, called twice for the two multiplications in the equation. The batch size is 12960. The results of both variants were compared at the end. I expected the cuBLAS variant of the transform to be faster, but the processing time with my kernel function is almost 10x shorter. How can this be explained?
Any idea where to search for an answer?
Should I try strided batched matrix multiplication with cublasgemmStridedBatched(), since I operate with square matrices only? Or is there another library that should be used to outperform the kernel function?
I know I ask too many questions :) but any suggestion is welcome.
There was a problem: I didn't have a warmup call. Its absence mattered most for the cuBLAS variant; after a warmup was added for both cuBLAS and my own kernel, the processing time for the cuBLAS variant was 1.38x shorter.
In the next iteration I moved to cublasgemmStridedBatched(). There cuBLAS showed even better performance; it was 1.73x faster.
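(A hedged sketch of what a strided-batched call could look like for the 32x32 case; the cuBLAS handle and device pointers are assumed to be set up elsewhere, cuBLAS expects column-major storage, and here every batch element has its own A, B and C.)

#include <cublas_v2.h>

void run_batched_gemm(cublasHandle_t handle, const float* d_A, const float* d_B, float* d_C)
{
    const int n = 32;
    const int batchCount = 12960;
    const float alpha = 1.0f, beta = 0.0f;
    const long long stride = (long long)n * n;   // elements between consecutive matrices

    // C_i = A_i * B_i for every 32x32 matrix in the batch.
    cublasSgemmStridedBatched(handle,
                              CUBLAS_OP_N, CUBLAS_OP_N,
                              n, n, n,
                              &alpha,
                              d_A, n, stride,
                              d_B, n, stride,
                              &beta,
                              d_C, n, stride,
                              batchCount);
}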
Using the NVIDIA Visual Profiler I identified that the low shared-memory efficiency (50%) in my own kernel had improvement potential. After rewriting my kernel for vectorized memory access with the float2 data type, shared-memory efficiency increased to about 98% and my kernel became 1.3x faster than cuBLAS.
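(A hedged sketch of the float2 idea, not the poster's DCT kernel: each thread moves two adjacent floats per 64-bit load/store, halving the number of memory transactions issued. It assumes the buffers are 8-byte aligned and the element count is even.)

__global__ void scale2(const float2* in, float2* out, float s, int len2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < len2) {
        float2 v = in[i];   // one 64-bit load brings in two floats
        v.x *= s;
        v.y *= s;
        out[i] = v;         // one 64-bit store writes both back
    }
}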
Now, I don't know why cuBLAS exhibits such low performance. In the profiler I see that the block size for the cuBLAS kernel is relatively small (8x8), which decreases the number of possible active warps per SM and lowers the occupancy (30%). Global store efficiency is also low, at 25%. Is the issue that cuBLAS GEMM is optimized for specific matrix sizes and my application with 32x32 matrices is not in that group? I will probably move to a Tesla K40 GPU and see whether the GPU architecture makes a difference.
Generally speaking, GEMM maps to dozens of different kernels, optimized for different GPU architectures, matrix sizes, transpose modes, matrix aspect ratios etc. For a given GEMM invocation a heuristic picks the most appropriate kernel(s). The heuristic may not always pick the optimal kernel, or none of the available kernels may be the perfect fit for a particular call to GEMM.
As I recall, batched GEMMs in particular were introduced primarily to deal with very small matrices, as some applications need to handle tons of matrices of size 3x3, 4x4, or thereabouts. Matrices of size 32x32 may be close to the upper limit of what batched GEMMs were targeted to handle; check the documentation.
With regard to the sub-optimal performance observed, consider filing an enhancement request with NVIDIA. You can file one by using the bug reporting form and prefixing the synopsis with “RFE:” to mark it as an enhancement request.
Realistically, given the age of the Kepler architecture, it is unlikely that improvements will be made for compute capability 3.x, but equivalent issues may affect newer architectures. The primary targets for performance improvements in libraries are the latest GPU architectures (at this time: Turing and Volta) although some amount of back-porting of such improvements to older architectures may occur.
|
For context: this WebGPU version achieves ~17% of peak theoretical performance of M2. With CUDA (i.e. CuBLAS), you can reach ~75% of peak performance for same matrix config (without tensor core).
Not on the same computer, CUDA doesn’t run on the integrated GPU of the Apple M2 Pro.
That single lib call must have used the AMX accelerator, which is separate from the cores and shared by a group of cores. So that AMX accelerator performance may be greater than of all CPU cores together. AFAIK, some Apple CPUs have one AMX accelerator for the big cores and another AMX accelerator for the smaller cores, but in any case there is no chance to hope that if you have obtained 1 TFLOP/s when running the program on 1 core you will get much more when running it on multiple cores, because all cores of the same type will use the same shared accelerator.
So that AMX accelerator performance may be greater than of all CPU cores together. AFAIK, some Apple CPUs have one AMX accelerator for the big cores and another AMX accelerator for the smaller cores, but in any case there is no chance to hope that if you have obtained 1 TFLOP/s when running the program on 1 core you will get much more when running it on multiple cores, because all cores of the same type will use the same shared accelerator.
nVidia tensor cores support int8, couple versions of FP16 (BF16 and the standard IEEE one) and FP19 which they call TensorFloat-32. I think Intel AMX only supports int8 and BF16. None of them supports FP32 let alone FP64 input numbers, which makes them completely useless for traditional GEMM stuff.
None of them supports FP32 let alone FP64 input numbers, which makes them completely useless for traditional GEMM stuff.
Apple's version is indeed interesting. I wonder why Apple hasn't exposed it to programmers, or implemented a BLAS library on top of that thing?
Using the Accelerate framework (which includes Apple's BLAS) is the only supported way for programmers to access the AMX. Reverse engineering the instruction set to access it directly is discouraged, because it's not a documented stable interface.
Now that they’re using “standard” SME, it shouldn’t be a problem to write SME assembly opcodes directly, although I suspect Apple themselves is still probably sparse on the documentation. I’m not aware if there’s any way to use intrinsics or something slightly higher level than inline-ASM, but lower level than the Accelerate framework.
It sounded like you claimed that using only one core you already reach 1 TFLOP/s, implying that you could reach more than that by using more cores, which is false. Now you have clarified that you actually claim that it is good that when using a single core you can reach the maximum throughput of the shared matrix operation accelerator. This is correct, but there is no essential difference between this and a Zen 5 CPU that reaches this throughput by using only half of the cores, while having the other half of the cores free to do any other tasks.
Now you have clarified that you actually claim that it is good that when using a single core you can reach the maximum throughput of the shared matrix operation accelerator.This is correct, but there is no essential difference between this and a Zen 5 CPU that reaches this throughput by using only half of the cores, while having the other half of the cores free to do any other tasks.
This is correct, but there is no essential difference between this and a Zen 5 CPU that reaches this throughput by using only half of the cores, while having the other half of the cores free to do any other tasks.
(Also, that’s a M2 number, since that’s what OP was talking about. Someone will presumably post M4 benchmarks for BLAS sometime soon, if they haven’t already.)
This is way way lower than what you claim M2 Pro is capable of and since I'm comparing it against the state-of-the-art datacenter CPU I'm curious how did you get to this number? M2 Pro core runs at much lower frequency, what it seems to be around ~3.4GHz. And I couldn't find any information about SVE vector widths supported nor number of FMAs.
M2 Pro core runs at much lower frequency, what it seems to be around ~3.4GHz. And I couldn't find any information about SVE vector widths supported nor number of FMAs.
In my desktop computer, I have a Ryzen 7 8700G CPU, which has 8 Zen 4 cores, 4.2 GHz base frequency, 65W TDP. Theoretically, when doing FP32 FMA, each CPU core can do 32 FLOP/cycle. At the base frequency, this translates into 134 GFlops per core. You're gonna need all 8 cores to achieve 1 theoretical TFlops. BTW, the integrated GPU inside the same 8700G processor can theoretically do 8.2 TFlops FP32.
BTW, integrated GPU inside the same 8700G processor can theoretically do 8.2 TFlops FP32.
Isn't it that Zen 4 doesn't have "native" support for AVX-512 but "mimics" it through 2x 256-bit FMA units? Because of this, a single AVX-512 instruction will occupy both FMA units, and therefore I think the theoretical limit for a single Zen 4 core should be half of the 134 GFLOPS number?
According to uops.info, Zen 4 cores can do two 8-wide FMA instructions per cycle, or one 16-wide FMA per cycle. See VFMADD132PS (YMM, YMM, YMM) and VFMADD132PS (ZMM, ZMM, ZMM) respectively; the throughput column is labelled TP. That's where the 32 FLOP/cycle number comes from.
> doesn't have "native" support for AVX-512 but "mimics" it through 2x 256-bit FMA units
That's correct, AVX-512 doesn't deliver more FLOPs on that CPU. The throughput of 32-byte FMA and 64-byte FMA is the same, 32 FLOP/cycle for FP32 numbers.
Right. This is where the discrepancy comes from. I counted FMA as a single FLOP.
For example, I have a GeForce 4070 Ti Super in my desktop. The chip has 8448 execution units; nVidia calls them CUDA cores, but I don't like the name. The more meaningful count is 66 cores, where each core can run 4 wavefronts of 32 threads each.
Anyway, these EUs can do one FP32 FMA each cycle, and the boost clock frequency is 2.61 GHz.
Multiplying these two numbers results in 22.04928E+12 cycles*EU/second; since each FMA counts as two floating-point operations, that matches the 44E+12 FLOPS peak FP32 performance nVidia reports for the GPU.
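As a quick sketch of the arithmetic above (the numbers are simply those quoted in the comment, not measured):

#include <cstdio>

int main() {
    // Figures quoted above for the GeForce 4070 Ti Super.
    const double execution_units = 8448;    // what nVidia markets as "CUDA cores"
    const double boost_clock_hz  = 2.61e9;  // boost clock
    const double flops_per_fma   = 2.0;     // one FMA = one multiply + one add
    const double peak = execution_units * boost_clock_hz * flops_per_fma;
    printf("Peak FP32: %.1f TFLOP/s\n", peak * 1e-12);  // prints ~44.1 TFLOP/s
    return 0;
}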
That can lead you to some pretty counter-intuitive optimizations because it's often faster to do more compute work if it means you touch less memory in the process.
It is not specific to GPUs: this kind of optimization is pretty common on CPUs too, where latency kills you and 200 cycles spent on compute can actually be faster than a single cache miss trying to fetch data. This is pretty common for many SIMD algorithms, actually. Memory is currently lagging behind compute on almost every type of modern hardware, and it will very likely get worse, not better.
As for handcoded assembly, do you believe that it would be financially sound to hand code and maintain thousands of kernels that way, even if you believed that they would be faster?
Why not? We do so for cryptographic primitives and video codecs. And why are you talking about "thousands of kernels"? AI programs only need a small number of different kernels, so it doesn't sound intractable.
That is not the case. What appears like a simple matmul operation actually requires these libraries to select which specific kernel, out of the many internally available, to execute. If you are curious to learn more, NVidia open sourced a library called Cutlass some years ago. And remember that is only what they are willing to open source.
How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance
(https://siboehm.com/articles/22/CUDA-MMM)(It's CUDA-specific, so there may be aspects that can't yet be ported to WGPU)
There are a few things that I wasn't able to figure out how to get access to, or wasn't sure were possible. For example, a lot of Simon's article takes advantage of the warp scheduler and warp tiling. I had a hard time finding information on whether that's even possible with my M2/Metal, and on the general memory access patterns. It seems like CUDA has better documentation in this regard.
The datasheet for the H100 SXM seems to indicate that it can only do ~1000 TFLOP/s peak.
I am not an expert in LLMs, but I don't think you can end up with a significant share of zeroed weights (~50%) in a converged network, so I think it is safe to say that the theoretical throughput for 99% of cases is really ~800 TFLOPS and not ~1600 TFLOPS as advertised.
Also does quantized matmuls.
As you see, I have implemented 32×32 tiling, using thread groups of 32×8 threads, two groupshared buffers to load tiles of the input matrices, and I accumulate numbers into local variables, 32 / 8 = 4 accumulators per thread.
I have implemented a profiler on top of D3D11_QUERY_TIMESTAMP and D3D11_QUERY_TIMESTAMP_DISJOINT queries, and tweaked the compute shader to minimize the time reported by these queries for my specific use case.
I have no experience with WebGPU but if you mean group shared memory, I think the support is available. See the demo: https://compute.toys/view/25
i'm excited to try subgroups though: https://developer.chrome.com/blog/new-in-webgpu-128#experime...
It would be cool to see if there's some way to get better access to those lower-level primitives, but I would be surprised. It does seem like subgroup support is a step in the right direction though!
The smoothness of an iPhone map zoom, on any device.
Any device except an iPhone, until Apple finally gets around to shipping WebGPU in Safari. Any year now...
https://developer.apple.com/documentation/safari-release-not...
"I have found that WebGPU is enabled by default now with iOS 18.2.
Apple has been working in the open on WebGPU. The WebKit source code has their latest WebGPU work in it. What hasn’t been known is their release schedule, but now with 18.2 it’s looking very promising that it will be on by default in that version."
Edit: I just pressed “Reset All to Defaults” under “WebKit Feature Flags” on my device running 18.2 beta, and the switch for WebGPU is on!! <3
|
How to optimize my cuda code?
I have a simple program; I just want to verify my GPU's real performance, but its result is not what I expected. I don't know how to explain it, or how to optimize my program, so I hope NV's experts can help me.
The details of my GPU are as follows:
Device 0: “NVIDIA RTX A4000”
CUDA Driver Version / Runtime Version 11.6 / 11.3
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 16109 MBytes (16891379712 bytes)
(48) Multiprocessors, (128) CUDA Cores/MP: 6144 CUDA Cores
GPU Max Clock rate: 1560 MHz (1.56 GHz)
Memory Clock rate: 7001 MHz
Memory Bus Width: 256-bit
L2 Cache Size: 4194304 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 102400 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
OK, let me introduce my program. It is very, very simple: it just executes "ffma" instructions. Details as follows:
The peak performance should be ~10 T FMA/s (×2 = ~20 TFLOPS), but my program only reaches about 6.5 T, just about 65% of peak performance.
I changed blocksPerSM to 2, 4, and 8, and threadsPerBlock to 128/256/512. Unfortunately, the results are all very similar, about 60%-65%.
I then profiled my program with NCU, and under "Roofline Analysis" it tells me: "The ratio of peak float (fp32) to double (fp64) performance on this device is 64:1. The kernel achieved 61% of this device's fp32 peak performance and 0% of its fp64 peak performance."
I think my program avoids memory accesses and register dependencies, so I don't understand why it only reaches 61% of peak.
I tried to read the profile information in NCU, but I still cannot find the reason for the poor performance.
I’ve uploaded my program and the profile file from NCU,
base_mac.tar.gz (3.7 KB)
repoprt.ncu-rep (12.6 MB)
Is there anyone who would like to teach me? I think the key point is reading the profile file, but I cannot understand it. Would anyone be willing to help?
I implemented the loop body in PTX; I think that avoids nvcc's optimization behavior.
Highly unlikely to be a good idea. The CUDA compiler is based on LLVM, an extremely powerful framework for code transformations, i.e. optimizations. If you run into the compiler optimizing away code that you don't want to have optimized away, create dependencies that prevent that from happening. Your chosen approach for measuring peak FP32 throughput appears to be the common method of using independent dot products. You would want to sum these dot products at the end and write the result to global memory to avoid dead code elimination.
Instruction caches on GPUs tend to be pretty small, and your massive loop may exceed the size of the instruction cache, which based on past experience may cost 3% of performance.
It is easier to fill up the SMs as much as possible using relatively fine granularity, e.g. use 128 threads per thread block as a starting point.
You may be better off using dot products of scalar floats instead of float4s. The latter is likely to result in higher register pressure.
You are unlikely to achieve more than 85% of theoretical FP32 throughput, as the ptxas compiler is unlikely to produce a perfect instruction scheduling with perfect register assignment. So there will be bubbles in the pipeline caused by register bank conflicts and execution pipe contention.
[Later: ]
Below is a simple test scaffold for measuring FP32 throughput. It is currently configured for the 9 year old low-end GPU in my web-browsing machine, for which it achieves 86% of theoretical peak FP32 throughput. Increase MAX_BLOCKS, REPS, ITER to adapt to your hardware. Then vary POLY_DEPTH to see how throughput changes.
#include <stdlib.h>
#include <stdio.h>
#define MAX_BLOCKS (65520)
#define THREADS_PER_BLK (128)
#define LEN (MAX_BLOCKS * 1024)
#define POLY_DEPTH (512)
#define REPS (2)
#define ITER (10)
#if defined(_WIN32)
#if !defined(WIN32_LEAN_AND_MEAN)
#define WIN32_LEAN_AND_MEAN
#endif
#include <windows.h>
double second (void)
{
LARGE_INTEGER t;
static double oofreq;
static int checkedForHighResTimer;
static BOOL hasHighResTimer;
if (!checkedForHighResTimer) {
hasHighResTimer = QueryPerformanceFrequency (&t);
oofreq = 1.0 / (double)t.QuadPart;
checkedForHighResTimer = 1;
}
if (hasHighResTimer) {
QueryPerformanceCounter (&t);
return (double)t.QuadPart * oofreq;
} else {
return (double)GetTickCount() * 1.0e-3;
}
}
#elif defined(__linux__) || defined(__APPLE__)
#include <stddef.h>
#include <sys/time.h>
double second (void)
{
struct timeval tv;
gettimeofday(&tv, NULL);
return (double)tv.tv_sec + (double)tv.tv_usec * 1.0e-6;
}
#else
#error unsupported platform
#endif
// Macro to catch CUDA errors in CUDA runtime calls
#define CUDA_SAFE_CALL(call) \
do { \
cudaError_t err = call; \
if (cudaSuccess != err) { \
fprintf (stderr, "Cuda error in file '%s' in line %i : %s.\n",\
__FILE__, __LINE__, cudaGetErrorString(err) ); \
exit(EXIT_FAILURE); \
} \
} while (0)
// Macro to catch CUDA errors in kernel launches
#define CHECK_LAUNCH_ERROR() \
do { \
/* Check synchronous errors, i.e. pre-launch */ \
cudaError_t err = cudaGetLastError(); \
if (cudaSuccess != err) { \
fprintf (stderr, "Cuda error in file '%s' in line %i : %s.\n",\
__FILE__, __LINE__, cudaGetErrorString(err) ); \
exit(EXIT_FAILURE); \
} \
/* Check asynchronous errors, i.e. kernel failed (ULF) */ \
err = cudaDeviceSynchronize(); \
if (cudaSuccess != err) { \
fprintf (stderr, "Cuda error in file '%s' in line %i : %s.\n",\
__FILE__, __LINE__, cudaGetErrorString( err) ); \
exit(EXIT_FAILURE); \
} \
} while (0)
__global__ void kernel (const float * __restrict__ src,
float * __restrict__ dst, int len)
{
int stride = gridDim.x * blockDim.x;
int tid = blockDim.x * blockIdx.x + threadIdx.x;
for (int i = tid; i < len; i += stride) {
float p = src[i] + 1.000001f;
float q = src[i] + 1.000002f;
for (int k = 0; k < REPS; k++) {
#pragma unroll POLY_DEPTH
for (int j = 0; j < POLY_DEPTH; j++) {
p = fmaf (p, p, 1.000001f);
q = fmaf (q, q, 1.000002f);
}
}
dst[i] = p + q;
}
}
int main (int argc, char *argv[])
{
double start, stop, nbr_of_fma;
float *d_a, *d_b;
/* Allocate memory on device */
CUDA_SAFE_CALL (cudaMalloc((void**)&d_a, sizeof(d_a[0]) * LEN));
CUDA_SAFE_CALL (cudaMalloc((void**)&d_b, sizeof(d_b[0]) * LEN));
/* Initialize device memory */
CUDA_SAFE_CALL (cudaMemset(d_a, 0x00, sizeof(d_a[0]) * LEN)); // zero
/* Compute execution configuration */
dim3 dimBlock(THREADS_PER_BLK);
int threadBlocks = (LEN + (dimBlock.x - 1)) / dimBlock.x;
dim3 dimGrid(threadBlocks);
printf ("burn: using %d threads per block, %d blocks, %f GB used\n",
dimBlock.x, dimGrid.x, 2*1e-9*LEN*sizeof(d_a[0]));
start = second();
for (int k = 0; k < ITER; k++) {
kernel<<<dimGrid,dimBlock>>>(d_a, d_b, LEN);
CHECK_LAUNCH_ERROR();
}
stop = second();
nbr_of_fma = (2.0 * POLY_DEPTH * REPS + 3.0) * LEN * ITER;
printf ("flop=%13.6e elapsed=%.5f sec throughput=%.5f FP32 GFLOPS\n",
nbr_of_fma * 2, stop-start, nbr_of_fma * 2 * 1e-9 / (stop - start));
CUDA_SAFE_CALL (cudaFree(d_a));
CUDA_SAFE_CALL (cudaFree(d_b));
return EXIT_SUCCESS;
}
Thank you kindly for your code.
I ran your code and got above 90% of peak performance. So cool!
But I found a difference between my poor code and your good code.
The FMA statements in your code look like this:
p = fmaf (p, p, 1.000001f);
q = fmaf (q, q, 1.000002f);
while my code looks like this:
c += a * b;
and then, I modified your code like this:
__global__ void kernel (const float * __restrict__ src,
float * __restrict__ dst, int len)
{
int stride = gridDim.x * blockDim.x;
int tid = blockDim.x * blockIdx.x + threadIdx.x;
for (int i = tid; i < len; i += stride) {
float p = src[i] + 1.000001f;
float q = src[i] + 1.000002f;
float r = 0.0f;
for (int k = 0; k < REPS; k++) {
#pragma unroll(512)
for (int j = 0; j < POLY_DEPTH; j++) {
r = fmaf (p, q, r);
}
}
dst[i] = r * 0.0001;
}
}
and modified the FLOP-counting statement like this:
nbr_of_fma = (POLY_DEPTH * REPS + 3.0) * LEN * ITER;
and then, the performance is about 50%.
As a comparison, I modified my blocksPerSM to 64 and modified the loop body like this:
#pragma unroll(128)
for(int i = 0 ; i < loop ; i ++){
vec_C.x = fmaf (vec_A.x, vec_B.x, vec_C.x);
vec_C.y = fmaf (vec_A.y, vec_B.y, vec_C.y);
vec_C.z = fmaf (vec_A.z, vec_B.z, vec_C.z);
vec_C.w = fmaf (vec_A.w, vec_B.w, vec_C.w);
}
*ptr_C = vec_C;
performance is about 60%.
Another modification looks like this:
loop = loop >> 1;
#pragma unroll(128)
for(int i = 0 ; i < loop ; i ++){
vec_A.x = fmaf (vec_A.x, vec_A.x, 0.0001f);
vec_A.y = fmaf (vec_A.y, vec_A.y, 0.0001f);
vec_A.z = fmaf (vec_A.z, vec_A.z, 0.0001f);
vec_A.w = fmaf (vec_A.w, vec_A.w, 0.0001f);
vec_B.x = fmaf (vec_B.x, vec_B.x, 0.0002f);
vec_B.y = fmaf (vec_B.y, vec_B.y, 0.0002f);
vec_B.z = fmaf (vec_B.z, vec_B.z, 0.0002f);
vec_B.w = fmaf (vec_B.w, vec_B.w, 0.0002f);
}
vec_C.x = vec_A.x + vec_B.x;
vec_C.y = vec_A.y + vec_B.y;
vec_C.z = vec_A.z + vec_B.z;
vec_C.w = vec_A.w + vec_B.w;
*ptr_C = vec_C;
This performance is also about 90%.
I think that because the third parameter of fmaf is a constant, this result (90%) does not reflect the real performance.
I think the key point is register reuse and data dependency, but your code also has dependencies (a = a * a + 1.0001f), and its performance is good. Why?
I checked the SASS of both versions.
The SASS for the FMAs in your code looks like this:
/*0130*/ FFMA R6, R6, R6, 1.0000009536743164062 ; /* 0x3f80000806067423 */
/* 0x000fe20000000006 */
/*0140*/ FFMA R7, R7, R7, 1.0000020265579223633 ; /* 0x3f80001107077423 */
/* 0x000fc60000000007 */
/*0150*/ FFMA R6, R6, R6, 1.0000009536743164062 ; /* 0x3f80000806067423 */
/* 0x000fe20000000006 */
/*0160*/ FFMA R7, R7, R7, 1.0000020265579223633 ; /* 0x3f80001107077423 */
/* 0x000fc60000000007 */
The SASS for the FMAs in my code looks like this:
/*0190*/ FFMA R17, R4, R8, R12 ; /* 0x0000000804117223 */
/* 0x020fe2000000000c */
/*01a0*/ FFMA R12, R5, R9, R13 ; /* 0x00000009050c7223 */
/* 0x000fe2000000000d */
/*01b0*/ FFMA R13, R6, R10, R14 ; /* 0x0000000a060d7223 */
/* 0x000fe2000000000e */
/*01c0*/ FFMA R14, R7, R11, R15 ; /* 0x0000000b070e7223 */
/* 0x000fe2000000000f */
/*01d0*/ FFMA R17, R4, R8, R17 ; /* 0x0000000804117223 */
/* 0x000fe20000000011 */
/*01e0*/ FFMA R12, R5, R9, R12 ; /* 0x00000009050c7223 */
/* 0x000fe2000000000c */
/*01f0*/ FFMA R13, R6, R10, R13 ; /* 0x0000000a060d7223 */
/* 0x000fe2000000000d */
/*0200*/ FFMA R14, R7, R11, R14 ; /* 0x0000000b070e7223 */
/* 0x000fe2000000000e */
/*0210*/ FFMA R17, R4, R8, R17 ; /* 0x0000000804117223 */
/* 0x000fe20000000011 */
/*0220*/ FFMA R12, R5, R9, R12 ; /* 0x00000009050c7223 */
/* 0x000fe2000000000c */
/*0230*/ FFMA R13, R6, R10, R13 ; /* 0x0000000a060d7223 */
/* 0x000fe2000000000d */
/*0240*/ FFMA R14, R7, R11, R14 ; /* 0x0000000b070e7223 */
/* 0x000fe2000000000e */
Obviously, they are different.
So, would you like to teach me how to avoid these dependencies, if they really exist?
The code I posted above is an ad-hoc adaption of some code I have had sitting around for quite a few years. I seem to recall that I chose the particular arrangement of FMAs used so as to minimize register bank conflicts, but I do not know for sure.
Having been retired for almost a decade, I am a hobbyist these days who will, often on a whim, explore some issue for an extended afternoon, then forget the exploratory code once my curiosity is satisfied: “The journey is the reward”. I rarely keep notes on what I tried and why.
Sustaining three-input operations at full speed is a challenge in all processor architectures due to the tremendous bandwidth required (3 read ports, 1 write port on the register file). The problem is exacerbated by multi-issue capability. A common way to boost register file bandwidth on average is to use register banks, each of which provides fewer read (and possibly, write) ports. To my knowledge, all NVIDIA GPUs use a (publicly undocumented) scheme of this nature to boost practically available bandwidth. Bank conflicts causing pipeline bubbles may occur intra-instruction or inter-instruction in case of multi-issue capability.
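To illustrate the general idea being discussed, here is a minimal sketch (not the code from this thread; names and counts are arbitrary) that splits one dependent accumulator into four independent chains, so consecutive FFMAs do not have to wait on each other's results:

// Hypothetical sketch: four independent accumulator chains instead of a
// single 'r', so the scheduler can interleave FFMAs and hide their latency.
__global__ void fma_throughput(const float* __restrict__ src,
                               float* __restrict__ dst, int len, int iters)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= len) return;
    float a = src[tid];
    float r0 = a + 1.0f, r1 = a + 2.0f, r2 = a + 3.0f, r3 = a + 4.0f;
    for (int i = 0; i < iters; i++) {
        r0 = fmaf(r0, r0, 1.000001f);
        r1 = fmaf(r1, r1, 1.000002f);
        r2 = fmaf(r2, r2, 1.000003f);
        r3 = fmaf(r3, r3, 1.000004f);
    }
    // Sum the chains and store, so the compiler cannot eliminate them as dead code.
    dst[tid] = r0 + r1 + r2 + r3;
}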
Thanks for your clear explanation.
I see: R4, R8, and R12 conflict (register bank: 4 % 4 == 0, 8 % 4 == 0, 12 % 4 == 0, so they are all in bank 0), and R5/R9/R13, R6/R10/R14, and R7/R11/R15 all conflict as well. So the performance is poor, as your explanation says, right?
/*0190*/ FFMA R17, R4, R8, R12 ; /* 0x0000000804117223 */
/* 0x020fe2000000000c */
/*01a0*/ FFMA R12, R5, R9, R13 ; /* 0x00000009050c7223 */
/* 0x000fe2000000000d */
/*01b0*/ FFMA R13, R6, R10, R14 ; /* 0x0000000a060d7223 */
/* 0x000fe2000000000e */
/*01c0*/ FFMA R14, R7, R11, R15 ;
But I don't know how to solve it, because the SASS is compiled by nvcc.
Maybe there is some trick I don't know; would you like to give me some further advice?
> but I don't know how to solve it, because the SASS is compiled by nvcc.
The nvcc compiler is aware of performance issues related to register usage. While it's certainly possible that this can be improved, it's also possible that this is the best tradeoff of choices about register usage.
It should be evident in a large dependent sequence like this that register usage changes will have side effects. Changes you make to address performance on one instruction may have a negative performance impact elsewhere in the dependent chain.
I don’t know of “tricks” to tell nvcc to reorganize its register usage. You could try playing with very crude, coarse controls like -maxrregcount switch to the compiler.
The other options I know of are:
Based on the 2nd option, you could file a bug to request study by the compiler team, if you think you can do better than the compiler. But you would need a well-documented example, showing a wholistic solution. Even then, there are probably knowledge gaps that make this sort of approach difficult.
To quote Wikipedia:
The problem of optimal register allocation is NP-complete. As a consequence, compilers employ heuristic techniques to approximate its solution.
I think it is reasonable to assume that the CUDA compiler engineers in charge of ptxas are fully aware of the latest developments in the field and that heuristics that consider the general problem constraint by GPU-specific restrictions (such as calling conventions , register aggregation for 64-bit operations or vector loads/stores, dual issue, and register banks) are in place. It would also be reasonable to assume that after 15 years of development, this fundamental building block of a compiler is mature.
That does not mean there could not be room for improvement, just that the burden of demonstrating noticeably improved performance from superior register allocation for specific cases lies with a prospective bug filer.
I have just heard that the register bank is "register number % 4", but I didn't find it in NV's public documentation.
So I must first confirm the register bank restrictions; would you be able to point me to some documents about it?
My compiler is nvcc 11.8, and the arch is sm_86.
I am not aware that the details of the GPU register file organization are disclosed in official CUDA documentation. I also cannot find any relevant details in papers from people who have explored GPU microarchitectures with targeted microbenchmarks. An older paper by NVIDIA subject matter experts,
Mark Gebhart, Stephen W. Keckler, Brucek Khailany, Ronny Krashinsky, William J. Dally, “Unifying Primary Cache, Scratch, and Register File Memories in a Throughput Processor”
gives some details, but I have no idea how these relate to newer architectures:
Each MRF bank is 16 bytes wide with 4 bytes allocated to the same-named architectural register for threads in each of the 4 SIMT lanes in the cluster. Each bank has a capacity of 8KB, providing a total of 256KB of register file capacity per SM. Registers are interleaved across the register file banks to minimize bank conflicts. Instructions that access multiple values from the same bank incur a cycle of delay for each access beyond the first. The operand buffering between the MRF and the execution units represents interconnect and pipeline storage for operands that may be fetched from the MRF on different cycles. Stalls due to bank conflicts are rare and can be minimized with compiler techniques.
Note that the use of vector types, such as float4, tends to impose an additional burden on register allocation. That is why earlier in this thread, I suggested using scalar computation for the exercise at hand (maximize floating-point throughput), and did so in the sample code I posted.
Thanks for your patience.
In practical terms, I see the risk of becoming mired in microarchitectural details that have no bearing on 99% of real-life CUDA code out there. Yes, it is cool to figure out how to get close to the theoretical peak FLOPS, but the resulting code often has very little similarity with code people write to address actual use cases. Not much has changed in that regard since I showed how to get 1 GFLOPS out of the AMD K7 (Athlon) processor ca. 1999.
Outside of special scenarios, much forward progress can be made by simply relying on the CUDA compiler and using feedback provided by the CUDA profiler.
“relying on the CUDA compiler and using feedback provided by the CUDA profiler”, yes, you are right
but my dilemma is that I cannot obtain useful clues from NCU's profile information.
You might want to start with the recommendations from the CUDA Best Practices guide before delving into the profiler. All profilers these days are very sophisticated utilities that one needs to spend some quality time with to get the full benefit. So give it some time, keep on experimenting and exploring.
When profiler use first became common some thirty years ago, their functionality was very limited and interacting with profilers was less overwhelming than today. There is a bit of a trade-off between ease of use and depth of analysis, I would say.
> So I must first confirm the register bank restrictions; would you be able to point me to some documents about it?
One source of information is pages 6-8 of Dissecting Volta. The same authors wrote Dissecting Ampere, where they state on page 29, that the Ampere register layout is the same.
thanks for your help
|
An Almost Pointless Exercise in GPU Optimization
Not everyone is able to write funky fused operators to make ML models run faster on GPUs using clever quantisation tricks. However, lots of developers work with algorithms that feel like they should be able to leverage the thousands of cores in a GPU to run faster than using the dozens of cores on a server CPU. To see what is possible and what is involved, I revisited the first problem I ever considered trying to accelerate with a GPU. What is unusual about my chosen problem is that it is officially pointless, so you ought not to be able to find any library that will accelerate this algorithm, because it isn't worth writing one! That makes it an interesting proxy for algorithms which aren't catered for by high-performance libraries written by experts, but can be structured to run thousands of threads in parallel.
TL;DR
Getting an existing C++ algorithm running on GPU is pretty easy, so it is a low bar to get started.
What I learned is the importance of minimizing thread divergence and maximizing effective memory
access speed. To do that effectively, I had to transform my algorithm into a state machine structure
so that every thread is operating mostly in lock-step, just with different data values.
My starting, interim and final code are open to see, along with a
summary of the steps I took, and the corresponding improvements or regressions at each stage. I want
to focus in this article on the thought process for deciding each step, mostly by explaining the
Nvidia Nsight Compute analysis which helped guide me.
In the end I managed to make my program run about 30x faster on my laptop using its GeForce GTX 1650
GPU, compared with its Core i7-9750H CPU. Only in the last two steps did it get meaningfully better
than with CPU though, so be prepared for early and frequent disappointment.
If you want just the summary of what worked, jump to Progression History.
A Pointless Program
Years ago, a colleague invited me to take on his Christmas programming challenge, which was to write
the fastest program he could to continuously play the card game Beggar My Neighbour. The aim, noted
by John Conway as definitely not worth
solving, is to try to find the
longest game, with the possibility that there might be a game which never ends. The longest game
found so far has 8344 turns: a rainy afternoon diversion perhaps, if you can sustain playing a card
every 2.589 seconds for six hours straight! You can see a history of new records, with a Python
program that verifies them, here:
https://github.com/matthewmayer/beggarmypython
The game play algorithm is almost trivial, but it has some notable features which turn out to be
relevant to the challenge of effectively leveraging a GPU.
Game play is completely deterministic, so the outcome is defined only by the initial sorting of
the deck. The problem is therefore embarrassingly parallel, since there are
653,534,134,886,878,245,000 (roughly 6.535 × 10^20) distinct deals, and we can play as
many as we want in parallel, in any order.
The algorithm tracks game state by using a few state variables and nested branching logic. This
is easy to write and verify, and actually reasonably efficient to run on a single CPU core.
The algorithm is very compact in terms of code size and data size. The search loop to play many
games while tracking the best ones so far is also very compact.
CPU Starting Point
My initial C++ program to search for long games is
here. It is just a port of the Python
program with a search loop that runs continuously, shuffling the deck randomly between games. I
implemented two simple optimizations that seemed obvious:
a 64-entry circular buffer for each player's hand and the discard pile, combined with a bit mask step to dereference the first and last index pointers. This avoids extra instructions and branches to handle wrap-around of the first and last index values as cards are added and removed (a minimal sketch of this buffer follows the list).
swap two cards between each game, to use fewer random numbers; we don't need each deal to be
completely random, but just have a random difference from the previous one.
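A minimal sketch of the circular-buffer-plus-mask idea from the first bullet (names are illustrative, not the project's actual code):

#include <cstdint>

struct CardPile {
    uint8_t  cards[64];   // 64 entries is enough for a full 52-card deck
    uint32_t first = 0;   // index of the next card to play
    uint32_t last  = 0;   // index one past the most recently added card

    // Masking with 63 replaces the wrap-around branch on a power-of-two buffer.
    void    push(uint8_t c) { cards[last++ & 63] = c; }
    uint8_t pop()           { return cards[first++ & 63]; }
    bool    empty() const   { return first == last; }
};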
The program can use multiple CPU cores by just running separate copies of the search loop in
different threads, starting with different RNG seeds. Each search loop does its own random walk
through potential deals.
On my laptop, throughput peaks at 2.9 million deals per second, using 12 threads to match the
number of logical CPU cores.
Initial GPU Port
The beauty of General Purpose GPU programming is that you can often get started quickly with the
code you already have. There is a good chance that only a few adaptations are needed to convert a
program that uses multiple threads on CPU to one that runs with even more threads on GPU, as long as
you don't have extreme memory requirements (for code or data or both) which force breaking up the
work into smaller pieces.
GPU cores have a similar range of machine instructions to those of CPUs, so plain algorithms will
compile readily enough to give reasonable single core efficiency; you don't need to transform your
program to use a subset of variable types or data structures or code constructs, or special
parallelization primitives or libraries. You mainly just need to cope with some changes to library
functions (such as the random number generator in my case), and finessing of class structures to
graft in the global functions needed to launch work on the GPU.
You do need to be ready for disappointment though.
If your code is similar to mine, founded on nested branching logic, you may find that the GPU won't
even be able to match the CPU performance at first. This is to be expected: CPUs are designed for
running unrelated complex branching logic in each thread, dedicating significant chip area on
circuitry to predict branches, speculatively execute multiple paths, and do lots of caching (1 MB
per logical CPU core in my laptop, for instance). CPU cores also run fast, with 3-4GHz max speed being
fairly typical. By comparison, GPUs are much lighter on hardware per core for caching, branch
prediction etc, and top out at perhaps 1.5-2GHz (unless you have an extreme cooling gaming GPU).
They come with faster memory though, to help compensate.
But the net effect is that an algorithm will probably run 2-3 times faster on a single CPU core
than it will on a single GPU core, from a combination of the clock speed ratio and more aggressive
single thread acceleration. You need to figure out how to make good use of the thousands of slow GPU
cores in order to outperform a few fast CPU cores.
My initial port to GPU ran at about
1.4M deals per second (once thermal throttling kicks in), using 2048 threads (128 blocks of 16
threads each). Not an encouraging start, but in hindsight about what I should have expected.
Learn to use Nsight Compute
Early on, I decided to get comfortable using the Nsight Compute tool to analyse the GPU portion of
my code in spectacular detail, rather than trying to rely on intuition and the high-level
utilization figures from nvidia-smi. The approach I settled on was to step through the execution of
the first couple of GPU kernel launches (the kernel here being my global function which runs the
core search and game play loop for a predefined number of iterations), and then profile the next
kernel run to see how much of the hardware capability was being used effectively.
The first, unflattering, report that the tool shows is the "Speed of Light" summary: i.e. the
percentage of theoretical peak performance on compute and memory bandwidth utilization. My first
program scored around 12% for compute, and 28% for memory. (For the same program, nvidia-smi would
report 88% and 28% utilization respectively, underscoring how misleading its information can be when
you are trying to optimize your algorithm in the early stages.)
There were many detailed metrics underneath the headline figures which could be examined, and
various analysis warnings that point out potential problem areas. Many of these sounded esoteric,
but there were two reasonably clear actionable warnings:
The first one was easy to address: assign at least 32 threads per block (not 16), so we don't leave
warp capacity unused for not even trying. I ended up settling on 32 blocks of 32 threads, which
increased throughput to 2.3M deals per second. nvidia-smi now reported 95% and 13% for compute
and memory utilization. Nsight Compute reports we have improved average active threads per warp from
2.4 to 3.6. That left Thread Divergence as the key warning to address.
Thread Divergence
If you have learned about CUDA programming before, you may recall that thread divergence occurs when
not all threads in the same warp are executing the same instruction. (Each group of 32 consecutive
threads in a block will always execute together as a unit, known as a warp, and GPU hardware is
heavily optimized around running these warps of threads very efficiently on the assumption that the
threads are normally executing instructions in unison, working on closely related pieces of the same
computation.)
It is part of the beauty of General Purpose GPU programming that thread divergence is allowed, and
is handled automatically by the hardware as well as it can be, by simply letting the threads in the
warp take turns to run their next instruction, when they are different. Subsets of the warp that do
share the next instruction will execute together, so degradation in performance is proportional to
how much the threads have diverged from each other.
In the worst case, where every thread is at its own unique point in the code, the threads are
effectively time sliced on the hardware, and so running at 1/32 of the hardware's potential. From
the Nsight Compute warning, I could see we were quite close to that point:
[Warning] Instructions are executed in warps, which are groups of 32 threads. Optimal instruction
throughput is achieved if all 32 threads of a warp execute the same instruction. The chosen launch
configuration, early thread completion, and divergent flow control can significantly lower the
number of active threads in a warp per cycle. This kernel achieves an average of 3.6 threads
being active per cycle. This is further reduced to 3.1 threads per warp due to predication. The
compiler may use predication to avoid an actual branch. Instead, all instructions are scheduled,
but a per-thread condition code or predicate controls which threads execute the instructions.
Try to avoid different execution paths within a warp when possible. In addition, ensure your
kernel makes use of Independent Thread Scheduling, which allows a warp to reconverge after a
data-dependent conditional block by explicitly calling __syncwarp().
Getting about 3 active threads per warp means I'm only tapping into about 1/10th of the GPU's
compute capacity, at best. Not exactly what you would assume from the 95% compute utilization figure
reported by nvidia-smi; that figure really just means I'm doing something on all of the GPU's
Streaming Multiprocessor or SM units about 95% of the time, even if that something is very
inefficient at using each SM, as in this case.
I highlighted the most relevant part of the remediation advice above, which is essentially to remove
the nested branching logic as much as I can. To do that in my program, where each thread is working
on a different game, I realised I would have to rewrite the core game play function. The original
version uses position within the code's nested branching structure to encode part of the game state;
I needed to replace that with an explicit data representation of all pieces of game state. Then
every step of the inner loop would be executed in unison across all threads playing their own games,
just with different data values.
Rewrite to use a lookup table
To make the inner loop purely data driven, I chose to introduce a state transition table. Game state
is now tracked explicitly in variables which are treated as inputs to the transition lookup table,
followed by a set of actions that is always performed on every iteration using data values from the
lookup table.
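As a rough sketch of what such a table can look like (the states, fields, and sizes here are invented for illustration and are not the project's actual table):

#include <cstdint>

enum class Action : uint8_t { PlayCard, PayPenalty, CollectPile, GameOver };

struct Transition {
    uint8_t next_state;   // state to move to after this turn
    uint8_t next_player;  // whose turn is next (the discard pile counts as a player)
    Action  action;       // what every thread does this iteration, driven purely by data
};

// Indexed by [current_state][card_class]; in the real program this would be
// filled in by hand to encode the rules of Beggar My Neighbour.
constexpr Transition kTransitions[4][2] = {
    {{1, 1, Action::PlayCard},    {2, 2, Action::PayPenalty}},
    {{0, 0, Action::PlayCard},    {2, 2, Action::PayPenalty}},
    {{3, 0, Action::CollectPile}, {3, 0, Action::CollectPile}},
    {{3, 0, Action::GameOver},    {3, 0, Action::GameOver}},
};

Each inner-loop iteration then does a single lookup, kTransitions[state][card_class], and applies the resulting action, so every thread executes the same instruction sequence with different data values.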
A critical realization for implementing this cleanly in my case was to notice that the discard pile
can be treated as a third player in the game, with some extra states to track when the discard pile
is "playing" its cards to the player who won the last trick. With that mental twist in place, a
fairly straightforward lookup table became possible to write by hand. The code for this version is
here; it works on both CPU and GPU, but
it is slower on both: about half the speed of the baseline version on CPU, and two-thirds on
GPU.
Being slower on CPU was expected, since it requires more instructions to manipulate more state
variables, and there is more memory access to lookup state transitions. We also have forced the
adding of the discard pile to the winning player's hand on every trick to be done slowly, as
individual turns with all the overhead needed for that. However, on GPU I had hoped to see gains
because now more threads in each warp should be able to execute in unison.
In fact Speed-of-Light compute and memory utilization figures did improve, to 17% and 38%
respectively, but Thread Divergence was still listed as a remediation warning. We are at least up
from an average of 3.6 active threads per warp to 5.3, but that is small comfort given the actual
speed is now about back to where I first started on GPU, which is well behind the CPU performance.
Ah, be careful about function exits
What I had forgotten is that thread divergence will also occur when threads are exiting from a
nested function inside the kernel. My game play loop would always play one game (inside the function
play), then return to the search loop to swap two cards before calling play again. It basically
didn't matter that inside the play inner loop thread divergence has been largely solved, because
each thread in a warp will still finish at different times, depending on how many turns are needed
for their respective games. So they usually diverge at game end, and the whole warp of threads must
wait for the longest game amongst them to finish. Stats on the range of game lengths shows a minimum
of about 33 turns, an average of about 250 turns, and a max somewhere in the many thousands of turns
at least. That's a lot of variation.
With that realization, my next program refactoring was to include the logic for game completion
book-keeping and switching to a new game as another (necessarily conditional) step in the inner game
playing loop. This slows down the inner loop even further, but allows the threads to stay mostly
converged across multiple games. To support this, I pre-create a backlog of games to play, ready for
the next game to be picked very quickly from inside the inner loop by whichever thread is available
first. (This introduces the only inter-thread synchronization mechanism needed so far, which is the
use of a CUDA atomic operation to read and increment the next_deal_to_play index, which barely
affects speed but solves the race condition.)
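A sketch of that atomic hand-off (hypothetical names; the project's actual code differs):

// Each thread claims the next unplayed deal with a single atomicAdd, so no
// two threads ever play the same deal.
__device__ unsigned int next_deal_to_play = 0;

__device__ bool try_claim_deal(unsigned int num_deals, unsigned int* out_index)
{
    // atomicAdd returns the old value, so every caller gets a unique index.
    unsigned int idx = atomicAdd(&next_deal_to_play, 1u);
    if (idx >= num_deals) return false;  // backlog exhausted
    *out_index = idx;
    return true;
}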
In order to allow the threads to run as long as possible in this synchronized inner loop, I decided
to use a big chunk of main GPU memory to hold a large backlog, which so far has barely been used
(just for the search loop's best-game-so-far records).
We now get to a near-final version of the program, which can be recreated from the latest
version using some conditional
compilation definitions.
This version fills a large (eg. 1M entry) backlog of deals to play in GPU memory (using another
kernel of cooperating GPU threads working in parallel), which is then processed in parallel by 1024
threads (32 blocks of 32 threads). Performance did indeed improve over the initial version with the
lookup table, but only to about the same speed as the baseline version, which was achieved once
I had tweaked the number of blocks and threads. What gives?!
Ah, memory speed really matters
Nsight Compute reveals Speed-of-Light is about the same as before (12% compute and 37% memory
utilization), and again lists a number of warnings. This time, some other warnings now seem more
relevant and actionable (as I've highlighted below):
[Warning] All pipelines are under-utilized. Either this kernel is very small or it doesn't issue
enough warps per scheduler. Check the Launch Statistics and Scheduler Statistics sections for
further details.
[Warning] Every scheduler is capable of issuing one instruction per cycle, but for this kernel
each scheduler only issues an instruction every 6.0 cycles. This might leave hardware resources
underutilized and may lead to less optimal performance. Out of the maximum of 8 warps per
scheduler, this kernel allocates an average of 1.00 active warps per scheduler, but only an
average of 0.17 warps were eligible per cycle. Eligible warps are the subset of active warps that
are ready to issue their next instruction. Every cycle with no eligible warp results in no
instruction being issued and the issue slot remains unused. To increase the number of eligible
warps either increase the number of active warps or reduce the time the active warps are
stalled.
[Warning] On average each warp of this kernel spends 2.4 cycles being stalled waiting for a
scoreboard dependency on a L1TEX (local, global, surface, texture) operation. This represents
about 39.7% of the total average of 6.0 cycles between issuing two instructions. To reduce the
number of cycles waiting on L1TEX data accesses verify the memory access patterns are optimal for
the target architecture, attempt to increase cache hit rates by increasing data locality or by
changing the cache configuration, and consider moving frequently used data to shared memory.
Thread Divergence is still listed as a warning, though the average active threads per warp is now at about 9. Looking at the "source" view of the kernel profile, which shows per-instruction info such as the execution counts, average active threads, distribution of instruction stall reasons etc, it looks like there are lots of times when the inner loop instructions are stalled waiting for access to results from memory.
So it looks like there are two clear problems we can address:
Ensure there are more (warps of) threads available to schedule, so they can hide the
micro-latencies that are naturally experienced by any one warp.
Reduce the times when the inner loop is stalled accessing GPU memory.
The first issue can in theory be addressed by playing with the blocks and threads configuration.
However, changing that doesn't really make a difference: we are primarily bottlenecked on accessing
data from GPU memory.
Use Shared Memory as much as possible
The best improvement we can make now is ironically to stop using main GPU memory as much as we can.
In my case, that means replacing the single large backlog of deals with a very small backlog per
block, held in Shared Memory.
As you may recall from CUDA Programming primers, Shared
Memory is the fastest memory area
(other than registers allocated by the compiler) available to programmers, but the most limited in
size. On my laptop's GPU, it is limited to 48KB, the same amount of memory as my first home
computer (from the Apple II era). This memory is per-block and only available while each specific
block is executing; its contents will be lost when that block ends, so relevant info needs to be
copied to main memory as final output.
Thankfully for my program, using this only involves a modest code change, and basically means the
inner game loop runs for fewer iterations before the local backlog is exhausted and has to be
replenished. Also each small backlog is shared by fewer threads, so there is more opportunity for
some threads to finish early and increase thread divergence.
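A rough sketch of the per-block shared-memory backlog (sizes and names are invented; the real code differs, and bounds handling is simplified):

#define DECK_BYTES    52
#define LOCAL_BACKLOG 64

__global__ void play_games(const unsigned char* __restrict__ global_backlog,
                           unsigned int deals_in_backlog)
{
    __shared__ unsigned char local_deals[LOCAL_BACKLOG][DECK_BYTES];
    __shared__ unsigned int  local_next;

    // Block-wide cooperative copy of one batch of deals into shared memory.
    unsigned int base = blockIdx.x * LOCAL_BACKLOG;
    for (unsigned int i = threadIdx.x; i < LOCAL_BACKLOG * DECK_BYTES; i += blockDim.x) {
        local_deals[i / DECK_BYTES][i % DECK_BYTES] =
            global_backlog[base * DECK_BYTES + i];
    }
    if (threadIdx.x == 0) local_next = 0;
    __syncthreads();

    // Threads now claim deals from the fast shared-memory backlog.
    for (;;) {
        unsigned int idx = atomicAdd(&local_next, 1u);
        if (idx >= LOCAL_BACKLOG || base + idx >= deals_in_backlog) break;
        // ... play local_deals[idx] with the lookup-table inner loop ...
    }
}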
With this
change (which
included changing blocks and threads to 16 and 128, in order to try and provide more warps to the
scheduler as noted above), performance on GPU finally moved ahead of CPU for the first time, and by
a fairly impressive margin, reaching about 40M deals per second (vs. 3M on CPU)!
Nsight Compute is still saying we are memory bottlenecked though. Final chance to squeeze further!
Make Shared Memory stretch as far as possible
My final change is to recognise that the core data structures are using more memory than they need
to, since I'm using an enum to represent 5 possible card values, and enum by default is
represented as an int in C++, which is treated as a 32-bit word in CUDA. Similarly, the lookup
table data values are all defined as int, but far fewer bits would suffice. I had initially
wondered whether native word sized values were more efficient at the individual instruction level,
but actually GPUs (like most CPUs) efficiently support sub-word data sizes and even arbitrary
bit field operations at the instruction level, so that isn't an important concern.
With a change to specify uint8 as my enum base type, add appropriate bit field declarations in the
lookup table struct, and use a more compact representation of deals in the backlog (not the 64 entry
circular buffer representation used for playing fast), I am able to squeeze a longer backlog into
the 48KB of shared memory, and also reduce the memory bandwidth needed for lookup operations and
other game play steps.
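A small sketch of the kind of size reduction described above (the exact field widths are guesses, not the project's layout):

#include <cstdint>

enum class Card : uint8_t { Plain = 0, Jack, Queen, King, Ace };  // fits in one byte

struct PackedTransition {
    uint8_t next_state  : 4;  // small indices need only a few bits each
    uint8_t next_player : 2;
    uint8_t action      : 2;
};
static_assert(sizeof(PackedTransition) == 1,
              "each lookup entry now occupies a single byte instead of several ints");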
The effect is rather gratifying: my final
version
now hits over 100M deals per second, at least until thermal throttling brings it back down to
95M or so.
Nsight Compute is still saying I'm memory bound, but at this stage I'm thinking that may be just
how it is. The algorithm is so light on computation that it will typically be waiting on memory
(even the super-fast Shared Memory) no matter what. The next step, if it were possible to see how to
restructure the game playing loop in a suitable way, would be to try and ensure we created coalesced
memory access patterns (where threads in a warp all reference directly adjacent memory locations
that allow the hardware to make them one read/write operation). But that seems very unlikely to be
possible with each thread still working on its own deal.
Maybe there is still some potential for improvement from tweaking block and thread counts, and
paying attention to memory layout and other micro-details. I'm not holding my breath!
Progression History
From initial CPU version to final decent GPU version, here was the history of changes and results.
All performance numbers are in millions of deals per second.
|
Optimizing Sequential cuBLAS Calls for Matrix Operations—Alternatives to Kernel Fusion?
I am currently working on a CUDA project where my code involves a sequence of matrix multiplications followed by activation functions. Typically, such dependent, sequential operations can be optimized using kernel fusion to minimize shared memory access, enhancing overall performance.
To streamline my implementation, I opted to use cuBLAS for handling the matrix multiplications. However, I’ve found that cuBLAS doesn’t support kernel fusion, which seems like a missed opportunity for optimization in terms of reducing memory overhead and improving execution speed.
Given this context, I am seeking advice on alternative methods to optimize these sequential cuBLAS calls. Are there techniques within CUDA or associated libraries that can mimic the effects of kernel fusion, or perhaps a way to efficiently manage these operations to achieve similar performance gains? Any suggestions on optimizing memory usage or overlapping computations would also be greatly appreciated.
I share this problem. The way I understand it, cuBLASDx aims to facilitate kernel fusion for BLAS operations, but currently it looks to only support matrix multiplication, which is not enough in my case.
You can use cublasLt for some fusion cases, or use CUTLASS directly.
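For readers unfamiliar with the cublasLt route mentioned above, here is a rough sketch (not from this thread; double-check against the current cublasLt documentation) of fusing a ReLU epilogue into an FP32, column-major matmul. Error checks and handle cleanup policy are simplified.

#include <cublasLt.h>

// D = relu(alpha * A * B + beta * C), with the ReLU fused via the epilogue.
void matmul_relu(cublasLtHandle_t lt,
                 const float* A, const float* B, float* C,
                 int m, int n, int k, cudaStream_t stream)
{
    float alpha = 1.0f, beta = 0.0f;

    cublasLtMatmulDesc_t op;
    cublasLtMatmulDescCreate(&op, CUBLAS_COMPUTE_32F, CUDA_R_32F);
    cublasLtEpilogue_t epi = CUBLASLT_EPILOGUE_RELU;  // the fused activation
    cublasLtMatmulDescSetAttribute(op, CUBLASLT_MATMUL_DESC_EPILOGUE,
                                   &epi, sizeof(epi));

    cublasLtMatrixLayout_t aL, bL, cL;
    cublasLtMatrixLayoutCreate(&aL, CUDA_R_32F, m, k, m);  // lda = m
    cublasLtMatrixLayoutCreate(&bL, CUDA_R_32F, k, n, k);  // ldb = k
    cublasLtMatrixLayoutCreate(&cL, CUDA_R_32F, m, n, m);  // ldc = m

    // C serves as both the beta-scaled input and the D output here.
    cublasLtMatmul(lt, op, &alpha, A, aL, B, bL, &beta,
                   C, cL, C, cL, /*algo=*/nullptr,
                   /*workspace=*/nullptr, /*workspaceSize=*/0, stream);

    cublasLtMatrixLayoutDestroy(aL);
    cublasLtMatrixLayoutDestroy(bL);
    cublasLtMatrixLayoutDestroy(cL);
    cublasLtMatmulDescDestroy(op);
}

The epilogue attribute is what lets the library apply the activation inside the same kernel instead of requiring a separate elementwise pass over global memory.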
Please take a look at cuDNN’s Graph API (Graph API — NVIDIA cuDNN v9.1.0 documentation). It supports fusion prologue and epilogue fusions with convolutions and matmuls. It offers an abstraction layer on top of cublas and cutlass.
Generic Runtime Fusion Engines (Graph API — NVIDIA cuDNN v9.1.0 documentation)
|
Democratizing AI Accelerators and GPU Kernel Programming using Triton
by Sanjeev Rampal | Nov 7, 2024 | AI, Hybrid Cloud
Red Hat’s Emerging Technologies blog includes posts that discuss technologies that are under active development in upstream open source communities and at Red Hat. We believe in sharing early and often the things we’re working on, but we want to note that unless otherwise stated the technologies and how-tos shared here aren’t part of supported products, nor promised to be in the future.
Triton1 is a language and compiler for parallel programming. Specifically it is currently a Python-based DSL (Domain Specific Language) along with associated tooling, that enables the writing of efficient custom compute kernels used for implementing DNNs (Deep Neural Networks) and LLMs (Large Language Models), especially when executed on AI accelerators such as GPUs.
The key goals for Triton are:
Consequently, Triton aims to democratize AI infrastructure, accelerate data science developer productivity (i.e., the “developer inner loop”), and enable an open architecture for GPU and AI accelerator programming.
The Triton project is currently released as open source by Philippe Tillet and OpenAI under an MIT license (with growing contributions from Meta and others). Red Hat is a strong proponent of Open Source AI technologies and innovations that facilitate a healthy and diverse hardware ecosystem that lowers costs and expands adoption of AI infrastructure solutions. In this blog post, we describe some of the foundational architecture topics in the Triton space and its connection with frameworks such as PyTorch. In subsequent blog posts, we will go into further details including the use of Triton on Red Hat platforms.
Kernels, GPU programming models
Figure 1 below illustrates a simplified view of a basic AI server model with a single multi-core CPU and a single GPU. A GPU kernel is simply the program/ function that runs on the GPU, loaded and invoked on-demand from the program running on the host CPU. A common GPU architecture is that of a SIMD/ SPMD machine that itself contains a number of processors (often referred to as Streaming Multiprocessors or SMs), which then themselves contain multiple smaller compute cores as well as specialized arithmetic units such as Tensor cores.
When running a DNN application, the conventional design pattern is that the host CPU launches new GPU kernels onto the GPU, loads an (often very large) set of data into GPU memory, lets the GPU processors execute multiple parallel compute threads on the loaded kernel to perform a set of computations (usually vector or matrix operations such as matrix multiplications), and then harvests the results back into CPU memory before potentially launching a follow-up set of kernels and associated data. Later in this article we see how newer design patterns optimize this. A precise comparison of the vector processing differences between CPUs and GPUs is beyond the scope of this article, save to mention here that general purpose CPUs typically contain 10s or maybe 100s of general purpose compute cores with some amount of vector processing support, whereas GPUs contain 1 or 2 orders of magnitude more special purpose compute threads (e.g., “CUDA threads”) enabling massively parallel processing on large vectors in an SIMD manner, and also have special purpose tensor arithmetic units, e.g., “tensor cores”.
Figure 2 (Ref. article) shows a common abstracted model of a single GPU that can serve for the purpose of this article. This shows that there is a multi-level memory hierarchy even within a single GPU, with an L1 SRAM based cache available to compute threads within a single SM, an L2 SRAM based cache shared by multiple SMs and a global GPU memory (often implemented using a form of DRAM called HBM or High Bandwidth Memory). The SRAM memories support higher throughput and significantly lower access latencies than the global HBM which in turn is faster than CPU DRAM. The exact numbers vary with GPU type but as an example on the H100, HBM memory bandwidth is 3 TB/s, L2 SRAM bandwidth is 12 TB/s and L1 SRAM bandwidth is 33 TB/s.
Newer GPUs continue to add special capabilities that are not shown in the above simplified model, an example being the Tensor Memory Accelerator or TMA. A detailed analysis of the implications of these different memory hierarchies as well as the memory and compute bandwidths of these different components is beyond the scope of this article. The key point to note is that in order to achieve high efficiency and performance for DNNs, it is vital that all the compute cores, SMs and tensor cores of the GPU are kept busy, particularly with large transformer style models. See for example GPUs go brr. Due to advances in GPU hardware design technology, the raw compute capacity of GPU SMs have vastly outpaced the improvements in memory bandwidth implying that the arithmetic cores are often idling if data is not already in the GPU’s SRAM or if there isn’t enough computation designed in per HBM memory fetch.
Due to the mismatch between throughputs of the various compute and memory components, a naively written AI application might well utilize the GPU’s SMs and tensor cores at less than 10% utilization (or sometimes even less than 1% utilization) resulting in both poor overall performance due to increased latency and overall execution time as well as high cost of providing the service given that these expensive GPUs are being utilized at a tiny fraction of their potential compute capacity. See for example “AI and memory wall”. Hence the performance of transformer model based applications is often said to be “memory-bound”, although there can be some scenarios where the performance is “compute-bound” as well.
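As a back-of-the-envelope illustration of the memory-bound vs. compute-bound distinction, the roofline arithmetic can be written out directly; the HBM bandwidth below is the H100 figure quoted earlier, while the peak-FLOPS number is only an assumed placeholder for illustration:

#include <cstdio>

int main() {
    const double peak_flops = 989e12;  // assumed dense tensor-core peak (placeholder)
    const double hbm_bw     = 3e12;    // bytes/s, the H100 HBM figure quoted above
    const double ridge      = peak_flops / hbm_bw;  // FLOPs needed per byte moved

    // A square FP16 GEMM of size n does ~2*n^3 FLOPs over ~3*n*n*2 bytes of traffic.
    const double n = 4096;
    const double intensity = (2 * n * n * n) / (3 * n * n * 2);

    // If intensity < ridge, the kernel is memory-bound; otherwise compute-bound.
    printf("ridge: %.0f FLOP/byte, GEMM intensity at n=4096: %.0f FLOP/byte\n",
           ridge, intensity);
    return 0;
}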
This is where a well designed GPU kernel can make a huge difference and improve overall performance and efficiency. Industry and academic research in this area has emphasized the need to design kernels that are better tuned to address this compute vs memory performance imbalance. See for example “Data movement is all you need” and “GPUs go brr”.
Having learnt a bit about GPU architecture and the value of well written and tuned GPU kernels, we now come to what types of kernels one could have. CUDA kernels are popular examples of such kernels that are specific to GPUs and AI accelerator hardware from one vendor. Similarly vendors of other AI chips have their own kernel stacks. For instance AMD has open sourced its software platform called ROCm for building compute kernels for its AI accelerators and GPUs.
Triton
With this background perspective, we now come to the Triton kernels. Triton is a DSL for writing GPU kernels that aren’t tied to any one vendor. Additionally, it is architected to have multiple layers of compilers that automate the optimization and tuning needed to address the memory vs compute throughput tradeoff noted above without requiring the kernel developer to do so. This is the primary value of Triton. Using this combination of device-independent front-end compilation and device-dependent backend compilation, it is able to generate near-optimal code for multiple hardware targets ranging from existing accelerators to upcoming accelerator families from Intel, Qualcomm, Meta, Microsoft and more without requiring the kernel developer to know details of or design optimization strategies for each hardware architecture separately.
Effectively, this moves the detailed and often vendor-specific complexity out of the user’s concern and into the vendor’s. This is a more logical design split because vendors are incentivized to highlight their hardware’s differentiating features, while users immediately benefit without changes to their applications and do not have to build deep engineering proficiency in GPU programming for each GPU model they own. However, we are still in an early phase of this technology and should monitor how back-end compiler tuning performance evolves.
Figure 3 below illustrates some of the different ways that Triton could fit into a DNN software stack in combination with a PyTorch framework. Triton can also be used in combination with other frameworks such as TensorFlow.
As shown, a DNN application may choose to make direct calls to launch custom Triton kernels (either developer-authored or leveraged from open source Triton kernel library repos) to perform certain GPU-intensive functions. Alternatively, it may choose to leverage a framework such as PyTorch. Within the PyTorch community, additional frameworks have recently been introduced as part of PyTorch 2.x which can further automate the options for using Triton kernels. For instance, by using the Torch Dynamo and Torch Inductor compiler frameworks, Triton kernels can be automatically generated and optimized (including optimizations such as kernel fusion) without the app developer having to manually write, fuse or invoke an existing library of Triton kernels directly. However, not all kinds of kernel functions can be automatically generated in this manner with optimal performance, so a mix of generated and manually developed Triton kernels may be used in practice. Several industry and open source projects already exist to develop well designed Triton kernels for various DNN functions; examples include LinkedIn’s Liger, Unsloth, and sample kernels from the Triton team. We haven’t listed example Triton kernels in this article in order to focus on architecture and analysis. The official Triton repo has some tutorials with kernel examples and explanations that may be referred to.
In any case, once we have the set of well designed Triton kernels, the Triton language compiler performs multiple passes and its own set of optimizations over the kernel code to generate a Triton IR (Intermediate Representation) version of the code. Beyond this point, we enter the realm of device- and GPU-specific compilation, where backend compilers, typically provided by GPU vendors, translate the lowered versions of the kernels into GPU-specific machine code binaries for eventual execution on the hardware runtimes. Although we have skipped over many of the details, it should be clear to the reader that the overall process involves multiple layers and passes of compilation, code generation and optimization when going from a high level DNN application all the way to binary code executable on specific hardware devices and GPUs.
Definitions of some key concepts
The prior sections provided some intuition around design issues relevant to Triton, GPU kernels and frameworks such as PyTorch. For completeness, it is useful to have the short reference list below of common terminology, which the reader may find handy when researching the literature in this area, as well as ahead of our future blog posts on these topics.
Kernel Fusion – Optimization technique of combining multiple compute operations/ kernels into a single GPU kernel, optimizing memory transfers, minimizing latency and improving overall performance by executing related operations together in a single pass on the GPU.
Auto-tuning – A process of automatically optimizing parameters (such as block size, thread count) to find the best-performing configuration for a specific hardware setup and workload, improving execution efficiency.
Arithmetic Intensity – The ratio of computational operations (e.g., floating-point operations) to memory operations (data transfers), indicating how much work is done per memory access.
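To make the arithmetic-intensity idea concrete, here is a small illustrative calculation (our own example, not from the original article) for SAXPY, y = a*x + y over N single-precision elements:

/* Back-of-the-envelope arithmetic intensity for SAXPY (illustrative only). */
double saxpy_arithmetic_intensity(long long n)
{
    double flops = 2.0 * n;   /* one multiply and one add per element       */
    double bytes = 12.0 * n;  /* read x (4 B), read y (4 B), write y (4 B)  */
    return flops / bytes;     /* ~0.17 FLOP/byte: heavily memory-bound      */
}

At roughly 0.17 FLOP per byte, such a kernel sits far below the compute-to-bandwidth ratio of modern GPUs, which is why fusing operations to raise arithmetic intensity matters so much.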
Conclusion
Triton is an important initiative in the move towards democratizing the use and programming of AI accelerators such as GPUs for Deep Neural Networks. In this article we shared some foundational concepts around this project. In future articles, we will dive into additional Triton details and illustrate its use in enterprise AI platforms from Red Hat.
Acknowledgements: Thanks to Steve Royer (Red Hat), Jeremy Eder (Red Hat), Raffaele Spazzoli (Red Hat), Adnan Aziz (Meta), Adam Goucher (OpenAI), Madhukar Srivatsa (IBM) and Raghu Ganti (IBM) for their valued input and review.
|
GPU Optimization
Fundamentals
Cliff Woolley
Developer Technology Engineer
© NVIDIA 2013
Note: Fundamentals will apply broadly
Example performance numbers are presented for Tesla K20X,
which is based on the Kepler GK110 GPU
Same general optimization concepts apply to other GPUs, though
some parameters may be different, e.g.:
Number of SMs per GPU
Number of functional units per SM
Maximum number of concurrent warps per SM
Shared memory size per SM
Register file size per SM
Developer tools from NVIDIA help you analyze the concepts
without having to memorize parameters of each architecture
© NVIDIA 2013
GPU OPTIMIZATION FUNDAMENTALS
© NVIDIA 2013
Main Requirements for GPU Performance
Expose sufficient parallelism
Utilize parallel execution resources efficiently
Use memory system efficiently
Coalesce global memory accesses
Use shared memory where possible
Have coherent execution within warps of threads
© NVIDIA 2013
APOD: A Systematic Path to Performance
Assess
Parallelize
Optimize
Deploy
© NVIDIA 2013
Assess
Identify hotspots (total time, number of calls)
Understand scaling (strong and weak)
© NVIDIA 2013
Parallelize Applications
Three approaches: Libraries, Programming Languages, Compiler Directives
© NVIDIA 2013
Optimize
Profile-driven optimization
Tools:
Nsight Visual Studio Edition or Eclipse Edition
nvvp: NVIDIA Visual Profiler
nvprof: command-line profiling
© NVIDIA 2013
Deploy
Check API return values
Run cuda-memcheck tools
Library distribution
Cluster management
Early gains
Subsequent changes are evolutionary
Productize
© NVIDIA 2013
ASSESS
© NVIDIA 2013
Assess
Profile the code, find the hotspot(s)
Focus your attention where it will give the most benefit
© NVIDIA 2013
Assess
We’ve found a hotspot to work on!
What percent of our total time does this represent?
How much can we improve it? What is the “speed of light”?
How much will this improve our overall performance?
© NVIDIA 2013
Assess
Let’s investigate…
Strong scaling and Amdahl’s Law
Weak scaling and Gustafson’s Law
Expected perf limiters: Bandwidth? Computation? Latency?
© NVIDIA 2013
Assess: Understanding Scaling
Strong Scaling
A measure of how, for fixed overall problem size, the time to
solution decreases as more processors are added to a system
Linear strong scaling: speedup achieved is equal to number of
processors used
Amdahl’s Law:
S = 1 / ((1 − P) + P/N) ≈ 1 / (1 − P)   (for large N)
© NVIDIA 2013
Assess: Understanding Scaling
Weak Scaling
A measure of how time to solution changes as more processors
are added with fixed problem size per processor
Linear weak scaling: overall problem size increases as num. of
processors increases, but execution time remains constant
Gustafson’s Law:
S = N + (1 − P)(1 − N)
© NVIDIA 2013
Assess: Applying Strong and Weak Scaling
Understanding which type of scaling is most applicable is an
important part of estimating speedup:
Sometimes problem size will remain constant
Other times problem size will grow to fill the available processors
Apply either Amdahl's or Gustafson's Law to determine an upper
bound for the speedup
© NVIDIA 2013
Assess: Applying Strong Scaling
Recall that in this case we want to optimize an existing kernel with a pre-determined workload
That’s strong scaling, so Amdahl’s Law will determine
the maximum speedup
© NVIDIA 2013
Assess: Applying Strong Scaling
Say, for example, our kernel is ~93% of total time:
Speedup S = 1 / ((1 − P) + P/SP)   (SP = speedup of the parallel part)
In the limit when SP is huge, S will approach 1 / (1 − 0.93) ≈ 14.3
In practice, it will be less than that, depending on the SP achieved
Getting SP to be high is the goal of optimizing, of course
© NVIDIA 2013
Assess: Speed of Light
What’s the limiting factor?
Memory bandwidth?
Compute throughput?
Latency?
Not sure?
Get a rough estimate by counting bytes per instruction,
compare it to “balanced” peak ratio
(GBytes/sec) / (Ginsns/sec)
Profiler will help you determine this
© NVIDIA 2013
Assess: Limiting Factor
Comparing bytes per instr. will give you a guess as to whether
you’re likely to be bandwidth-bound or instruction-bound
Comparing actual achieved GB/s vs. theory and achieved
Ginstr/s vs. theory will give you an idea of how well you’re doing
If both are low, then you’re probably latency-bound and need to expose
more (concurrent) parallelism
© NVIDIA 2013
Assess: Limiting Factor
© NVIDIA 2013
Assess: Speed of Light
What’s the limiting factor?
Memory bandwidth? Compute throughput? Latency?
Consider SpMV: intuitively expect it to be bandwidth-limited
Say we discover we’re getting only ~38% of peak bandwidth
If we aim to get this up to ~65% of peak, that’s a 1.7x speedup for this kernel
1.7x for this kernel translates into 1.6x overall due to Amdahl:
S = 1 / ((1 − 0.93) + 0.93/1.7) ≈ 1.6
© NVIDIA 2013
Assess: Limiting Factor
For our example SpMV kernel, our first discovery was that we’re
latency-limited, not bandwidth, since utilization was so low
This tells us our first “optimization” step actually needs to be related to how we expose (memory-level) parallelism
© NVIDIA 2013
PARALLELIZE
© NVIDIA 2013
PARALLELIZE
Computation
© NVIDIA 2013
Parallelize Applications
Three approaches: Libraries, Programming Languages, Compiler Directives
Pick the best tool for the job
© NVIDIA 2013
Parallelize: e.g., with GPU Accelerated Libraries
NVIDIA cuFFT, cuSPARSE, cuBLAS, cuRAND, NPP
(Figure: logo grid of GPU-accelerated libraries covering vector signal and image processing, matrix algebra on GPU and multicore, C++ templated parallel algorithms, the IMSL Library, GPU-accelerated linear algebra, building-block algorithms, and CenterSpace NMath.)
© NVIDIA 2013
Parallelize: e.g., with Thrust
// generate 32M random numbers on host
thrust::host_vector<int> h_vec(32 << 20);
thrust::generate(h_vec.begin(), h_vec.end(), rand);
// transfer data to device (GPU)
thrust::device_vector<int> d_vec = h_vec;
// sort data on device
thrust::sort(d_vec.begin(), d_vec.end());
// transfer data back to host
thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());
Similar to C++ STL
High-level interface
Enhances developer productivity
Enables performance portability
between GPUs and multicore CPUs
Flexible
Backends for CUDA, OpenMP, TBB
Extensible and customizable
Integrates with existing software
Open source
thrust.github.com or developer.nvidia.com/thrust
© NVIDIA 2013
Parallelize: e.g., with OpenACC
Program myscience
... serial code ...
!$acc kernels
do k = 1,n1
do i = 1,n2
... parallel code ...
enddo
enddo
!$acc end kernels
...
End Program myscience
Your original Fortran or C code runs on the CPU; OpenACC compiler directives mark regions for the GPU
Directives-based approach
Compiler parallelizes code
Works on many-core GPUs & multicore CPUs
www.nvidia.com/gpudirectives
© NVIDIA 2013
Parallelize: e.g., with CUDA C
Standard C Code:
void saxpy_serial(int n, float a, float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a*x[i] + y[i];
}
// Perform SAXPY on 1M elements
saxpy_serial(4096*256, 2.0, x, y);
CUDA C Code:
__global__ void saxpy_parallel(int n, float a, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a*x[i] + y[i];
}
// Perform SAXPY on 1M elements
saxpy_parallel<<<4096,256>>>(n, 2.0, x, y);
developer.nvidia.com/cuda-toolkit
© NVIDIA 2013
Parallelism Needed
GPU is a parallel machine
Lots of arithmetic pipelines
Multiple memory banks
To get good performance, your code must expose sufficient
parallelism for 2 reasons:
To actually give work to all the pipelines
To hide latency of the pipelines
Rough rule of thumb for Tesla K20X:
You want to have 14K or more threads running concurrently
© NVIDIA 2013
Case Study: Matrix Transpose
void transpose(float in[], float out[], int N)
{
    for(int j = 0; j < N; j++)
        for(int i = 0; i < N; i++)
            out[j*N + i] = in[i*N + j];
}
© NVIDIA 2013
An Initial CUDA Version
__global__ void transpose(float in[], float out[], int N)
{
    for(int j = 0; j < N; j++)
        for(int i = 0; i < N; i++)
            out[i*N + j] = in[j*N + i];
}
float in[N*N], out[N*N];
…
transpose<<<1,1>>>(in, out, N);
+ Quickly implemented    - Performance weak
Need to expose parallelism!
© NVIDIA 2013
Parallelize across matrix elements
Process elements independently
__global__ void transpose(float in[], float out[])
{
    int tid = threadIdx.x;
    int bid = blockIdx.x;
    out[tid*N + bid] = in[bid*N + tid];
}
float in[], out[];
…
transpose<<<N,N>>>(in, out);
© NVIDIA 2013
PARALLELIZE
Data Transfer
© NVIDIA 2013
Heterogeneous system: overlap work and data movement
Asynchronicity = Overlap = Parallelism
© NVIDIA 2013
This is the kind of case we would be concerned about
Found the top kernel, but the GPU is mostly idle – that is our bottleneck
Need to overlap CPU/GPU computation and PCIe transfers
Asynchronicity
© NVIDIA 2013
What we want to see is maximum overlap of all engines
Parallelize: Achieve Asynchronicity
© NVIDIA 2013
OPTIMIZE
© NVIDIA 2013
Main Requirements for GPU Performance
Expose sufficient parallelism
Utilize parallel execution resources efficiently
Use memory system efficiently
Coalesce global memory accesses
Use shared memory where possible
Have coherent execution within warps of threads
© NVIDIA 2013
GPU Optimization Fundamentals
Find ways to parallelize sequential code
Adjust kernel launch configuration to maximize device utilization
Ensure global memory accesses are coalesced
Minimize redundant accesses to global memory
Avoid different execution paths within the same warp
Minimize data transfers between the host and the device
http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/
© NVIDIA 2013
GPU Optimization Fundamentals
Find ways to parallelize sequential code
Kernel optimizations
Launch configuration
Global memory throughput
Shared memory access
Instruction throughput / control flow
Optimization of CPU-GPU interaction
Maximizing PCIe throughput
Overlapping kernel execution with memory copies
© NVIDIA 2013
OPTIMIZE
Kernel Optimizations: Kernel Launch Configuration
© NVIDIA 2013
Kernel Launch Configuration
A kernel is a function that runs on the GPU
A kernel is launched as a grid of blocks of threads
Launch configuration is the number of blocks and number of
threads per block, expressed in CUDA with the <<< >>> notation:
mykernel<<<blocks_per_grid,threads_per_block>>>(…);
What values should we pick for these?
Need enough total threads to process entire input
Need enough threads to keep the GPU busy
Selection of block size is an optimization step involving warp occupancy
© NVIDIA 2013
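As a hedged sketch of this launch-configuration recipe (the kernel name and the 256-thread block size are illustrative choices, not from the slides):

// Pick a block size that is a multiple of 32, then derive the grid size
// so that the whole N-element input is covered.
__global__ void mykernel(int n, float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;          // placeholder work
}

void launch(int n, float *d_data)
{
    int threads_per_block = 256;   // multiple of warp size
    int blocks_per_grid = (n + threads_per_block - 1) / threads_per_block;
    mykernel<<<blocks_per_grid, threads_per_block>>>(n, d_data);
}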
High-level view of GPU Architecture
Several Streaming Multiprocessors
E.g., Kepler GK110 has up to 15 SMs
L2 Cache shared among SMs
Multiple channels to DRAM
Kepler GK110
© NVIDIA 2013
Kepler Streaming Multiprocessor (SMX)
Per SMX:
192 SP CUDA Cores
64 DP CUDA Cores
4 warp schedulers
Up to 2048 concurrent threads
One or two instructions issued
per scheduler per clock from a
single warp
Register file (256KB)
Shared memory (48KB)
© NVIDIA 2013
CUDA Execution Model
Thread: Sequential execution unit
All threads execute same sequential program
Threads execute in parallel
Threads Block: a group of threads
Executes on a single Streaming Multiprocessor (SM)
Threads within a block can cooperate
Light-weight synchronization
Data exchange
Grid: a collection of thread blocks
Thread blocks of a grid execute across multiple SMs
Thread blocks do not synchronize with each other
Communication between blocks is expensive
© NVIDIA 2013
Execution Model: Software vs. Hardware
Thread → CUDA Core: threads are executed by scalar CUDA cores
Thread Block → Multiprocessor: thread blocks are executed on multiprocessors; thread blocks do not migrate; several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources (shared memory and register file)
Grid → Device: a kernel is launched as a grid of thread blocks
© NVIDIA 2013
Launch Configuration: General Guidelines
How many blocks should we use?
1,000 or more thread blocks is best
Rule of thumb: enough blocks to fill the GPU at least 10s of times over
Makes your code ready for several generations of future GPUs
© NVIDIA 2013
Launch Configuration: General Guidelines
How many threads per block should we choose?
The really short answer: 128, 256, or 512 are often good choices
The slightly longer answer:
Pick a size that suits the problem well
Multiples of 32 threads are best
Pick a number of threads per block (and a number of blocks) that is
sufficient to keep the SM busy
© NVIDIA 2013
Warps
A thread block consists of warps of 32 threads
A warp is executed physically in parallel on some multiprocessor
Threads of a warp issue instructions in lock-step (as with SIMD)
© NVIDIA 2013
Hardware Levels of Parallelism
SIMD: Single Instruction, Multiple Data (in-core parallelism)
SMT: Simultaneous Multithreading (cross-core, cross-socket parallelism within a single computer; OpenMP, pthreads)
SIMT: Single Instruction, Multiple Threads (in-processor parallelism; many threads on many cores)
MPI: multiple “computers”, tightly coupled (supercomputing apps)
These form a continuum. Best performance is achieved with a mix.
© NVIDIA 2013
Low Latency or High Throughput?
CPU
Optimized for low-latency
access to cached data sets
Control logic for out-of-order
and speculative execution
GPU
Optimized for data-parallel,
throughput computation
Architecture tolerant of
memory latency
More transistors dedicated to
computation
© NVIDIA 2013
Occupancy
Need enough concurrent warps
per SM to hide latencies:
Instruction latencies
Memory access latencies
Hardware resources determine
number of warps that fit per SM
Occupancy = Nactual / Nmax
© NVIDIA 2013
Low Latency or High Throughput?
CPU architecture must minimize latency within each thread
GPU architecture hides latency with computation from other (warps of) threads
GPU Streaming Multiprocessor – High-throughput Processor
CPU core – Low-latency Processor
(Figure: on a GPU SM, warps W1–W4 context-switch while waiting for data so the processor keeps computing; on a CPU core, threads T1–T4 each run with minimal latency.)
© NVIDIA 2013
Latency Hiding
Instruction latencies:
Roughly 10-20 cycles for arithmetic operations
DRAM accesses have higher latencies (400-800 cycles)
Instruction Level Parallelism (ILP)
Independent instructions between two dependent ones
ILP depends on the code, done by the compiler
Switching to a different warp
If a warp must stall for N cycles due to dependencies, having N other
warps with eligible instructions keeps the SM going
Switching among concurrently resident warps has no overhead
State (registers, shared memory) is partitioned, not stored/restored
FFMA R0, R43, R0, R4;
FFMA R1, R43, R4, R5;
FMUL R7, R9, R0;
FMUL R8, R9, R1;
ST.E [R2], R7;
ILP=2
© NVIDIA 2013
Occupancy
Occupancy: number of concurrent warps per SM, expressed as:
Absolute number of warps of threads that fit concurrently (e.g., 1..64), or
Ratio of warps that fit concurrently to architectural maximum (0..100%)
Number of warps that fit determined by resource availability:
Threads per thread block
Registers per thread
Shared memory per thread block
Kepler SM resources:
– 64K 32-bit registers
– Up to 48 KB of shared memory
– Up to 2048 concurrent threads
– Up to 16 concurrent thread blocks
© NVIDIA 2013
Occupancy and Performance
Note that 100% occupancy isn’t needed to reach maximum
performance
Once the “needed” occupancy (enough warps to switch among to cover
latencies) is reached, further increases won’t improve performance
Level of occupancy needed depends on the code
More independent work per thread -> less occupancy is needed
Memory-bound codes tend to need more occupancy
Higher latency than for arithmetic, need more work to hide it
© NVIDIA 2013
Thread Block Size and Occupancy
Thread block size is a multiple of warp size (32)
Even if you request fewer threads, hardware rounds up
Thread blocks can be too small
Kepler SM can run up to 16 thread blocks concurrently
SM can reach the block count limit before reaching good occupancy
E.g.: 1-warp blocks = 16 warps/SM on Kepler (25% occ – probably not enough)
Thread blocks can be too big
Enough SM resources for more threads, but not enough for a whole block
A thread block isn’t started until resources are available for all of its threads
© NVIDIA 2013
Thread Block Sizing
SM resources:
Registers
Shared memory
Number of warps allowed by SM resources
Too few
threads per block
Too many
threads per block
© NVIDIA 2013
CUDA Occupancy Calculator
Analyze effect of
resource consumption
on occupancy
© NVIDIA 2013
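In addition to the spreadsheet-style Occupancy Calculator, CUDA toolkits newer than the one these 2013 slides target expose an occupancy API; a minimal sketch (assuming CUDA 6.5 or later) looks like this:

#include <cuda_runtime.h>
#include <cstdio>

__global__ void mykernel(float *data) { if (data) data[threadIdx.x] += 1.0f; }  // placeholder

void report_occupancy()
{
    int max_blocks_per_sm = 0;
    int threads_per_block = 256;
    // Query how many blocks of this kernel fit on one SM at this block size.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&max_blocks_per_sm, mykernel,
                                                  threads_per_block, /*dynamicSmem=*/0);
    printf("Resident blocks per SM at %d threads/block: %d\n",
           threads_per_block, max_blocks_per_sm);
}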
Occupancy Analysis in NVIDIA Visual Profiler
Occupancy here is limited
by grid size and number of
threads per block
© NVIDIA 2013
OPTIMIZE
Kernel Optimizations: Global Memory Throughput
© NVIDIA 2013
CUDA Memory Architecture
(Figure: Host side with CPU, chipset and DRAM; device side with DRAM holding global, constant, texture and local memory, a GPU with multiple multiprocessors each containing registers and shared memory, constant and texture caches, and the L1/L2 cache.)
© NVIDIA 2013
Optimizing Memory Throughput
Goal: utilize all available memory
bandwidth
Little’s Law:
# bytes in flight = latency * bandwidth
Increase parallelism (bytes in flight)
(or)
Reduce latency (time between requests)
Access latency L
© NVIDIA 2013
Illustration: Little’s Law for Escalators
Say the parameters of our escalator are:
1 person fits on each step
Step arrives every 2 secs (bandwidth=0.5 persons/s)
20 steps tall (latency=40 seconds)
1 person in flight: 0.025 persons/s achieved
To saturate bandwidth:
Need 1 person arriving every 2 s
Means we’ll need 20 persons in flight
The idea: Bandwidth × Latency
It takes latency time units for the first person to arrive
We need bandwidth persons to get on the escalator every time unit
© NVIDIA 2013
Memory-Level Parallelism = Bandwidth
In order to saturate memory bandwidth, SM must have
enough independent memory requests in flight concurrently
© NVIDIA 2013
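To make Little’s Law concrete, here is a small illustrative calculation; the bandwidth, latency and SM-count figures are assumptions for a Kepler-class GPU, not official numbers from the slides:

/* Estimate how many bytes must be in flight per SM to saturate DRAM bandwidth. */
double bytes_in_flight_per_sm(void)
{
    double bandwidth_bytes_per_s = 250e9;   /* assumed device bandwidth      */
    double latency_s             = 500e-9;  /* assumed DRAM access latency   */
    int    num_sms               = 14;      /* assumed number of SMs         */
    double total_bytes = bandwidth_bytes_per_s * latency_s;   /* ~125 KB in flight */
    return total_bytes / num_sms;                             /* ~9 KB per SM      */
}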
Memory-Level Parallelism: Requests in flight
Achieved Kepler memory throughput
Shown as a function of number of concurrent requests
per SM with 128-byte lines
© NVIDIA 2013
Requests per Thread and Performance
Experiment: vary size of accesses by threads of a warp, check performance
Memcopy kernel: each warp has 2 concurrent requests (one write and the read following it)
Accesses by a warp:
4B words: 1 line
8B words: 2 lines
16B words: 4 lines
To achieve the same throughput at lower occupancy or with smaller words, need more independent requests per warp
© NVIDIA 2013
Optimizing Access Concurrency
Ways to increase concurrent accesses:
Increase occupancy (run more warps concurrently)
Adjust block dimensions to maximize occupancy
If occupancy is limited by registers per thread, try to reduce register count
(-maxrregcount option or __launch_bounds__)
Modify code to process several elements per thread
Doubling elements per thread doubles independent accesses per thread
© NVIDIA 2013
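A hedged sketch of the last point above, processing several elements per thread so that each thread has more independent memory requests in flight (the factor of four and the loop structure are illustrative choices, not code from the slides):

// Each iteration issues four independent, coalesced loads per thread.
__global__ void copy4(int n, const float *in, float *out)
{
    int stride = blockDim.x * gridDim.x;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += 4 * stride)
    {
        // Independent loads: the hardware can keep all four in flight at once.
        float a = in[i];
        float b = (i + stride     < n) ? in[i + stride]     : 0.0f;
        float c = (i + 2 * stride < n) ? in[i + 2 * stride] : 0.0f;
        float d = (i + 3 * stride < n) ? in[i + 3 * stride] : 0.0f;
        out[i] = a;
        if (i + stride     < n) out[i + stride]     = b;
        if (i + 2 * stride < n) out[i + 2 * stride] = c;
        if (i + 3 * stride < n) out[i + 3 * stride] = d;
    }
}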
OPTIMIZE
Kernel Optimizations: Global Memory Access Coalescing
© NVIDIA 2013
Mechanics of a Memory Access
Memory operations are issued per warp
Just like all other instructions
Operation:
Threads in a warp provide memory addresses
Hardware determines which lines/segments are needed, fetches them
© NVIDIA 2013
Memory Access Efficiency Analysis
Two perspectives on the throughput:
Application’s point of view: count only bytes requested by application
HW point of view: count all bytes moved by hardware
The two views can be different:
Memory is accessed at 32 byte granularity
With a scattered or offset pattern, the application doesn’t use all the bytes the
hardware actually transferred
Broadcast: the same small transaction serves many threads in a warp
© NVIDIA 2013
Access Patterns vs. Memory Throughput
Scenario:
Warp requests 32 aligned, consecutive 4-byte words
Addresses fall within 4 segments
Warp needs 128 bytes
128 bytes move across the bus
Bus utilization: 100%
© NVIDIA 2013
Access Patterns vs. Memory Throughput
Scenario:
Warp requests 32 aligned, permuted 4-byte words
Addresses fall within 4 segments
Warp needs 128 bytes
128 bytes move across the bus
Bus utilization: 100%
© NVIDIA 2013
Access Patterns vs. Memory Throughput
Scenario:
Warp requests 32 misaligned, consecutive 4-byte words
Addresses fall within at most 5 segments
Warp needs 128 bytes
At most 160 bytes move across the bus
Bus utilization: at least 80%
Some misaligned patterns will fall within 4 segments, so 100% utilization
© NVIDIA 2013
Access Patterns vs. Memory Throughput
Scenario:
All threads in a warp request the same 4-byte word
Addresses fall within a single segment
Warp needs 4 bytes
32 bytes move across the bus
Bus utilization: 12.5%
© NVIDIA 2013
Access Patterns vs. Memory Throughput
Scenario:
Warp requests 32 scattered 4-byte words
Addresses fall within N segments
Warp needs 128 bytes
N*32 bytes move across the bus
Bus utilization: 128 / (N*32)
© NVIDIA 2013
Parallelizing SAXPY
void saxpy(int n, float a, float *x, float *y)
{
    for(int i = 0; i < n; i++)
    {
        y[i] += a * x[i];
    }
}
Divide the work equally
among T threads
Each thread is responsible for
computing one contiguous
‘region’ of the arrays
This is good for pthreads
© NVIDIA 2013
Parallelizing SAXPY
__global__ void saxpy1(int n, float a, float *x, float *y)
{
    int workPerThread = 1 + n/blockDim.x;
    int base = threadIdx.x * workPerThread;
    for(int i = 0; i < workPerThread; i++)
    {
        if(base + i < n)
        {
            y[base + i] += a * x[base + i];
        }
    }
}
Divide the work equally among T threads
Each thread is responsible for computing one contiguous ‘region’ of the arrays
This is good for pthreads
© NVIDIA 2013
Parallelizing SAXPY
(Same saxpy1 kernel as above.)
In SIMT, 32 threads of a warp issue the x[base+i] instruction simultaneously.
Each thread has a different value of base
If workPerThread > 1, this becomes a strided load
© NVIDIA 2013
A Better Way to Parallelize SAXPY
Divide the work so that on each pass through the loop, the thread block computes one contiguous region of the array.
Achieves memory coalescing.
__global__ void saxpy2(int n, float a, float *x, float *y)
{
    int loopCount = 0;
    int id = loopCount*blockDim.x + threadIdx.x;
    while (id < n)
    {
        y[id] += a * x[id];
        loopCount++;
        id = loopCount*blockDim.x + threadIdx.x;
    }
}
© NVIDIA 2013
A Better Way to Parallelize SAXPY
(Same saxpy2 kernel as above.)
The area of x addressed by each warp is contiguous in global memory.
The number of global memory transactions is minimized.
This effect applies to the loads and stores of y as well.
© NVIDIA 2013
Structures of Non-Native Size
Say we are reading a 12-byte structure per thread
struct Position
{
float x, y, z;
};
...
__global__ void kernel( Position *data, ... )
{
int idx = blockIdx.x * blockDim.x + threadIdx.x;
Position temp = data[idx];
...
}
© NVIDIA 2013
Structure of Non-Native Size
Compiler converts temp = data[idx] into 3 loads:
Each loads 4 bytes
Can’t do an 8 and a 4 byte load: 12 bytes per element means that every
other element wouldn’t align the 8-byte load on 8-byte boundary
Addresses per warp for each of the loads:
Successive threads read 4 bytes at 12-byte stride
© NVIDIA 2013
First Load Instruction
(Figure: addresses from a warp; successive threads read 4 bytes at a 12-byte stride starting at offset 0.)
© NVIDIA 2013
Second Load Instruction
(Figure: the same 12-byte-stride pattern, offset by 4 bytes.)
© NVIDIA 2013
Third Load Instruction
(Figure: the same 12-byte-stride pattern, offset by 8 bytes.)
© NVIDIA 2013
Performance and Solutions
Because of the address pattern, we end up moving 3x more bytes
than application requests
We waste a lot of bandwidth, leaving performance on the table
Potential solutions:
Change data layout from array of structures to structure of arrays
In this case: 3 separate arrays of floats
The most reliable approach (also ideal for both CPUs and GPUs)
Use loads via read-only cache
As long as lines survive in the cache, performance will be nearly optimal
Stage loads via shared memory
© NVIDIA 2013
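A hedged sketch of the first solution above, converting the 12-byte Position structure into a structure of arrays so that each of the three loads is fully coalesced (the type and kernel names are illustrative):

// Structure-of-arrays layout: each load reads 4 bytes per thread at a
// 4-byte stride across the warp, so every transaction is fully used.
struct PositionSoA
{
    float *x;   // n contiguous x components
    float *y;   // n contiguous y components
    float *z;   // n contiguous z components
};

__global__ void kernel_soa(PositionSoA data, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
    {
        float px = data.x[idx];
        float py = data.y[idx];
        float pz = data.z[idx];
        data.x[idx] = px * px + py * py + pz * pz;   // placeholder use of the components
    }
}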
Global Memory Access Patterns
SoA vs AoS:
Good: point.x[i]
Not so good: point[i].x
Strided array access:
~OK: x[i] = a[i+1] – a[i]
Slower: x[i] = a[64*i] – a[i]
Random array access:
Slower: a[rand(i)]
© NVIDIA 2013
Summary: GMEM Optimization
Strive for perfect address coalescing per warp
Align starting address (may require padding)
A warp will ideally access within a contiguous region
Avoid scattered address patterns or patterns with large strides between
threads
Analyze and optimize address patterns:
Use profiling tools (included with CUDA toolkit download)
Compare the transactions per request to the ideal ratio
Choose appropriate data layout (prefer SoA)
If needed, try read-only loads, staging accesses via SMEM
© NVIDIA 2013
A note about caches
L1 and L2 caches
Ignore in software design
Thousands of concurrent
threads – cache blocking
difficult at best
Read-only Data Cache
Shared with texture pipeline
Useful for uncoalesced reads
Handled by compiler when
const __restrict__ is used, or
use __ldg() primitive
© NVIDIA 2013
Blocking for GPU Memory Caches
Short answer: DON’T
GPU caches are not intended for the same use as CPU caches
Smaller size (especially per thread), so not aimed at temporal reuse
Intended to smooth out some access patterns, help with spilled registers,
etc.
Usually not worth trying to cache-block like you would on CPU
100s to 1,000s of run-time scheduled threads competing for the cache
If it is possible to block for L1 then it’s possible to block for SMEM
Same size
Same or higher bandwidth
Guaranteed locality: hw will not evict behind your back
© NVIDIA 2013
Read-only Data Cache
Go through the read-only cache
Not coherent with writes
Thus, addresses must not be written by the same kernel
Two ways to enable:
Decorating pointer arguments as hints to compiler:
Pointer of interest: const __restrict__
All other pointer arguments: __restrict__
– Conveys to compiler that no aliasing will occur
Using __ldg() intrinsic
Requires no pointer decoration
© NVIDIA 2013
Read-only Data Cache
Go through the read-only cache
Not coherent with writes
Thus, addresses must not be written by the same kernel
Two ways to enable:
Decorating pointer arguments as hints to compiler:
Pointer of interest: const __restrict__
All other pointer arguments: __restrict__
– Conveys to compiler that no aliasing will occur
Using __ldg() intrinsic
Requires no pointer decoration
__global__ void kernel(
int* __restrict__ output,
const int* __restrict__ input )
{
...
output[idx] = input[idx];
}
© NVIDIA 2013
Read-only Data Cache
Go through the read-only cache
Not coherent with writes
Thus, addresses must not be written by the same kernel
Two ways to enable:
Decorating pointer arguments as hints to compiler:
Pointer of interest: const __restrict__
All other pointer arguments: __restrict__
– Conveys to compiler that no aliasing will occur
Using __ldg() intrinsic
Requires no pointer decoration
__global__ void kernel( int *output,
int *input )
{
...
output[idx] = __ldg( &input[idx] );
}
© NVIDIA 2013
Texture and Constant Memory
Read-only
Data resides in global memory
Read via special-purpose caches
© NVIDIA 2013
Texture
Separate cache
Dedicated texture cache hardware provides:
Out-of-bounds index handling
clamp or wrap-around
Optional interpolation
Think: using fp indices for arrays
Linear, bilinear, trilinear
– Interpolation weights are 9-bit
Optional format conversion
{char, short, int} -> float
All of these are “free”
© NVIDIA 2013
Examples of Texture Object Indexing
(Figure: a small 2D texture indexed with coordinates such as (5.5, 1.5), (2.5, 0.5) and (1.0, 1.0), illustrating index clamp and index wrap behavior for out-of-range indices.)
Integer indices fall between elements
Optional interpolation: weights are determined by coordinate distance
© NVIDIA 2013
OPTIMIZE
Kernel Optimizations: Shared Memory Accesses
© NVIDIA 2013
Shared Memory
Fast, on-chip memory
Accessible by all threads within a thread block
Common allocation for entire thread block
Variety of uses:
Software managed cache (e.g., tiled DGEMM)
Global memory coalescing (e.g., transpose)
Communication within a thread block (e.g., FFT, reductions)
Limited Resource
Use of shared memory affects occupancy
© NVIDIA 2013
Shared Memory Organization
Organized in 32 independent banks
Optimal access: no two words from
same bank
Separate banks per thread
Banks can multicast
Multiple words from same bank serialize
(Figure: any 1:1 or multicast thread-to-bank pattern is conflict-free.)
© NVIDIA 2013
Bank Addressing Examples
(Figure: two conflict-free patterns across threads 0–31 and banks 0–31: a linear 1:1 thread-to-bank mapping and a permuted 1:1 mapping.)
© NVIDIA 2013
Bank Addressing Examples
(Figure: a 2-way bank conflict, where pairs of threads map to the same bank, and an 8-way bank conflict, where groups of 8 threads map to the same bank.)
© NVIDIA 2013
Motivating Example: Matrix Transpose
__global__ void gpuTranspose_kernel(int rows, int cols, float *in, float *out)
{
    int i, j;
    i = blockIdx.x * blockDim.x + threadIdx.x;
    j = blockIdx.y * blockDim.y + threadIdx.y;
    out[i * rows + j] = in[j * cols + i];
}
Either the write or the read is strided in gmem and uncoalesced
Solution: tile in shared memory
© NVIDIA 2013
Transposing with Shared Memory
1. Read block_ij into shared memory
   • Reads are coalesced
2. Transpose shared memory indices
3. Write transposed block to global memory
   • Writes are coalesced
© NVIDIA 2013
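A hedged sketch of those three steps as a CUDA kernel; the 32x32 tile size and the +1 padding column (which avoids the bank conflicts discussed a few slides later) are standard choices rather than code taken from the slides:

#define TILE 32

// Read a tile with coalesced loads, transpose it in shared memory,
// then write it out with coalesced stores.
__global__ void transpose_smem(const float *in, float *out, int rows, int cols)
{
    __shared__ float tile[TILE][TILE + 1];            // +1 column avoids bank conflicts

    int x = blockIdx.x * TILE + threadIdx.x;          // column in 'in'
    int y = blockIdx.y * TILE + threadIdx.y;          // row in 'in'
    if (x < cols && y < rows)
        tile[threadIdx.y][threadIdx.x] = in[y * cols + x];      // coalesced read

    __syncthreads();

    x = blockIdx.y * TILE + threadIdx.x;              // column in 'out'
    y = blockIdx.x * TILE + threadIdx.y;              // row in 'out'
    if (x < rows && y < cols)
        out[y * rows + x] = tile[threadIdx.x][threadIdx.y];     // coalesced write
}

A launch such as transpose_smem<<<dim3((cols+31)/32, (rows+31)/32), dim3(32, 32)>>>(d_in, d_out, rows, cols) would cover the whole matrix.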
Shared Memory Organization
Organized in 32 independent banks
Note: same as warp size. Not a coincidence.
Every 32byte word is in the next bank,
modulo 32.
Optimal access: no two words from
same bank
Separate banks per thread
Banks can multicast
Multiple words from same bank serialize
Called bank conflict, causes instruction replay
(Figure: conflict-free 1:1 and multicast thread-to-bank patterns, versus serialized accesses when multiple words come from the same bank.)
© NVIDIA 2013
Shared Memory: Avoiding Bank Conflicts
Example: 32x32 SMEM array
Warp accesses a column:
32-way bank conflicts (threads in a warp access the same bank)
© NVIDIA 2013
Shared Memory: Avoiding Bank Conflicts
Example: 32x32 SMEM array
Warp accesses a column:
32-way bank conflicts (threads in a warp access the same bank)
Accesses along row
produces 0 bank
conflicts
Accesses along
column produces 32
bank conflicts
(replays)
© NVIDIA 2013
Shared Memory: Avoiding Bank Conflicts
Add a column for padding:
32x33 SMEM array
Warp accesses a column:
32 different banks, no bank conflicts
Accesses along row
produces no bank
conflicts
Accesses along
column produces no
bank conflicts
© NVIDIA 2013
Shared Memory/L1 Sizing
Shared memory and L1 use the same 64KB physical memory
Program-configurable split:
Fermi: 48:16, 16:48
Kepler: 48:16, 16:48, 32:32
CUDA API: cudaDeviceSetCacheConfig(), cudaFuncSetCacheConfig()
Large L1 can improve performance when:
Spilling registers (more lines in the cache -> fewer evictions)
Large SMEM can improve performance when:
Occupancy is limited by SMEM
© NVIDIA 2013
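A hedged sketch of the configuration calls mentioned above (the kernel is a placeholder):

#include <cuda_runtime.h>

__global__ void smem_heavy_kernel(float *data) { if (data) data[threadIdx.x] = 0.0f; }  // placeholder

void configure_caches()
{
    // Prefer 48KB shared / 16KB L1 for this SMEM-limited kernel...
    cudaFuncSetCacheConfig(smem_heavy_kernel, cudaFuncCachePreferShared);
    // ...and prefer 16KB shared / 48KB L1 for the device's other kernels.
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
}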
Final Notes on Shared Memory
Fast: high bandwidth, low latency
Useful as user managed cache for coalescing, caching, and
communication within a thread block
Shared memory size / L1 cache size is API-configurable
16k L1 / 48k Shared (default on both Fermi and Kepler)
48k L1 / 16k Shared
32k L1 / 32k Shared (Kepler only).
Be careful of:
Overuse: Excessive allocation can hurt occupancy
Access pattern: Lots of bank conflicts can hurt performance
© NVIDIA 2013
OPTIMIZE
Kernel Optimizations: Instruction Throughput / Control Flow
© NVIDIA 2013
Exposing Sufficient Parallelism
What SMX ultimately needs:
Sufficient number of independent instructions
Kepler GK110 is “wider” than Fermi or GK104; needs more parallelism
Two ways to increase parallelism:
More independent instructions (ILP) within a thread (warp)
More concurrent threads (warps)
© NVIDIA 2013
Independent Instructions: ILP vs. TLP
SMX can leverage available Instruction-Level Parallelism more or
less interchangeably with Thread-Level Parallelism
Sometimes easier to increase ILP than to increase TLP
E.g., # of threads may be limited by algorithm or by HW resource limits
But if each thread has some degree of independent operations to do,
Kepler SMX can leverage that. (E.g., a small loop that is unrolled.)
In fact, some degree of ILP is actually required to approach
theoretical max Instructions Per Clock (IPC)
© NVIDIA 2013
Control Flow
Instructions are issued per 32 threads (warp)
Divergent branches:
Threads within a single warp take different paths
if-else, ...
Different execution paths within a warp are serialized
Different warps can execute different code with no impact on
performance
© NVIDIA 2013
Control Flow
Avoid diverging within a warp
Note: some divergence is not necessarily a problem, but large
amounts impacts execution efficiency
Example with divergence:
if (threadIdx.x > 2) {...} else {...}
Branch granularity < warp size
Example without divergence:
if (threadIdx.x / warpSize > 2) {...} else {...}
Branch granularity is a whole multiple of warp size
© NVIDIA 2013
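A hedged kernel-level sketch of the two branch patterns above (the work in each path is a placeholder, not from the slides):

__device__ float pathA(float v) { return v * 2.0f; }   // placeholder work
__device__ float pathB(float v) { return v + 1.0f; }   // placeholder work

__global__ void branchy(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Divergent: inside the first warp, threads 0-2 and 3-31 take different paths.
    data[i] = (threadIdx.x > 2) ? pathA(data[i]) : pathB(data[i]);

    // Coherent: every thread of a given warp evaluates the condition identically.
    data[i] = (threadIdx.x / warpSize > 2) ? pathA(data[i]) : pathB(data[i]);
}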
Control Flow
if ( ... )
{
// then-clause
}
else
{
// else-clause
}
instructions
© NVIDIA 2013
Execution within warps is coherent
(Figure: instructions over time; each warp, a “vector” of threads 0–31 and 32–63, issues a single common instruction stream.)
© NVIDIA 2013
Execution diverges within a warp
(Figure: threads within a warp take different paths, and the divergent paths are serialized over time.)
© NVIDIA 2013
Execution diverges within a warp
(Figure: the same divergence shown across both warps.)
Solution: Group threads with similar control flow
© NVIDIA 2013
Runtime Math Library and Intrinsics
Two types of runtime math library functions
__func(): many map directly to hardware ISA
Fast but lower accuracy (see CUDA Programming Guide for full details)
Examples: __sinf(x), __expf(x), __powf(x, y)
func(): compile to multiple instructions
Slower but higher accuracy (5 ulp or less)
Examples: sin(x), exp(x), pow(x, y)
A number of additional intrinsics:
__sincosf(), __frcp_rz(), ...
Explicit IEEE rounding modes (rz,rn,ru,rd)
© NVIDIA 2013
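A hedged sketch contrasting the two flavors (the kernel itself is illustrative):

// Accurate library call vs. fast hardware intrinsic.
__global__ void apply_sin(float *data, int n, bool fast)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = fast ? __sinf(data[i])   // maps to hardware ISA: fast, lower accuracy
                       : sinf(data[i]);    // multiple instructions: slower, higher accuracy
}

Compiling with nvcc --use_fast_math maps the standard calls onto the fast intrinsics globally, which is convenient but should be validated for accuracy.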
OPTIMIZE
Optimizing CPU-GPU Interaction: Maximizing PCIe Throughput
© NVIDIA 2013
Maximizing PCIe Throughput
Use transfers that are of reasonable size (a few MB, at least)
Use pinned system memory
Overlap memcopies with useful computation
© NVIDIA 2013
Pinned (non-pageable) memory
Pinned memory enables:
faster PCIe copies
memcopies asynchronous with CPU
memcopies asynchronous with GPU
Usage
cudaHostAlloc / cudaFreeHost
instead of malloc / free
cudaHostRegister / cudaHostUnregister
pin regular memory after allocation
Implication:
pinned memory is essentially removed from host virtual memory
© NVIDIA 2013
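A hedged usage sketch of the pinned-memory APIs listed above (buffer size and stream handling are illustrative):

#include <cuda_runtime.h>

void pinned_copy_example(float *d_buf, size_t n, cudaStream_t stream)
{
    float *h_buf = 0;
    cudaHostAlloc((void**)&h_buf, n * sizeof(float), cudaHostAllocDefault);

    // ... fill h_buf on the CPU ...

    // The copy can overlap with CPU and GPU work because h_buf is pinned.
    cudaMemcpyAsync(d_buf, h_buf, n * sizeof(float),
                    cudaMemcpyHostToDevice, stream);

    cudaStreamSynchronize(stream);   // wait before reusing or freeing the buffer
    cudaFreeHost(h_buf);
}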
Asynchronicity in CUDA
Default:
Kernel launches are asynchronous with CPU
Memcopies (D2H, H2D) block CPU thread
CUDA calls are serialized by the driver
Streams and async functions provide additional asynchronicity:
Memcopies (D2H, H2D) asynchronous with CPU
Ability to concurrently execute kernels and memcopies
Stream: sequence of ops that execute in issue-order on GPU
Operations from different streams may be interleaved
Kernels and memcopies from different streams can be overlapped
© NVIDIA 2013
OPTIMIZE
Optimizing CPU-GPU Interaction: Overlapping Kernel
Execution with Memory Copies
© NVIDIA 2013
Overlap kernel and memory copy
Requirements:
D2H or H2D memcopy from pinned memory
Kernel and memcopy in different, non-0 streams
Code:
cudaStream_t stream1, stream2;
cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
cudaMemcpyAsync( dst, src, size, dir, stream1 );
kernel<<<grid, block, 0, stream2>>>(…);
(The memcopy and the kernel are potentially overlapped.)
© NVIDIA 2013
Call Sequencing for Optimal Overlap
CUDA calls are dispatched in the sequence they were issued
Kepler can concurrently execute:
Up to 32 kernels
Up to 2 memcopies, as long as they are in different directions (D2H, H2D)
A call is dispatched if both are true:
Resources are available
Preceding calls in the same stream have completed
Scheduling:
Kernels are executed in the order in which they were issued
Thread blocks for a given kernel are scheduled if all thread blocks for
preceding kernels have been scheduled and SM resources still available
© NVIDIA 2013
Hyper-Q Enables Efficient Scheduling
Grid Management Unit selects most appropriate task from up to
32 hardware queues (CUDA streams)
Improves scheduling of concurrently executed grids
Particularly interesting for MPI applications when combined with
CUDA MPS (though not limited to MPI applications)
© NVIDIA 2013
Stream Examples without Hyper-Q
(Figure: execution timelines for the issue orders K1,M1,K2,M2; K1,K2,M1,M2; K1,M1,M2; K1,M2,M1; and K1,M2,M2 — K: kernel, M: memcopy, integer: stream ID.)
© NVIDIA 2013
Stream Examples with Hyper-Q
(Figure: the same issue orders as on the previous slide, shown with Hyper-Q — K: kernel, M: memcopy, integer: stream ID.)
© NVIDIA 2013
Grid Management
(Figure: Fermi — stream queue management feeds a work distributor with 16 active grids dispatching to SMs. Kepler GK110 — stream queue management plus a Grid Management Unit holding 1000s of pending and suspended grids feeds a work distributor with 32 active grids dispatching to SMXs; CUDA-generated work also enters the Grid Management Unit.)
© NVIDIA 2013
Stream Dependencies Example
void foo(void)
{
kernel_A<<<g,b,s, stream_1>>>();
kernel_B<<<g,b,s, stream_1>>>();
kernel_C<<<g,b,s, stream_1>>>();
}
void bar(void)
{
kernel_P<<<g,b,s, stream_2>>>();
kernel_Q<<<g,b,s, stream_2>>>();
kernel_R<<<g,b,s, stream_2>>>();
}
© NVIDIA 2013
Stream Dependencies without Hyper-Q
(Figure: stream_1 (kernel_A, kernel_B, kernel_C) and stream_2 (kernel_P, kernel_Q, kernel_R) feed a single hardware work queue, R—Q—P queued behind C—B—A.)
© NVIDIA 2013
Stream Dependencies with Hyper-Q
Hyper-Q allows 32-way concurrency
Avoids inter-stream dependencies
(Figure: multiple hardware work queues — C—B—A and R—Q—P each occupy their own queue.)
© NVIDIA 2013
Hyper-Q Example: Building a Pipeline
Heterogeneous system: overlap work and data movement
Kepler + CUDA 5: Hyper-Q and CPU Callbacks
© NVIDIA 2013
Tick-Tock Matrix Multiply
cudaMemcpyAsync(devA1, A[tile0], N, H2D, stream1);
cudaMemcpyAsync(devB1, B[tile0], N, H2D, stream1);
DGEMM<<<g,b,s, stream1>>>(devA1, devB1, devC1);
cudaMemcpyAsync(devA2, A[tile1], N, H2D, stream2);
cudaMemcpyAsync(devB2, B[tile1], N, H2D, stream2);
DGEMM<<<g,b,s, stream2>>>(devA2, devB2, devC2);
cudaMemcpyAsync(C[tile0], devC1, N, D2H, stream1);
cudaMemcpyAsync(devA1, A[tile2], N, H2D, stream1);
cudaMemcpyAsync(devB1, B[tile2], N, H2D, stream1);
DGEMM<<<g,b,s, stream1>>>(devA1, devB1, devC1);
cudaMemcpyAsync(C[tile2], devC1, N, D2H, stream1);
cudaMemcpyAsync(devA1, A[tile4], N, H2D, stream1);
cudaMemcpyAsync(devB1, B[tile4], N, H2D, stream1);
DGEMM<<<g,b,s, stream1>>>(devA1, devB1, devC1);
© NVIDIA 2013
Tick-Tock Matrix Multiply
(Figure: two streams alternate between copy and compute — Copy Tile 0, Copy Tile 1, Compute Tile 0, Copy Tile 2, Compute Tile 1, Copy Tile 3, Compute Tile 2, Copy Tile 4, Compute Tile 3, Copy Tile 5, Compute Tile 4 — with A and B tiles staged from CPU memory into GPU buffers dA1/dB1 and dA2/dB2, DGEMM producing dC on the GPU, and results copied back to C in CPU memory.)
© NVIDIA 2013
Just a Higher Level of Parallelism
Problem is decomposed into parallel
“workers”.
At any given time
1 worker is using compute resources
1 worker is using copy transfers
Importantly:
The PCI-E link is kept saturated with
useful work.
For DGEMM, compute is also saturated.
Arch specific balancing
Depends on CPU and GPU
characteristics.
(Figure: result matrix with tiles alternately computed by stream 1 and stream 2.)
© NVIDIA 2013
Pipeline Code
for (unsigned int i = 0 ; i < nIterations ; ++i)
{
// Copy data from host to device
cudaMemcpyAsync(d_data, h_data, cpybytes, cudaMemcpyHostToDevice,
*r_streams.active());
// Launch device kernel A
kernel_A<<<gdim, bdim, 0, *r_streams.active()>>>();
// Copy data from device to host
cudaMemcpyAsync(h_data, d_data, cpybytes, cudaMemcpyDeviceToHost,
*r_streams.active());
// Launch host post-process
cudaStreamAddCallback(*r_streams.active(), cpu_callback,
r_streamids.active(), 0);
// Rotate streams
r_streams.rotate(); r_streamids.rotate();
}
© NVIDIA 2013
Pipeline Without Hyper-Q
False dependencies prevent overlap
Breadth-first launch gives overlap, but requires more complex code
© NVIDIA 2013
Pipeline With Hyper-Q
Full overlap of all engines
Simple to program
© NVIDIA 2013
Hyper-Q also enables CUDA MPS
No application modifications necessary
Start the MPS daemon using nvidia-cuda-mps-control -d
CUDA driver detects daemon and routes GPU accesses through it
Combines requests from several processes into one GPU context
(shared virtual memory space, concurrent kernels possible, etc.)
Allows for overlap of kernels with memcopies without explicit
use of streams
© NVIDIA 2013
But Hyper-Q != CUDA MPS
One process: No MPS required!
Automatically utilized
One or many host threads no problem
Just need multiple CUDA streams
Removes false dependencies among CUDA streams that
reduce effective concurrency on earlier GPUs
Multi-process: Use CUDA MPS
Leverages task-level parallelism across processes (e.g., MPI ranks)
MPI is not required for MPS – it’s just the common case for HPC
© NVIDIA 2013
Deploy
We’ve removed (or reduced) some bottleneck
Our app is now faster while remaining fully functional*
Let’s take advantage of that!
*Don’t forget to check correctness at every step
© NVIDIA 2013
GPU Optimization Fundamentals
Recap:
Develop systematically with APOD
Expose sufficient parallelism
Utilize parallel processing resources efficiently
Assess
Parallelize
Optimize
Deploy
© NVIDIA 2013
Online Resources
www.udacity.com
docs.nvidia.com
developer.nvidia.com
devtalk.nvidia.com
www.stackoverflow.com |
DeepSpeed Inference: Multi-GPU inference with customized inference kernels and quantization support
March 15, 2021
While DeepSpeed supports training advanced large-scale models, using these trained models in the desired application scenarios is still challenging due to three major limitations in existing inference solutions: 1) lack of support for multi-GPU inference to fit large models and meet latency requirements, 2) limited GPU kernel performance when running inference with small batch sizes, and 3) difficulties in exploiting quantization, which includes both quantizing the model to reduce the model size and latency as well as supporting high-performance inference of quantized models without specialized hardware.
To handle these challenges, we introduce DeepSpeed Inference, which seamlessly adds high-performance inference support to large models trained in DeepSpeed with three key features: inference-adapted parallelism for multi-GPU inference, inference-optimized kernels tuned for small batch sizes, and flexible support for quantize-aware training and inference kernels for quantized models.
Multi-GPU Inference with Adaptive Parallelism
Parallelism is an effective approach to fit large models and reduce per-device memory consumption for both training and inference. However, simply applying the parallelism choices and degrees used for training to inference does not work well. The model-parallelism (MP) and pipeline-parallelism (PP) configuration, apart from data parallelism (DP), is normally set during model training based on the memory footprint, the computation style, and the resource budget. On one hand, inference intrinsically requires less memory, so it can afford a larger partition per device, which helps reduce the degree of parallelism needed for model deployment. On the other hand, optimizing latency, or meeting latency requirements, is often a first-class citizen in inference, while training optimizes for throughput.
To obtain the desired latency, DeepSpeed Inference automatically adapts MP as an effective approach to reduce model latency, and its parallelism degree is often determined first. With MP, we can split the model and parallelize computational operations across multiple devices (GPUs) to reduce latency, but it reduces computation granularity and increases communication, which may hurt throughput. Once the latency target has been met, DeepSpeed can apply pipeline parallelism to maximize throughput. Overall, DeepSpeed Inference supports flexible adaptation of both the parallelism approach and the degree choices from training to inference, minimizing latency while saving deployment costs.
Customized Inference Kernels for Boosted Compute Efficiency of Transformer Blocks
To achieve high compute efficiency, DeepSpeed-inference offers inference kernels tailored for Transformer blocks through operator fusion, taking model-parallelism for multi-GPU into account. The main difference between our kernel-fusion scheme and similar approaches is that we not only fuse element-wise operations (such as bias-add, residual, and activation function), but also merge the General matrix multiply (GeMM) operations with other operations. To do this, we design an efficient implementation for the vector-matrix or skinny matrix-matrix multiplication that allows us to fuse more operations at the reduction boundary of GeMM operations.
Kernel-Fusion
We take two main policies for fusing operations: 1) keeping the access pattern of inputs and outputs intact throughout the sequence of fused operations; 2) fusing operations at each all-reduce boundary. The first policy ensures that different thread blocks do not need to transfer data between streaming multiprocessors (SMs): there is no straightforward way for SMs to communicate other than through main memory, and going through main memory would add block-synchronization overhead because of the non-deterministic ordering of memory accesses. The reason behind the second policy is that we cannot continue execution unless the partial results have been reduced across the model-parallel GPUs.
Figure 1: Transformer layer with Megatron-style model-parallelism all-reduce components. The figure illustrates the parts of the layer fused together with broken lines (the width of a line shows the fusion depth).
Figure 1 shows the different components of a Transformer layer and the groups of operations considered for fusion in our inference optimization. We also consider the NVIDIA Megatron-LM style of parallelism that partitions the attention (Attn) and feed-forward (FF) blocks across multiple GPUs. Thus, we include the two all-reduce operations that reduce the results among parallel GPUs after the Attn and FF blocks. As Figure 1 shows, we fuse the operations inside a Transformer layer at four main regions, indicated by the broken lines in the figure.
To fuse these operations, we exploit shared memory as an intermediate cache for transferring data between the reduction operations used in layer-norm and GeMM and the element-wise operations. Moreover, we use warp-level instructions to communicate data between threads when reducing partial computations. In addition, we use a new schedule for GeMM operations, which allows fusing as many operations as needed for the third kernel-fusion. We also combine the GeMMs for the attention computation in the second kernel-fusion by using an implicit matrix transformation in order to reduce memory pressure. Compared to the unfused computation style using cuBLAS GeMM, we improve performance by 1.5x, 2.9x, 3x, and 1.2x for these four kernel-fusions, respectively.
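To make the element-wise side of this fusion concrete, here is a minimal hedged sketch of a fused bias-add plus GELU kernel written in CUDA; it illustrates the general fusion idea only and is not DeepSpeed's actual kernel, which additionally fuses GeMM schedules and stops at the all-reduce boundaries described above:

// One pass over the GeMM output applies bias-add and GELU together,
// instead of launching two separate element-wise kernels.
__global__ void fused_bias_gelu(float *out, const float *bias, int rows, int cols)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < rows * cols)
    {
        float v = out[idx] + bias[idx % cols];                      // bias-add
        float g = 0.5f * v * (1.0f + tanhf(0.7978845608f *          // tanh GELU approximation
                                           (v + 0.044715f * v * v * v)));
        out[idx] = g;                                               // single write back to HBM
    }
}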
Seamless pipeline from training to inference with automatic kernel-injection
To run the model in inference mode, DeepSpeed simply requires the location of the model checkpoints and the desired parallelism configuration, i.e., the MP/PP degree. DeepSpeed Inference kernels can also be enabled for many well-known model architectures such as HuggingFace (BERT and GPT-2) or Megatron GPT-based models using a pre-defined policy map that maps the original parameters to the parameters in the inference kernels. For other transformer-based models, users can specify their own policy map. Note that DS-Inference can run independently of the training pipeline as long as it receives all model checkpoints, and the DeepSpeed Transformer kernels for inference can be injected into any Transformer model if the right mapping policy is defined. For more information on how to enable the Transformer inference kernel as well as specifying parallelism, please refer to our inference tutorial.
Flexible quantization support
To further reduce the inference cost for large-scale models, we created the DeepSpeed Quantization Toolkit, supporting flexible quantize-aware training and high-performance kernels for quantized inference.
For training, we introduce a novel approach called Mixture of Quantization (MoQ), which is inspired by mixed-precision training while seamlessly applying quantization. With MoQ, we can control the precision of the model by simulating the impact of quantization when updating the parameters at each step of training. Moreover, it supports flexible quantization policies and schedules—we find that by dynamically adjusting the number of quantization bits during training, the final quantized model provides higher accuracy under the same compression ratio. To adapt to different tasks, MoQ can also leverage the second order information of models to detect their sensitivity to precision and adjust the quantization schedule and target accordingly.
To maximize the performance gains from the quantization model, we provide inference kernels tailored for quantized models that reduce latency through optimizing data movement but do not require specialized hardware. Finally, our toolkit does not require any code changes on the client side, making it easy to use.
Performance results
Boosting throughput and reducing inference cost. Figure 3 shows the inference throughput per GPU for the three model sizes corresponding to the three Transformer networks, GPT-2, Turing-NLG, and GPT-3. DeepSpeed Inference increases per-GPU throughput by 2 to 4 times when using the same FP16 precision as the baseline. By enabling quantization, we boost throughput further. We reach a throughput improvement of 3x for GPT-2, 5x for Turing-NLG, and 3x for a model that is similar in characteristics and size to GPT-3, which directly translates to a 3-5x inference cost reduction for serving these large models. In addition, we achieve these throughput and cost improvements without compromising latency, as shown in Figure 5.
Figure 3: Inference throughput for different model sizes. DeepSpeed Inference achieves 3x to 5x higher throughput than baseline.
One source of inference cost reduction is reducing the number of GPUs needed to host large models, as shown in Figure 4. The reduction in GPU resources comes from 1) using inference-adapted parallelism, which allows users to adjust the model and pipeline parallelism degree from the trained model checkpoints, and 2) shrinking the model memory footprint by half with INT8 quantization. As shown in this figure, we use 2x fewer GPUs to run inference for the 17B model size by adapting the parallelism. Together with INT8 quantization through DeepSpeed MoQ, we use 4x and 2x fewer GPUs for the 17B and 175B sizes, respectively.
Figure 4: Number of GPUs used for running inference on the different model sizes shown in Figure 3.
Reducing inference latency. For the application scenarios where inference latency is critical, we can increase model parallelism degree in DeepSpeed Inference to reduce inference latency further. As Figure 5 depicts, we can reduce the latency by 2.3x compared to PyTorch as we increase the model-parallelism size to 4. Furthermore, we can still have high latency improvement with a fewer number of GPUs by adapting the parallelism at inference and using MoQ to quantize the model. We obtain 1.3x and 1.9x speedups while using 4x and 2x lower resources than baseline, respectively.
Figure 5. Inference latency for the 17B model using different parallelism configuration to optimize latency.
Updated: March 15, 2021
|
Maximizing GPU Kernel Optimization in Python with Triton
Author(s): Chaim Rand
TL;DR: Learn how to optimize your Python code for the GPU using Triton. This book provides practical tips and techniques for improving performance and unleashing the full potential of GPU kernels. From data management to parallelization, it covers everything you need to know to master GPU kernel optimization in Python.
Disclaimer: This post has been created automatically using generative AI, including DALL-E, Gemini, OpenAI, and others. Please take its contents with a grain of salt.
Introduction to Triton and GPU Kernel Optimization
In recent years, the use of graphics processing units (GPUs) has become increasingly popular in the field of data analysis and scientific computing. These powerful processors are capable of performing complex calculations and handling large datasets at lightning-fast speeds. However, harnessing the full potential of GPUs requires specialized knowledge and skills in optimization techniques. This is where Triton comes in: a tool for writing and optimizing GPU kernels directly in Python.
Understanding Triton and Its Capabilities
Triton is an open-source library, originally created by Philippe Tillet and now developed by OpenAI, that allows users to write high-performance GPU kernels in Python. It provides a simple and intuitive interface for writing code that can be executed on GPUs, without the need for complex and time-consuming low-level programming. With Triton, users can more easily harness the power of GPUs and accelerate their code, making it well suited for tasks such as machine learning, data analysis, and scientific simulations.
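To make the programming model concrete, here is a minimal Triton kernel, a masked vector add written entirely in Python. It uses only the standard triton.language primitives (tl.program_id, tl.arange, tl.load, tl.store) and is a sketch of the style, not a tuned kernel.
import torch
import triton
import triton.language as tl
@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements               # guard the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)            # one program per 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
assert torch.allclose(add(x, y), x + y)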
The Benefits of Using Triton for GPU Kernel Optimization
One of the main advantages of using Triton for GPU kernel optimization is its ease of use. With its simple and intuitive interface, even users with little or no experience in GPU programming can quickly learn how to write efficient and high-performing code. Additionally, Triton offers a wide range of built-in functions and optimizations that can significantly speed up the execution of code on GPUs. This not only saves time and effort but also allows users to focus on the logic and algorithms of their code rather than worrying about low-level optimizations.
Mastering GPU Kernel Optimization with Triton
To fully unleash the power of Triton, it is essential to understand its various optimization techniques and how to use them effectively. These include techniques such as data layout optimizations, loop unrolling, and memory coalescing, among others. Triton also provides a set of tools for profiling and debugging, which can help identify bottlenecks and optimize code further. By mastering these techniques and tools, users can achieve significant performance gains and fully utilize the capabilities of GPUs.
Real-World Applications of Triton in GPU Kernel Optimization
The applications of Triton in GPU kernel optimization are vast and diverse. From accelerating machine learning algorithms to speeding up scientific simulations, Triton has been used in a wide range of fields and industries. For example, researchers have used Triton to optimize code for computational fluid dynamics simulations, resulting in a 10x speedup compared to traditional CPU-based code. In the field of finance, Triton has been used to accelerate risk analysis calculations. With the increasing demand for faster and more powerful computing, understanding and utilizing GPU optimization techniques can be a valuable skill. With Triton, developers can easily harness the power of GPUs and achieve optimal results. It is a valuable tool for those looking to maximize their use of GPU technology in Python.
|
Optimize TensorFlow GPU performance with the TensorFlow Profiler
Overview
This guide will show you how to use the TensorFlow Profiler with TensorBoard to
gain insight into and get the maximum performance out of your GPUs, and debug
when one or more of your GPUs are underutilized.
If you are new to the Profiler:
Keep in mind that offloading computations to the GPU may not always be beneficial,
particularly for small models. There can be overhead due to data transfer between
the host (CPU) and the device (GPU), as well as the latency involved when the host
launches GPU kernels.
Performance optimization workflow
This guide outlines how to debug performance issues starting with a single GPU,
then moving to a single host with multiple GPUs.
It is recommended to debug performance issues in the following order: first optimize and debug the performance on one GPU, then move on to a single host with multiple GPUs.
For example, if you are using a TensorFlow
distribution strategy
to train a model on a single host with multiple GPUs and notice suboptimal GPU
utilization, you should first optimize and debug the performance for one GPU
before debugging the multi-GPU system.
As a baseline for getting performant code on GPUs, this guide assumes you are
already using tf.function. The Keras Model.compile and Model.fit APIs will
utilize tf.function automatically under the hood. When writing a custom
training loop with tf.GradientTape, refer to the
Better performance with tf.function guide
to learn how to enable tf.function.
The next sections discuss suggested approaches for each of the scenarios above
to help identify and fix performance bottlenecks.
1. Optimize the performance on one GPU
In an ideal case, your program should have high GPU utilization, minimal CPU
(the host) to GPU (the device) communication, and no overhead from the input
pipeline.
The first step in analyzing the performance is to get a profile for a model
running with one GPU.
TensorBoard's Profiler
overview page—which
shows a top level view of how your model performed during a profile run—can
provide an idea of how far away your program is from the ideal scenario.
The key numbers to pay attention to in the overview page are:
Achieving optimal performance means maximizing these numbers in all three cases.
To get an in-depth understanding of your program, you will need to be familiar
with TensorBoard's Profiler
trace viewer. The
sections below show some common trace viewer patterns that you should look for
when diagnosing performance bottlenecks.
Below is an image of a model trace view running on one GPU. From the TensorFlow
Name Scope and TensorFlow Ops sections, you can identify different parts of
the model, like the forward pass, the loss function, backward pass/gradient
calculation, and the optimizer weight update. You can also see the ops running
on the GPU next to each Stream, which refers to a CUDA stream. Each stream is
used for specific tasks. In this trace, Stream#118 is used to launch compute
kernels and device-to-device copies. Stream#119 is used for host-to-device
copy and Stream#120 for device to host copy.
The trace below shows common characteristics of a performant model.
For example, the GPU compute timeline (Stream#118) looks "busy" with very few
gaps. There are minimal copies from host to device (Stream #119) and from
device to host (Stream #120), as well as minimal gaps between steps. When you
run the Profiler for your program, you may not be able to identify these ideal
characteristics in your trace view. The rest of this guide covers common
scenarios and how to fix them.
1. Debug the input pipeline
The first step in GPU performance debugging is to determine if your program is
input-bound. The easiest way to figure this out is to use the Profiler’s
Input-pipeline analyzer,
on TensorBoard, which provides an overview of time spent in the input pipeline.
If your input pipeline contributes significantly to step time, you can take
actions such as prefetching data, parallelizing data extraction and map
transformations, and caching static datasets.
In addition, refer to the
best practices for optimizing the input data pipeline.
2. Debug the performance of one GPU
There are several factors that can contribute to low GPU utilization. Below are
some scenarios commonly observed when looking at the
trace viewer and
potential solutions.
1. Analyze gaps between steps
A common observation when your program is not running optimally is gaps between
training steps. In the image of the trace view below, there is a large gap
between steps 8 and 9, meaning that the GPU is idle during that time.
If your trace viewer shows large gaps between steps, this could be an indication
that your program is input bound. In that case you should refer to the previous
section on debugging your input pipeline if you have not already done so.
However, even with an optimized input pipeline, you can still have gaps between
the end of one step and the start of another due to CPU thread contention.
tf.data makes use of background threads to parallelize pipeline processing.
These threads may interfere with GPU host-side activity that happens at the
beginning of each step, such as copying data or scheduling GPU operations.
If you notice large gaps on the host side, which schedules these ops on the GPU,
you can set the environment variable TF_GPU_THREAD_MODE=gpu_private. This
ensures that GPU kernels are launched from their own dedicated threads, and
don't get queued behind tf.data work.
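A minimal sketch of wiring this up is shown below; the environment variables must be set before TensorFlow initializes its GPU devices, and TF_GPU_THREAD_COUNT is optional (2 is a typical value).
import os
# Must be set before TensorFlow creates the GPU device.
os.environ["TF_GPU_THREAD_MODE"] = "gpu_private"  # dedicated kernel-launch threads per GPU
os.environ["TF_GPU_THREAD_COUNT"] = "2"           # launch threads per GPU (optional)
import tensorflow as tf  # imported after the environment variables on purpose
print(tf.config.list_physical_devices("GPU"))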
Gaps between steps can also be caused by metric calculations, Keras callbacks,
or ops outside of tf.function that run on the host. These ops don’t have as
good performance as the ops inside a TensorFlow graph. Additionally, some of
these ops run on the CPU and copy tensors back and forth from the GPU.
If after optimizing your input pipeline you still notice gaps between steps in
the trace viewer, you should look at the model code between steps and check if
disabling callbacks/metrics improves performance. Some details of these ops are
also on the trace viewer (both device and host side). The recommendation in this
scenario is to amortize the overhead of these ops by executing them after a
fixed number of steps instead of every step. When using the Model.compile method in
the tf.keras API, setting the steps_per_execution flag does
this automatically. For custom training loops, use tf.while_loop.
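With the Keras API this is a single compile argument, as in the sketch below; the small model and the value of 64 steps per execution are placeholders.
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])
# Run 64 training steps per tf.function call so host-side work such as
# callbacks and metric syncing is amortized instead of paid every step.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    steps_per_execution=64,
)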
2. Achieve higher device utilization
1. Small GPU kernels and host kernel launch delays
The host enqueues kernels to be run on the GPU, but there is a latency (around
20-40 μs) involved before kernels are actually executed on the GPU. In an ideal
case, the host enqueues enough kernels on the GPU such that the GPU spends most
of its time executing, rather than waiting on the host to enqueue more kernels.
The Profiler's
overview page on
TensorBoard shows how much time the GPU was idle due to waiting on the host to
launch kernels. In the image below, the GPU is idle for about 10% of the step
time waiting on kernels to be launched.
The trace viewer for
this same program shows small gaps between kernels where the host is busy
launching kernels on the GPU.
By launching a lot of small ops on the GPU (like a scalar add, for example), the
host might not keep up with the GPU. The
TensorFlow Stats
tool in TensorBoard for the same Profile shows 126,224 Mul operations taking
2.77 seconds. Thus, each kernel is about 21.9 μs, which is very small (around
the same time as launch latency) and can potentially result in host kernel
launch delays.
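The arithmetic behind that estimate is simply total time divided by kernel count, a sanity check you can apply to any op in the TensorFlow Stats table:
# Average kernel duration from the numbers quoted above.
total_time_s = 2.77        # total time spent in Mul kernels
num_kernels = 126_224      # number of Mul ops in the profile
avg_us = total_time_s / num_kernels * 1e6
print(f"average kernel time: {avg_us:.1f} us")  # ~21.9 us, comparable to launch latency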
If your trace viewer shows many small gaps between ops on the GPU like in the
image above, you can reduce the number of small kernels, for example by fusing
ops with XLA (see the Fuse ops section below) or by batching more work into each op.
2. TensorFlow op placement
The Profiler
overview page shows
you the percentage of ops placed on the host vs. the device (you can also verify
the placement of specific ops by looking at the
trace viewer). Like in
the image below, you want the percentage of ops on the host to be very small
compared to the device.
Ideally, most of the compute intensive ops should be placed on the GPU.
To find out which devices the operations and tensors in your model are assigned
to, set tf.debugging.set_log_device_placement(True) as the first statement of
your program.
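A minimal sketch; each op placement is then printed as it executes.
import tensorflow as tf
# Must be called before any ops run.
tf.debugging.set_log_device_placement(True)
a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))
c = tf.matmul(a, b)  # the log should show MatMul on /device:GPU:0 if a GPU is available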
Note that in some cases, even if you specify an op to be placed on a particular
device, its implementation might override this condition (example: tf.unique).
Even for single GPU training, specifying a distribution strategy, such as
tf.distribute.OneDeviceStrategy, can result in more deterministic placement of
ops on your device.
One reason for having the majority of ops placed on the GPU is to prevent
excessive memory copies between the host and the device (memory copies for model
input/output data between host and device are expected). An example of excessive
copying is demonstrated in the trace view below on GPU streams #167, #168,
and #169.
These copies can sometimes hurt the performance if they block GPU kernels from
executing. Memory copy operations in the
trace viewer have more
information about the ops that are the source of these copied tensors, but it
might not always be easy to associate a memCopy with an op. In these cases, it
is helpful to look at the ops nearby to check if the memory copy happens at the
same location in every step.
3. More efficient kernels on GPUs
Once your program's GPU utilization is acceptable, the next step is to look into
increasing the efficiency of the GPU kernels by utilizing Tensor Cores or fusing
ops.
1. Utilize Tensor Cores
Modern NVIDIA® GPUs have specialized
Tensor Cores that can
significantly improve the performance of eligible kernels.
You can use TensorBoard's
GPU kernel stats
to visualize which GPU kernels are Tensor Core-eligible, and which kernels are
using Tensor Cores. Enabling fp16 (see Enabling Mixed Precision section below)
is one way to make your program’s General Matrix Multiply (GEMM) kernels (matmul
ops) utilize the Tensor Core. GPU kernels use the Tensor Cores efficiently when
the precision is fp16 and input/output tensor dimensions are divisible by 8 or
16 (for int8).
For other detailed recommendations on how to make kernels efficient for GPUs,
refer to the
NVIDIA® deep learning performance
guide.
2. Fuse ops
Use tf.function(jit_compile=True) to fuse smaller ops to form bigger kernels
leading to significant performance gains. To learn more, refer to the
XLA guide.
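A small sketch of what this looks like in practice; the three element-wise/reduction ops below are the kind of pattern XLA will fuse into a single kernel.
import tensorflow as tf
@tf.function(jit_compile=True)
def scaled_sum(x, w, b):
    # Multiply, add, and reduce are compiled into one fused XLA kernel.
    return tf.reduce_sum(x * w + b, axis=-1)
x = tf.random.uniform((256, 1024))
w = tf.random.uniform((256, 1024))
b = tf.random.uniform((256, 1024))
print(scaled_sum(x, w, b).shape)  # (256,)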
3. Enable mixed precision and XLA
After following the above steps, enabling mixed precision and XLA are two
optional steps you can take to improve performance further. The suggested
approach is to enable them one by one and verify that the performance benefits
are as expected.
1. Enable mixed precision
The TensorFlow
Mixed precision guide
shows how to enable fp16 precision on GPUs. Enable
AMP on NVIDIA® GPUs to
use Tensor Cores and realize up to 3x overall speedups when compared to using
just fp32 (float32) precision on Volta and newer GPU architectures.
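Enabling the policy is one line in Keras; the sketch below assumes a model built after the policy is set, with the output layer kept in float32 for numerical stability and layer widths divisible by 8 so the GEMMs are Tensor Core eligible.
import tensorflow as tf
# Compute in fp16, keep variables in fp32; enables Tensor Core GEMMs on Volta and newer.
tf.keras.mixed_precision.set_global_policy("mixed_float16")
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4096, activation="relu", input_shape=(1024,)),
    tf.keras.layers.Dense(4096, activation="relu"),
    # Keep the output layer in float32 so the softmax/loss stays numerically stable.
    tf.keras.layers.Dense(10, dtype="float32"),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))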
Make sure that matrix/tensor dimensions satisfy requirements for calling kernels
that use Tensor Cores. GPU kernels use the Tensor Cores efficiently when the
precision is fp16 and input/output dimensions are divisible by 8 or 16 (for
int8).
Note that with cuDNN v7.6.3 and later, convolution dimensions will automatically
be padded where necessary to leverage Tensor Cores.
Follow the best practices below to maximize the performance benefits of fp16
precision.
1. Use optimal fp16 kernels
With fp16 enabled, your program’s matrix multiplications (GEMM) kernels,
should use the corresponding fp16 version that utilizes the Tensor Cores.
However, in some cases, this does not happen and you do not experience the
expected speedup from enabling fp16, as your program falls back to the
inefficient implementation instead.
The GPU kernel
stats page shows which ops are Tensor Core eligible and which kernels are
actually using the efficient Tensor Core. The
NVIDIA® guide on deep learning performance
contains additional suggestions on how to leverage Tensor Cores. Additionally,
the benefits of using fp16 will also show in kernels that were previously
memory bound, as now the ops will take half the time.
2. Dynamic vs. static loss scaling
Loss scaling is necessary when using fp16 to prevent underflow due to low
precision. There are two types of loss scaling, dynamic and static, both of
which are explained in greater detail in the
Mixed Precision guide.
You can use the mixed_float16 policy to automatically enable loss scaling
within the Keras optimizer.
When trying to optimize performance, it is important to remember that dynamic
loss scaling can introduce additional conditional ops that run on the host, and
lead to gaps that will be visible between steps in the trace viewer. On the
other hand, static loss scaling does not have such overheads and can be a better
option in terms of performance with the catch that you need to specify the
correct static-loss scale value.
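If you want to trade the dynamic-scaling overhead for a hand-picked scale, the Keras loss-scale optimizer can be constructed explicitly, as sketched below; the initial_scale value is only a placeholder to tune for your model.
import tensorflow as tf
# Dynamic loss scaling (default): adjusts the scale on the fly, at the cost of
# conditional host-side ops that can appear as gaps between steps.
dynamic_opt = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(learning_rate=0.01))
# Static loss scaling: no per-step adjustment overhead, but you must choose a
# scale large enough to avoid underflow and small enough to avoid overflow.
static_opt = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(learning_rate=0.01),
    dynamic=False,
    initial_scale=1024.0)  # placeholder value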
2. Enable XLA with tf.function(jit_compile=True) or auto-clustering
As a final step in getting the best performance with a single GPU, you can
experiment with enabling XLA, which will fuse ops and lead to better device
utilization and a lower memory footprint. For details on how to enable XLA in
your program with tf.function(jit_compile=True) or auto-clustering, refer to
the XLA guide.
You can set the global JIT level to -1 (off), 1, or 2. A higher level is
more aggressive and may reduce parallelism and use more memory. Set the value to
1 if you have memory restrictions. Note that XLA does not perform well for
models with variable input tensor shapes as the XLA compiler would have to keep
compiling kernels whenever it encounters new shapes.
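Auto-clustering itself needs no code changes; a sketch of turning it on through the documented environment flag is shown below (set before TensorFlow starts).
import os
# Enable XLA auto-clustering for ops placed on GPU. "--tf_xla_auto_jit=2" is the
# aggressive setting; use 1 under memory pressure, and unset it to disable again.
os.environ["TF_XLA_FLAGS"] = "--tf_xla_auto_jit=2"
import tensorflow as tf
@tf.function  # no jit_compile needed; eligible ops are clustered automatically
def step(x, w):
    return tf.nn.relu(tf.matmul(x, w))
x = tf.random.uniform((128, 256))
w = tf.random.uniform((256, 256))
print(step(x, w).shape)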
2. Optimize the performance on the multi-GPU single host
The tf.distribute.MirroredStrategy API can be used to scale model training
from one GPU to multiple GPUs on a single host. (To learn more about how to do
distributed training with TensorFlow, refer to the
Distributed training with TensorFlow,
Use a GPU, and
Use TPUs guides and the
Distributed training with Keras
tutorial.)
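The basic pattern is sketched below; the key points are that variables and the optimizer are created inside the strategy scope and that the global batch size is scaled by the number of replicas. The layer sizes and batch size are placeholders.
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs on this host
print("replicas:", strategy.num_replicas_in_sync)
with strategy.scope():
    # Variables created here are mirrored, and gradients are AllReduced automatically.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
per_replica_batch = 128
global_batch = per_replica_batch * strategy.num_replicas_in_sync
# model.fit(train_dataset.batch(global_batch), epochs=...)  # dataset not shown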
Although the transition from one GPU to multiple GPUs should ideally be scalable
out of the box, you can sometimes encounter performance issues.
When going from training with a single GPU to multiple GPUs on the same host,
ideally you should experience the performance scaling with only the additional
overhead of gradient communication and increased host thread utilization.
Because of this overhead, you will not have an exact 2x speedup if you move from
1 to 2 GPUs, for example.
The trace view below shows an example of the extra communication overhead when
training on multiple GPUs. There is some overhead to concatenate the gradients,
communicate them across replicas, and split them before doing the weight update.
The following checklist will help you achieve better performance when optimizing
the performance in the multi-GPU scenario:
1. Optimize gradient AllReduce
When training with a synchronous strategy, each device receives a portion of the
input data.
After computing the forward and backwards passes through the model, the
gradients calculated on each device need to be aggregated and reduced. This
gradient AllReduce happens after the gradient calculation on each device, and
before the optimizer updates the model weights.
Each GPU first concatenates the gradients across the model layers, communicates
them across GPUs using tf.distribute.CrossDeviceOps
(tf.distribute.NcclAllReduce is the default), and then returns the gradients
after reduction per layer.
The optimizer will use these reduced gradients to update the weights of your
model. Ideally, this process should happen at the same time on all GPUs to
prevent any overheads.
The time to AllReduce should be approximately the same as:
(number of parameters * 4 bytes) / (communication bandwidth)
This calculation is useful as a quick check to understand whether the
performance you have when running a distributed training job is as expected, or
if you need to do further performance debugging. You can get the number of
parameters in your model from Model.summary.
Note that each model parameter is 4 bytes in size since TensorFlow uses fp32
(float32) to communicate gradients. Even when you have fp16 enabled, NCCL
AllReduce utilizes fp32 parameters.
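As a quick worked example of that estimate (both numbers below are placeholders, not measurements; substitute the parameter count from Model.summary and your actual inter-GPU bandwidth):
# Back-of-the-envelope AllReduce time: (number of parameters * 4 bytes) / bandwidth.
num_parameters = 25_000_000        # e.g., a ~25M-parameter model
bytes_per_param = 4                # gradients are communicated in fp32
bandwidth_bytes_per_s = 100e9      # e.g., ~100 GB/s effective inter-GPU bandwidth
allreduce_seconds = num_parameters * bytes_per_param / bandwidth_bytes_per_s
print(f"expected AllReduce time: {allreduce_seconds * 1e3:.2f} ms")  # ~1 ms here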
To get the benefits of scaling, the step-time needs to be much higher compared
to these overheads. One way to achieve this is to use a higher batch size as
batch size affects step time, but does not impact the communication overhead.
2. GPU host thread contention
When running multiple GPUs, the CPU’s job is to keep all of the devices busy by
efficiently launching GPU kernels across the devices.
However, when there are a lot of independent operations that the CPU can
schedule on one GPU, the CPU can decide to use a lot of its host threads to keep
one GPU busy, and then launch kernels on another GPU in a non-deterministic
order. This can cause a skew or negative scaling, which can negatively affect
the performance.
The trace viewer below
shows the overhead when the CPU staggers GPU kernel launches inefficiently, as
GPU1 is idle and then starts running ops after GPU2 has started.
The trace view for the host shows that the host is launching kernels on GPU2
before launching them on GPU1 (note that the below tf_Compute* ops are not
indicative of CPU threads).
If you experience this kind of staggering of GPU kernels in your program's trace
view, the recommended action is to balance the host-side launch work across the GPUs,
for example by using dedicated GPU launch threads (TF_GPU_THREAD_MODE=gpu_private,
as described earlier) so that no single device monopolizes the host threads.
|
CISC 879 : Software Support for Multicore Architectures
Yu, Xuan
Dept of Computer & Information Sciences
University of Delaware
Program Optimization Study on a 128-Core GPU
Shane Ryoo, Christopher I. Rodrigues, Sam S. Stone, Sara S. Baghsorkhi, Sain-Zee Ueng, and Wen-mei W. Hwu
General Idea
Good news:
Improving programmability and generality on the GPU
Possibility to perform a wide variety of parallelization optimizations
Problem:
How do you choose and control optimizations on the GPU properly?
The number of possible optimization combinations is very large, which makes the optimization space tedious to explore
Limited local resources and global memory bandwidth make performance sensitive to even small changes in code, and unpredictable
General Idea
• Presented a study that examines a broad space of optimizations performed on several applications
• Found configurations up to 74% faster than previously thought optimal
• Explained why this happens on the GPU, and discussed some principles and techniques for finding near-optimal configurations
Organization
Architecture Overview (CUDA)
Introduction of execution hardware and threading model
Compute Unified Device Architecture (CUDA)
optimization space search
Discussion of the space search process and the classifications and
characteristics of the program optimizations
Experiments
Discuss result of the search for several applications
Matrix Multiplication
Magnetic resonance Imaging
Sums of Absolute Difference
Conclusion
Architecture
• General programming and compilation process
The GPU is treated as a coprocessor that executes data-parallel kernel functions.
The user supplies both host (CPU) and kernel (GPU) code.
The two are separated and compiled by NVIDIA’s compiler.
Host code transfers data to the GPU and initiates the kernel code via API calls.
Architecture
16 streaming multiprocessors (SMs), each containing eight streaming processors (SPs), or cores.
Each core executes a single thread's instruction in SIMD fashion and has a multiply-add arithmetic unit.
Each SM also has two special functional units (SFUs) for reciprocal square root, sine, and cosine.
All units are fully pipelined.
Architecture
Three-level hierarchy:
• Grid
• Block
• Thread
Each kernel invocation creates a single grid.
A grid consists of many thread blocks (each with up to 512 threads, scheduled on a single SM).
Threads in a block are organized into warps of 32 threads. Each warp executes in SIMD fashion, issuing over four cycles on the eight SPs of an SM.
When one warp stalls, the SM switches to another warp.
Architectural Interactions
• Hardware constraints
These constraints interact with each other, making it difficult to accurately predict the effect of one or more compiler optimizations on CUDA code.
Architectural Interactions
Consider an application that:
• uses 256 threads per block,
• 10 registers per thread,
• 4KB of shared memory per thread block.
It can schedule 3 thread blocks and 768 threads on each SM.
An optimization that increases each thread's register usage from 10 to 11 (an increase of only 10%) will decrease the number of blocks per SM from 3 to 2. This decreases the number of threads on an SM by 33%.
Why? 768 * 11 = 8448 > 8192, the number of registers available per SM.
Architectural Interactions
By contrast, an optimization that increases
each thread block's shared memory usage by
1KB (an increase of 25%) does not decrease
the number of blocks per SM. Clearly, the
optimization space is inherently non-linear.
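A small sketch of that arithmetic, assuming the G80-era per-SM limits implied above (8,192 registers, 16 KB shared memory, 768 threads, 8 resident blocks); it reproduces both the register cliff and the benign shared-memory increase.
# Blocks per SM under assumed G80-era limits.
def blocks_per_sm(threads_per_block, regs_per_thread, smem_per_block,
                  regs=8192, smem=16 * 1024, max_threads=768, max_blocks=8):
    by_regs = regs // (threads_per_block * regs_per_thread)
    by_smem = smem // smem_per_block
    by_threads = max_threads // threads_per_block
    return min(by_regs, by_smem, by_threads, max_blocks)
print(blocks_per_sm(256, 10, 4 * 1024))  # 3 blocks (768 threads resident)
print(blocks_per_sm(256, 11, 4 * 1024))  # 2 blocks: 256 * 3 * 11 = 8448 > 8192
print(blocks_per_sm(256, 10, 5 * 1024))  # still 3 blocks despite +1 KB shared memory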
Optimization space search
Basic strategy for good performance:
Reduce dynamic instruction count while maintaining high SP
occupancy.
Four categories of machine-level behavior to optimize:
Thread-level work redistribution
Instruction count reduction
Intra-thread parallelism
Resource balancing
Example of matrix multiplication
The kernel is tiled so that each thread block computes a square 16-by-16 tile of the output matrix.
tx and ty are each thread's coordinates
in the thread block;
indexA , indexB , and indexC are
positions in the matrices
Threads in a block cooperatively load
parts of the input matrices into shared
memory, amortizing the cost of global
load latency
Using larger tiles enhances the benefit
of data sharing, but reduces scheduling
flexibility since a greater fraction of the
threads on an SM must wait at barrier
synchronizations.
Optimization space search
Four categories of machine-level behavior to optimize:
Thread-level work redistribution: Each thread computes two matrix elements instead of one, which presents opportunities for eliminating redundant instructions previously distributed across threads.
Instruction count reduction: Traditional compiler optimizations such as common subexpression elimination, loop-invariant code removal, and loop unrolling.
Intra-thread parallelism: A developer can unroll loops to facilitate code scheduling in the compiler or explicitly insert pre-fetching code.
Resource balancing: Trade certain resource usages, some of which may be counterintuitive, to produce a better performing application. An example of this is using shared memory to buffer data for reuse, regardless of whether it is shared with other threads. Another example is proactive register spilling by the programmer: by reducing register usage, often a critical resource, more thread blocks can be assigned to each SM.
Experiments
Comparison:
GPU experiments: host system with an AMD Opteron 248 at 2.2 GHz and 1 GB main memory.
CPU versions: Intel Core2 Extreme Quad running at 2.66 GHz with 4 GB main memory.
Experiments
We varied tiling sizes, tiling dimensions, pre-fetching, and unroll factors.
The general trend: larger tile sizes and more work per thread give higher performance.
Initially thought optimal: 1x1 tiling, 16x16 tiles, complete unrolling, pre-fetching: 87.5 GFLOPS.
Actual peak performer: 1x1 tiling, 16x16 tiles, complete unrolling, no pre-fetching: 91.3 GFLOPS, an improvement of 4.2%.
Experiments
Increasing the tiling dimensions:
• Stable performance
• Slight advantage on average
• Does not result in peak performance
Reason: negative effects of unrolling by more than a factor of two for the higher tiling dimensions.
Experiments
In summary:
Larger thread blocks are good due to data sharing.
Complete unrolling is often good because it reduces branch computations.
However, the runtime's scheduling may increase register pressure such that the number of thread blocks assigned to each SM is reduced.
Experiments
Another application: magnetic resonance imaging (MRI) reconstruction
Reconstruct high-quality images from non-Cartesian trajectories.
The computation required to perform these reconstructions is substantial.
Parameters sensitive to performance:
the loop unrolling factor,
the number of threads per block (tpb),
the number of scan points processed by each grid.
Shorter execution time for an unrolling factor of 8; a factor of 4 is often worse than either 2 or 8.
Reason:
• 12 registers with no unrolling: 2 thread blocks per SM, 6.17 s.
• 12 registers with an unrolling factor of 2: 5.52 s.
• 24 registers with an unrolling factor of 4: only 1 block admitted per SM, 5.89 s.
• 30 registers with an unrolling factor of 8: 1 block per SM, 4.64 s.
Experiments
There is a smaller chance of conflicts when fewer thread blocks run on an SM.
To summarize MRI: performance is relatively insensitive to block size, and an unrolling factor of 8 provided the highest performance.
Conclusions
Gradual changes in optimization parameters can have wildly varying effects on an application.
When the local resources used by a thread increase to the point where fewer thread blocks can be assigned to each SM, overall performance is reduced.
The authors believe that scheduling should be better controlled, possibly by the compiler rather than the runtime. |
Acceleration of BLAST Hydra Code on GPU
Tingxing(Tim) Dong
Lawrence Livermore National Laboratory
September 9th, 2011
Mentors: Tzanio Kolev, Robert Rieben
This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
Outline
Introduction of BLAST
Motivation
Details
Optimization and Restriction
Examples and Results
Conclusion
Introduction
BLAST
Solve equations of compressible hydrodynamics with the Finite Element Method (FEM)
Based on Lagrangian frame (moving mesh)
C++ code, parallelized by MPI
BLAST’s features
Curvilinear zone geometries
Higher order field representations
Exact discrete energy conservation by construction
Reduces to classical SGH under simplifying assumptions
Support for 2D/3D meshes
Multiple options for basis functions / quadrature order
Q1Q0, Q2Q1, and Q3Q2 cases
(Figure: velocity/position and density/energy/pressure field representations.)
Euler’s Equations in a Lagrangian Frame
Euler’s Equations
Momentum conservation: ρ dv⃗/dt = ∇ · σ
Mass conservation: (1/ρ) dρ/dt = −∇ · v⃗
Energy conservation: ρ de/dt = σ : ∇v⃗
Equation of state: p = EOS(e, ρ)
Equation of motion: dx⃗/dt = v⃗
Semi-discrete finite element method in BLAST
Momentum conservation: dv/dt = −M_v⁻¹ F · 1
Energy conservation: de/dt = M_e⁻¹ Fᵀ · v
Equation of motion: dx/dt = v
Google Profiler of BLAST
Consider three cases of Lagrange hydro problems with different computational workloads:
2D q2q1: least expensive, most common
2D q3q2: more expensive, greater robustness and accuracy
3D q2q1: most expensive
2D q2q1 (29,855 profile samples): ComputeCornerForces accounts for about 55% of the runtime (MultABt 15.2%, MultAtB 8.4%, CalcEigenvalues 5.3%) and the Hypre PCG solve for about 37% (hypre_CSRMatrixMatvec 25.2%). (Call-graph figure omitted.)
Google Profiler of BLAST
2D q3q2 (255,527 profile samples): ComputeCornerForces accounts for about 54% of the runtime (MultABt 25.0%, MultAtB 7.8%) and the Hypre PCG solve for about 40% (hypre_CSRMatrixMatvec 31.5%). (Call-graph figure omitted.)
Google Profiler of BLAST
3D q2q1 (195,012 profile samples): ComputeCornerForces accounts for about 79% of the runtime (MultABt 31.6%, MultAtB 14.7%, CalcEigenvalues 12.1%) and the Hypre PCG solve for about 17% (hypre_CSRMatrixMatvec 15.0%). (Call-graph figure omitted.)
GPU vs CPU
(Figure: GFLOP/s and memory bandwidth comparison, GPU vs. CPU.)
My background before this work
Fairly familiar with the Finite Difference Method (FDM)
Limited understanding of the Finite Element Method (FEM)
The work started June 1st and ended August 13th: about 2.5 months
Generalized corner forces on the GPU
Semi-discrete finite element method in BLAST
Momentum conservation: dv/dt = −M_v⁻¹ F · 1
Energy conservation: de/dt = M_e⁻¹ Fᵀ · v
Equation of motion: dx/dt = v
Matrix F is highly floating-point intensive and its construction is thread independent.
F is constructed by two loops:
- Loop over zones in the domain (in each processor)
- Loop over quadrature points in this zone: compute the hydro forces associated with this quadrature point
At each point we compute this value absolutely independently; the cost varies with basis functions, dimension, etc., so F can be arbitrarily expensive.
Generalized corner forces on the GPU
Recall the semi-discrete system: dv/dt = −M_v⁻¹ F · 1, de/dt = M_e⁻¹ Fᵀ · v, dx/dt = v.
CUDA kernel 1: Loop over quadrature points. Compute part of F based on v, e, x (transferred from the CPU) and an allocated work space (on the GPU).
CUDA kernel 2: Loop over zones. Each zone does a matrix matrix-transpose multiplication and assembles F (stays on the GPU).
CUDA kernel 3 (in the momentum equation): Compute F · 1 and either return the result to the CPU or keep it on the GPU, depending on the CG solver settings.
CUDA kernel 4 (in the energy equation): Compute Fᵀ · v based on v (results stay on the GPU).
Mass matrix solve on the GPU
Recall the semi-discrete system: dv/dt = −M_v⁻¹ F · 1, de/dt = M_e⁻¹ Fᵀ · v, dx/dt = v.
CUDA kernel 5 (in the momentum equation): custom CG solver (provided by Stan) for M_v⁻¹ F · 1, based on CUBLAS/CUSPARSE, with a diagonal preconditioner (which we added later).
CUDA kernel 6 (in the energy equation): sparse matrix (CSR) multiplication to solve M_e⁻¹ Fᵀ · v by calling CUSPARSE.
Notice:
M_v and M_e⁻¹ are computed once and are read-only thereafter (they stay on the GPU).
M_v⁻¹ is dense, so we did not use it directly.
M_e is block-diagonal with local dense blocks, so M_e⁻¹ is sparse and can be used directly.
Map to the CUDA Thread Hierarchy
CUDA kernel 1: loop over points
Each thread <-> one quadrature point
Each thread block <-> one or more zones (tunable)
In fact more flexible: one zone can be split into two thread blocks
CUDA kernel 2: loop over zones
Each thread block <-> one zone
Each block (zone) does a matrix matrix-transpose multiplication (ABt = C)
Each thread <-> one row of matrix C
CUDA kernels 3, 4
Each thread <-> one zone
Each thread block is composed of 32 or 64 threads (tunable)
CUDA kernels 5, 6
Call CUBLAS/CUSPARSE/MAGMA library routines
Kernel 2: ABt = C
Each thread block (zone) does an ABt = C multiplication
A and B are not big and generally fit in shared and constant memory on Fermi
A varies per thread block and is updated in each iteration; B is read-only and the same for every block
Matrices are stored in column-major order
Accesses to global memory are coalesced
Technical details: Memory management
CUDA code can be integrated into the previous C++ code very well:
Malloc GPU memory in the C++ constructor
Free GPU memory in the C++ destructor
Add a new method, CUDA_CornerForce
Constructor and Destructor
// in .hpp files
// declare the variables
double *d_vec;
// in .cu files
HydroState::CUDA_Constructor//called by HydroState()
{
// malloc variables on the GPU and copy initialized,
// read-only data from CPU to GPU
cudaMalloc(&d_vec);
cudaMemcpy(ToDevice);
}
HydroState::CUDA_Destructor//called by ~HydroState()
{
cudaFree(d_vec);
}
Corner Force Method
// still in .cu files,
HydroState::CUDA_CornerForce
{
// copy updated hydro states (v, e, x) to the GPU
cudaMemcpy(ToDevice);
// compute on GPU
kernel<<< , >>>(d_vec, ...);
// copy outputs to CPU
cudaMemcpy(ToHost);
}
Technical details: GPU Class
Porting code, not developing algorithms
Maximize reuse of the previous C++ code to avoid developing new code
In BLAST, almost everything is a class
CUDA 4.0 supports C++ classes in device code (although not fully)
Class on CPU
class Vector
{
private:
int size;
double *data;
public:
Vector(int a)
double *GetData()
{return data;}
void Operation()
Vector()
}
Class on GPU
class Vector_GPU
{
private:
int size;
double *data;
public:
__device__ Vector_GPU(int a)
__device__ double *GetData()
{return data;}
__device__ void Operation()
__device__ Vector_GPU()
}
An example of using class in GPU kernel
#define num_threads 32
#define vector_length 4
__global__ void kernel(double* d_data)
{
Vector_GPU vec(vector_length);
// threadIdx.x is the thread's id, from 0-31; each thread grabs its own data via a pointer
vec.GetData() = d_data + threadIdx.x * vector_length;
vec.Operation();
}
int main()
{
double *d_data;
// malloc a space in device memory
cudaMalloc(&d_data, sizeof(double) * num_threads * vector_length);
// in the kernel, each thread grabs its own portion of data and executes in parallel
kernel<<<1, num_threads>>>(d_data);
cudaFree(d_data);
return 1;
}
Technical details: Transfer of arguments
The hydro state e, v, x (in the form of class objects) needs to be transferred into the CUDA kernel
We cannot transfer C++ class objects directly like scalars
Grab the data (pointer) from the class objects and store it in double arrays or structs (see the CUDA Programming Guide 4.0)
define a struct
typedef struct
{
int height;
int width;
int chunk;
int size;
double *data;
} d_Matrix;
transfer struct as argument
void configureMatrix(d_Matrix &dm)
{
// malloc memory
// grab data from C++ class objects and copy to dm.data
// initialize height, width, chunk, size
}
int main()
{
d_Matrix dm;
configureMatrix(dm, ...);
kernel<<<..., ...>>>(dm);
}
Optimization
Use the CUDA profiler to identify the hot spots (profiler screenshots shown later)
Constant memory stores static, read-only coefficients such as basis functions and weight parameters, etc. (in kernels 1-4)
Use shared memory (to store A) and constant memory (to store B) to accelerate CUDA kernel 2, ABt = C (memory references are O(n^3)), as hinted by the profiler
Implement a PCG solver that uses CUBLAS/CUSPARSE routines instead of coding it myself
Hand-code the eigenvalue/eigenvector and SVD routines (by Veselin) for the very small matrices (2*2 in 2D, 3*3 in 3D), specific to this application, in kernel 1, since LAPACK (as used in the C++ BLAST) cannot be called on the GPU
Restriction
Code was developed on a Tesla C1060 (my local PC); tests run on Fermi
Tesla (compute capability 1.3): does not support dynamic malloc and free inside a kernel; memory has to be pre-allocated outside the kernel, even for temporary variables
Fermi (compute capability 2.0): supports dynamic malloc and free inside a kernel
Tesla does not support virtual functions, so some code had to be rewritten
The PCG solver and kernel 6 only work on one processor at present
Fairly tuned, but not fully optimized for Fermi
Kernel 1 (also the most complicated and expensive one) reads global memory uncoalesced: each thread (one quadrature point) accesses a small matrix (2*2 or 3*3), although consecutively
Tests
q2q1 2D triple-pt: 9 * 2 velocity unknown DOFs (degrees of freedom), 4 energy DOFs (zones), 16 quadrature points per zone
q3q2 2D triple-pt: 16 * 2 velocity unknown DOFs, 9 energy DOFs (zones), 36 quadrature points per zone
q2q1 3D Sedov wave: 27 * 3 velocity unknown DOFs, 8 energy DOFs (zones), 64 quadrature points per zone
Performance
2550 lines of CUDA code + 2 months
Tesla C1060
CPU: Xeon E5520 at
2.27GHz
Tesla C2050
CPU: Xeon Westmere-EP
X5660 (on Edge)
Quadro 5000
CPU: Xeon Westmere-EP
X5660
Performance
On Edge: 2D q2q1 triple-pt problem, MPI+CUDA. (Figure: results with 4 MPI ranks vs. 4 MPI ranks + CUDA.)
CPU vs GPU results
2D q3q2 triple-pt on CPU vs GPU
total energy = kinetic energy + internal energy
CPU energy change: -4.04832e-12 with iteration step 38910 at t=2.5
GPU energy change: 2.99486e-09 with iteration step 38748 at t=2.5 (below, 3x on C2050)
CPU vs GPU results
3D q2q1 Sedov on CPU
total energy = kinetic energy + internal energy
CPU: energy change: +1.89570e-13 with iteration step 848 at t=0.3
GPU: energy change: +1.26013e-11 with iteration step 848 at t=0.3 (right, 4x on C2050)
CUDA Profiler
Profiler of 2D q3q2 triple-pt
Conclusion
The GPU is well suited for computationally heavy kernels
The floating-point work must outweigh the penalty of transferring data between CPU and GPU
Optimization is a process of discovery
The profiler does help to identify the bottleneck
Use existing libraries instead of coding your own (not necessarily the best, but the most stable)
Thanks
|
Optimizing CUDA kernel with atomics
Please help,
I have been working on optimizing a CUDA kernel that utilizes atomic adds and comparing performance. I have been able to achieve an approximately 2x speedup, but once the size of the input goes beyond 1025 (e.g., 1026), I get the following output:
i (0): original (0.000000,0.000000), modified (16416.000000,8208.000000)
Atomic and Optimized kernels do not match.
This is incorrect, as the modified/optimized CUDA kernel should match the output of the original atomic-add CUDA kernel, just with faster execution. I am hoping it is just something simple I am doing wrong. Can anyone help?
The code is posted below, and I am using CUDA 12.2 (driver 535.54.03) with A100-SXM4-80GB device:
#include <cuda.h>
#include <chrono>
#include <iostream>
#include <stdio.h>
#include <string>
#include <sstream>
#include <vector>
#include <stdexcept>
#include <cstdlib>
#include <cmath>
// for easy gpu error checking
#define GPU_ERROR_CHECK(ans) do{gpuAssert((ans),__FILE__,__LINE__);}while(0)
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
printf("\nCUDA KERNEL ERROR: CUDA Kernel reports error: %s\n",cudaGetErrorString(code));
if (abort) exit(code);
}
}
__forceinline__ __host__ __device__ float dot(float3 a, float3 b)
{
return a.x * b.x + a.y * b.y + a.z * b.z;
}
__forceinline__ __host__ __device__ float length(float3 v)
{
return sqrtf(dot(v, v));
}
__forceinline__ __device__ float2 myKernel(float val) {
return make_float2(val, val / 2);
}
/**
* @brief Works for values less than 1026 samples, that is up to and including 1025 samples
*/
__global__ void optimized_org_kernel(const float3 * __restrict__ pos_a, const float3 * __restrict__ pos_b,
const uint32_t input_size, const uint32_t output_size,
float2 * __restrict__ result, const int region,
const uint32_t first_idx_x, const uint32_t last_idx_x)
{
// Calculate thread indices
uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x;
uint32_t stride_x = blockDim.x * gridDim.x;
uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y;
uint32_t stride_y = blockDim.y * gridDim.y;
float3 distance3;
float distance;
uint32_t output_start, output_end;
// Local accumulation variables to reduce the number of atomic operations
float2 local_accum = make_float2(0.0f, 0.0f);
for (uint32_t x = thridx_x; x < last_idx_x; x += stride_x) {
// Calculate the distance between points
distance3.x = pos_a[x].x - pos_b[x].x;
distance3.y = pos_a[x].y - pos_b[x].y;
distance3.z = pos_a[x].z - pos_b[x].z;
distance = length(distance3);
// Determine the output range for this thread
output_start = __fdiv_rz(output_size, 2) + __fdiv_rz(distance, output_size) - region;
output_end = output_start + region;
// Clamp the values to ensure they stay within bounds
output_start = max(0u, output_start);
output_end = min(output_end, output_size);
for (uint32_t y = thridx_y; y < output_size; y += stride_y) {
// Only accumulate within the valid range
if (y >= output_start && y < output_end) {
float2 lval = myKernel(1.0f);
local_accum.x += lval.x;
local_accum.y += lval.y;
}
}
}
// Write back the accumulated values using atomic operations
if (local_accum.x != 0.0f || local_accum.y != 0.0f) {
atomicAdd(&result[thridx_y].x, local_accum.x);
atomicAdd(&result[thridx_y].y, local_accum.y);
}
}
__global__ void org_kernel(const float3 * pos_a, const float3 * pos_b,
const uint32_t input_size, const uint32_t output_size,
float2 * result, const int region,
const uint32_t first_idx_x, const uint32_t last_idx_x)
{
uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x;
uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y;
uint32_t stride_x = blockDim.x * gridDim.x;
uint32_t stride_y = blockDim.y * gridDim.y;
float3 distance3 = make_float3(0.0f, 0.0f, 0.0f);
float distance = 0;
uint32_t output_start, output_end;
for(uint32_t x = thridx_x; x < last_idx_x; x += stride_x){
// distance calcs
distance3.x = pos_a[x].x - pos_b[x].x;
distance3.y = pos_a[x].y - pos_b[x].y;
distance3.z = pos_a[x].z - pos_b[x].z;
distance = length(distance3);
output_start = __fdiv_rz(output_size, 2) + __fdiv_rz(distance, output_size) - region;
output_end = output_start + region;
for(uint32_t y = thridx_y; y < output_size; y += stride_y){
if((y < output_end) && (y >= output_start)){
float2 lval = myKernel(1.0f);
atomicAdd(&result[y].x, lval.x);
atomicAdd(&result[y].y, lval.y);
}
}
}
}
bool eval_arrays_equal(float2 * d_org, float2 * d_mod, uint32_t n) {
if (d_org == nullptr || d_mod == nullptr) {
throw std::invalid_argument("Arrays are NULL.");
}
if (n < 1) {
throw std::invalid_argument("Invalid array length, less than 1.");
}
float2 * h_org;
float2 * h_mod;
size_t sz = n * sizeof(float2);
h_org = (float2*)malloc(sz);
h_mod = (float2*)malloc(sz);
GPU_ERROR_CHECK(cudaMemcpy(h_org, d_org, sz, cudaMemcpyDeviceToHost));
GPU_ERROR_CHECK(cudaMemcpy(h_mod, d_mod, sz, cudaMemcpyDeviceToHost));
GPU_ERROR_CHECK(cudaDeviceSynchronize());
for (uint32_t i = 0; i < n; ++i) {
if (h_org[i].x != h_mod[i].x || h_org[i].y != h_mod[i].y) {
printf("\ti (%i): original (%f,%f), modified (%f,%f)\n", i, h_org[i].x, h_org[i].y, h_mod[i].x, h_mod[i].y);
return false;
}
}
free(h_org);
free(h_mod);
// Every element is equal
return true;
}
void printDeviceProperties() {
int32_t device;
cudaError_t error = cudaGetDevice(&device);
if (error != cudaSuccess) {
std::cerr << "Failed to get current device: " << cudaGetErrorString(error) << std::endl;
return;
}
cudaDeviceProp deviceProp;
error = cudaGetDeviceProperties(&deviceProp, device);
if (error != cudaSuccess) {
std::cerr << "Failed to get device properties: " << cudaGetErrorString(error) << std::endl;
return;
}
std::cout << "Device " << device << ": \"" << deviceProp.name << "\"" << std::endl;
std::cout << " CUDA Capability: " << deviceProp.major << "." << deviceProp.minor << std::endl;
std::cout << " Total Global Memory: " << deviceProp.totalGlobalMem / (1024 * 1024) << " MB" << std::endl;
std::cout << " Shared Memory per Block: " << deviceProp.sharedMemPerBlock / 1024 << " KB" << std::endl;
std::cout << " Registers per Block: " << deviceProp.regsPerBlock << std::endl;
std::cout << " Warp Size: " << deviceProp.warpSize << std::endl;
std::cout << " Max Threads per Block: " << deviceProp.maxThreadsPerBlock << std::endl;
std::cout << " Max Threads Dim: [" << deviceProp.maxThreadsDim[0] << ", "
<< deviceProp.maxThreadsDim[1] << ", " << deviceProp.maxThreadsDim[2] << "]" << std::endl;
std::cout << " Max Grid Size: [" << deviceProp.maxGridSize[0] << ", "
<< deviceProp.maxGridSize[1] << ", " << deviceProp.maxGridSize[2] << "]" << std::endl;
std::cout << " Clock Rate: " << deviceProp.clockRate / 1000 << " MHz" << std::endl;
std::cout << " Total Constant Memory: " << deviceProp.totalConstMem / 1024 << " KB" << std::endl;
std::cout << " Multiprocessor Count: " << deviceProp.multiProcessorCount << std::endl;
std::cout << " Compute Mode: " << deviceProp.computeMode << std::endl;
}
int main(int argc, char * argv[]) {
if (argc != 2) {
fprintf(stderr, "\nPass the number of array elements via command line as follows:\n");
fprintf(stderr, "./xTest <num_elems>\n\n");
return EXIT_FAILURE;
}
// Dimensions
const uint32_t BLOCK_WIDTH = 512;
dim3 nblks(BLOCK_WIDTH,1,1);
dim3 nthreads(1,BLOCK_WIDTH,1);
// Retrieve command-line argument
uint32_t n_values = static_cast<uint32_t>(std::stoi(argv[1]));
uint32_t region = 3;
uint32_t n_float3s = n_values;
uint32_t float3_sz = n_float3s * sizeof(float3);
uint32_t output_sz = n_values * sizeof(float2);
// Allocate host & device side
float2 *d_out_org;
float2 *d_out_mod;
GPU_ERROR_CHECK(cudaMalloc(&d_out_org, output_sz));
GPU_ERROR_CHECK(cudaMalloc(&d_out_mod, output_sz));
GPU_ERROR_CHECK(cudaMemset(d_out_org, 0, output_sz));
GPU_ERROR_CHECK(cudaMemset(d_out_mod, 0, output_sz));
// Float3s
float3 *pos_a, *pos_b;
float3 *d_pos_a, *d_pos_b;
pos_a = (float3*)malloc(float3_sz);
pos_b = (float3*)malloc(float3_sz);
for(size_t p = 0; p < n_float3s; ++p){
pos_a[p] = make_float3(1,1,1);
pos_b[p] = make_float3(0.1,0.1,0.1);
}
GPU_ERROR_CHECK(cudaMalloc(&d_pos_a, float3_sz));
GPU_ERROR_CHECK(cudaMalloc(&d_pos_b, float3_sz));
GPU_ERROR_CHECK(cudaMemcpy(d_pos_a, pos_a, float3_sz, cudaMemcpyHostToDevice));
GPU_ERROR_CHECK(cudaMemcpy(d_pos_b, pos_b, float3_sz, cudaMemcpyHostToDevice));
GPU_ERROR_CHECK(cudaDeviceSynchronize());
float total_time_org = 0.0f;
float total_time_mod = 0.0f;
uint32_t first_idx_x = 0;
uint32_t last_idx_x = n_values;
const uint32_t n_passes = 16;
for (uint32_t pass = 0; pass < n_passes; ++pass) {
auto start = std::chrono::high_resolution_clock::now();
// Original atomic add kernel
org_kernel<<<nblks,nthreads,0,0>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_org, region, first_idx_x, last_idx_x);
GPU_ERROR_CHECK(cudaDeviceSynchronize());
auto stop = std::chrono::high_resolution_clock::now();
total_time_org += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count());
start = std::chrono::high_resolution_clock::now();
// Optimized atomic add kernel
optimized_org_kernel<<<nblks,nthreads,0,0>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_mod, region, first_idx_x, last_idx_x);
GPU_ERROR_CHECK(cudaDeviceSynchronize());
stop = std::chrono::high_resolution_clock::now();
total_time_mod += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count());
}
// Check for fidelity
if (eval_arrays_equal(d_out_org, d_out_mod, n_values)) {
printf("\nFidelity achieved.\n");
printf("\tTotal number of passes: %d\n", n_passes);
float org_time = (total_time_org / n_passes);
float mod_time = (total_time_mod / n_passes);
printf("\t[ORIGINAL] Time: %8.9f (us.)\n", org_time);
printf("\t[MODIFIED] Time: %8.9f (us.)\n", mod_time);
printf("\tSpeedup Factor: %8.9f\n", (org_time / mod_time));
} else {
printf("\nAtomic and Optimized kernels do not match.\n");
return EXIT_FAILURE;
}
GPU_ERROR_CHECK(cudaPeekAtLastError());
GPU_ERROR_CHECK(cudaDeviceSynchronize());
GPU_ERROR_CHECK(cudaDeviceReset());
return EXIT_SUCCESS;
}
Thanks to anyone who can point out an error or point me in a direction that might fix this issue.
The original kernel adds a number to multiple columns. The modified kernel sums up multiple numbers and adds it to a single column. How can this ever be equivalent?
Did you investigate all suggestions already given in the original thread here ?
Hi @striker159
Thank you for the reply.
Yes, I have been looking into the suggestions. I don’t think shared memory is useful in my case: I was able to get it working, but during execution the performance was not actually much better, maybe 3%. So I wanted to look in another direction, hopefully without shared memory - just lowering the number of necessary atomicAdds if possible.
I think I just solved the problem, by using warp level primitives and only calling atomicAdd once per block.
The performance is somewhere between 1.5x to 2.8x faster with the mod_kernel as compared to the org_kernel. I have posted the code below for anyone who may like to use it for their own project(s).
Thanks to all for assistance pointing me in some good direction(s) for optimizing this CUDA kernel.
#include <cuda.h>
#include <cuda_runtime.h>
#include <iostream>
#include <iomanip>
#include <stdexcept>
#include <chrono>
#include <cstdlib>
// for easy gpu error checking
#define GPU_ERROR_CHECK(ans) do{gpuAssert((ans),__FILE__,__LINE__);}while(0)
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
printf("\nCUDA KERNEL ERROR: CUDA Kernel reports error: %s\n",cudaGetErrorString(code));
if (abort) exit(code);
}
}
/**
* @brief CUDA DEVICE kernels executes scalar dot product.
*
* @param a The first float3.
* @param b The second float3.
* @return floating-point value that is the scalar dot product.
*/
__forceinline__ __host__ __device__ float dot(float3 a, float3 b) {
return a.x * b.x + a.y * b.y + a.z * b.z;
}
/**
* @brief CUDA DEVICE kernel executes Euclidean length of input float3.
*
* @param v The float3 whose x, y, z components length is being computed from.
* @return floating-point value that is the Euclidean length of input float3.
*/
__forceinline__ __host__ __device__ float length(float3 v) {
return sqrtf(dot(v, v));
}
/**
* @brief CUDA DEVICE kernel is a toy operation for demonstration purposes, whereby the
* input value is modified and returned as a float2 datatype.
*
* @param val The input value being modified.
* @return float2 version of input float with modifications applied.
*/
__forceinline__ __host__ __device__ float2 myKernel(float val) {
return make_float2(val, val / 2);
}
/**
* @brief CUDA DEVICE kernel that executes a warp-level summation of input float2 value.
* @details Allows data to be summed without the use of extra memory space, that is, shared
* directly across threads in a single warp (32-threads).
*
* @param val The float2 value being summed across warp.
* @return float2 value summed across threads in a single warp.
*/
__inline__ __device__ float2 warpReduceSum(float2 val) {
for (int offset = warpSize / 2; offset > 0; offset /= 2) {
val.x += __shfl_down_sync(0xffffffff, val.x, offset);
val.y += __shfl_down_sync(0xffffffff, val.y, offset);
}
return val;
}
/**
* @brief CUDA DEVICE kernel that calls CUDA intrinsic @ref atomicAdd only on the
* first thread and warp in the block.
*
* @param address The resulting global address where the value is added and stored.
* @param val The value being added to the global address.
*/
__inline__ __device__ void atomicAddWarp(float2 *address, float2 val) {
if (threadIdx.x % warpSize == 0) {
atomicAdd(&address->x, val.x);
atomicAdd(&address->y, val.y);
}
}
__global__ void org_kernel(const float3 * pos_a, const float3 * pos_b,
const uint32_t input_size, const uint32_t output_size,
float2 * result, const int32_t region,
const uint32_t first_idx_x, const uint32_t last_idx_x) {
// Compute indices
uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x;
uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y;
uint32_t stride_x = blockDim.x * gridDim.x;
uint32_t stride_y = blockDim.y * gridDim.y;
float3 distance3 = make_float3(0.0f, 0.0f, 0.0f);
float distance = 0;
uint32_t output_start, output_end;
for(uint32_t x = thridx_x; x < last_idx_x; x += stride_x){
// distance calcs
distance3.x = pos_a[x].x - pos_b[x].x;
distance3.y = pos_a[x].y - pos_b[x].y;
distance3.z = pos_a[x].z - pos_b[x].z;
distance = length(distance3);
output_start = __fdiv_rz(output_size, 2) + __fdiv_rz(distance, output_size) - region;
output_end = output_start + region;
for(uint32_t y = thridx_y; y < output_size; y += stride_y){
if((y < output_end) && (y >= output_start)) {
float2 lval = myKernel(1.0f);
atomicAdd(&result[y].x, lval.x);
atomicAdd(&result[y].y, lval.y);
}
}
}
}
__global__ void mod_kernel(const float3 * __restrict__ pos_a, const float3 * __restrict__ pos_b,
const uint32_t input_size, const uint32_t output_size,
float2 * __restrict__ result, const int32_t region,
const uint32_t first_idx_x, const uint32_t last_idx_x) {
// Compute indices
uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x;
uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y;
uint32_t stride_x = blockDim.x * gridDim.x;
uint32_t stride_y = blockDim.y * gridDim.y;
if (thridx_x >= last_idx_x) return;
float3 distance3;
float distance;
uint32_t output_start, output_end;
for (uint32_t x = thridx_x; x < last_idx_x; x += stride_x) {
// Pre-calculate distance components
distance3.x = pos_a[x].x - pos_b[x].x;
distance3.y = pos_a[x].y - pos_b[x].y;
distance3.z = pos_a[x].z - pos_b[x].z;
// Compute the distance and the output indices range
distance = sqrtf(distance3.x * distance3.x +
distance3.y * distance3.y +
distance3.z * distance3.z);
output_start = __fdividef(output_size, 2) + __fdividef(distance, output_size) - region;
output_end = output_start + region;
// Restrict output range to valid indices
output_start = max(output_start, 0U);
output_end = min(output_end, output_size);
for (uint32_t y = thridx_y; y < output_size; y += stride_y) {
if (y >= output_start && y < output_end) {
float2 lval = myKernel(1.0f);
// Execute warp-level primitives then only call atomic add once per block
float2 warp_sum = warpReduceSum(lval);
atomicAddWarp(&result[y], warp_sum);
}
}
}
}
bool eval_arrays(const float2 * d_arr1, const float2 * d_arr2, const uint32_t n) {
if (d_arr1 == nullptr || d_arr2 == nullptr) {
throw std::invalid_argument("Null array(s).");
}
if (n < 1) {
throw std::invalid_argument("Invalid array length.");
}
float2 * h_arr1 = nullptr;
float2 * h_arr2 = nullptr;
h_arr1 = new float2[n];
h_arr2 = new float2[n];
GPU_ERROR_CHECK(cudaMemcpy(h_arr1, d_arr1, n * sizeof(float2), cudaMemcpyDeviceToHost));
GPU_ERROR_CHECK(cudaMemcpy(h_arr2, d_arr2, n * sizeof(float2), cudaMemcpyDeviceToHost));
for (uint32_t i = 0; i < n; ++i) {
if (h_arr1[i].x != h_arr2[i].x || h_arr1[i].y != h_arr2[i].y) {
std::cout << "Index: " << i << " Array 1 (" << h_arr1[i].x << "," << h_arr1[i].y << "), Array 2 ("
<< h_arr2[i].x << "," << h_arr2[i].y << ")\n";
delete [] h_arr1;
delete [] h_arr2;
return false;
}
}
delete [] h_arr1;
delete [] h_arr2;
// Every element in both arrays was the same
return true;
}
int main(int argc, char * argv[]) {
if (argc != 2) {
std::cerr << "\nPass the number of array elements via command line as follows:\n";
std::cerr << "./xOptimize <num_elems>\n\n";
return EXIT_FAILURE;
}
// Get number of array elements from command line
int n_values = std::stoi(argv[1]);
if (n_values < 1) {
std::cerr << "Invalid number of array elements: " << n_values << std::endl;
return EXIT_FAILURE;
}
// Defined sizes
const uint32_t BLOCK_WIDTH = 512;
size_t float3_sz = n_values * sizeof(float3);
size_t output_sz = n_values * sizeof(float2);
// HOST-side positions
float3 *pos_a = nullptr;
float3 *pos_b = nullptr;
pos_a = new float3[n_values];
pos_b = new float3[n_values];
for (int i = 0; i < n_values; ++i) {
pos_a[i] = make_float3(i, i + 1, i + 2);
pos_b[i] = make_float3(i + 0.5f, i + 1.5f, i + 2.5f);
}
// DEVICE-side positions
float3 *d_pos_a = nullptr;
float3 *d_pos_b = nullptr;
GPU_ERROR_CHECK(cudaMalloc(&d_pos_a, float3_sz));
GPU_ERROR_CHECK(cudaMalloc(&d_pos_b, float3_sz));
GPU_ERROR_CHECK(cudaMemcpy(d_pos_a, pos_a, float3_sz, cudaMemcpyHostToDevice));
GPU_ERROR_CHECK(cudaMemcpy(d_pos_b, pos_b, float3_sz, cudaMemcpyHostToDevice));
// DEVICE-side outputs
float2 *d_out_org = nullptr;
float2 *d_out_mod = nullptr;
GPU_ERROR_CHECK(cudaMalloc(&d_out_org, output_sz));
GPU_ERROR_CHECK(cudaMalloc(&d_out_mod, output_sz));
GPU_ERROR_CHECK(cudaMemset(d_out_org, 0, output_sz));
GPU_ERROR_CHECK(cudaMemset(d_out_mod, 0, output_sz));
float total_time_org = 0.0f;
float total_time_mod = 0.0f;
uint32_t first_idx_x = 0;
uint32_t last_idx_x = n_values;
int region = 3;
dim3 nthreads(BLOCK_WIDTH, 1, 1);
dim3 nblocks(1, BLOCK_WIDTH, 1);
const uint32_t n_passes = 16;
for (uint32_t pass = 0; pass < n_passes; ++pass) {
auto start = std::chrono::high_resolution_clock::now();
// Original atomic kernel
org_kernel<<<nblocks, nthreads>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_org, region, first_idx_x, last_idx_x);
GPU_ERROR_CHECK(cudaDeviceSynchronize());
auto stop = std::chrono::high_resolution_clock::now();
total_time_org += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count());
start = std::chrono::high_resolution_clock::now();
// Modified atomic kernel
mod_kernel<<<nblocks, nthreads>>>(d_pos_a, d_pos_b, n_values, n_values, d_out_mod, region, first_idx_x, last_idx_x);
GPU_ERROR_CHECK(cudaDeviceSynchronize());
stop = std::chrono::high_resolution_clock::now();
total_time_mod += static_cast<float>(std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count());
}
std::cout << std::fixed << std::setprecision(4);
total_time_org /= n_passes;
total_time_mod /= n_passes;
std::cout << "\nTotal number of passes: " << n_passes << std::endl;
std::cout << "Original CUDA Kernel Time: " << total_time_org << " (us.)\n";
std::cout << "Modified CUDA Kernel Time: " << total_time_mod << " (us.)\n";
std::cout << "Speedup factor: " << (total_time_org / total_time_mod) << std::endl;
// Check fidelity
if (eval_arrays(d_out_org, d_out_mod, n_values)) {
std::cout << "\nFidelity achieved.\n\n";
}else {
std::cout << "\nFidelity not achieved.\n\n";
}
return EXIT_SUCCESS;
}
When trying to improve a kernel, your main focus should be correctness, not speed. If you don’t want correct results, you could remove the kernel which will give you the greatest speedup.
As already pointed out by others in the first thread, your modified kernels are not equivalent to the original kernel.
This is your original code
for(uint32_t y = thridx_y; y < output_size; y += stride_y){
if((y < output_end) && (y >= output_start)) {
float2 lval = myKernel(1.0f);
atomicAdd(&result[y].x, lval.x);
}
}
Let’s plug in some numbers for simplicity.
for(uint32_t y = 0; y < 5; y += 1){
if((y < 5) && (y >= 0)) {
float2 lval = myKernel(1.0f);
atomicAdd(&result[y].x, lval.x);
}
}
This means if result is initialized with [0,0,0,0,0], it will be [1,1,1,1,1] after the loop.
But if you accumulate the values for multiple y in any way, and then write to output, you will get different results.
local_accum = 0
for(uint32_t y = 0; y < 5; y += 1){
if((y < 5) && (y >= 0)) {
float2 lval = myKernel(1.0f);
local_accum.x += lval.x;
}
}
atomicAdd(&result[0].x, local_accum.x);
This will output [5,0,0,0,0], not [1,1,1,1,1]
The simplest solution to reduce the number of atomics in the original kernel is to only perform atomicAdd(result.x, 1). Then afterwards use a second kernel which computes result.y = result.x / 2
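For reference, here is a minimal sketch of that two-kernel idea (the kernel name derive_y and the launch configuration are made up for illustration; only the changed pieces are shown):
// Inside the main kernel, add only to the .x component:
//     if ((y < output_end) && (y >= output_start)) {
//         atomicAdd(&result[y].x, 1.0f);   // myKernel(1.0f).x == 1.0f
//     }
// Afterwards, a trivial element-wise kernel derives the .y component:
__global__ void derive_y(float2 * result, const uint32_t output_size) {
    uint32_t i = threadIdx.x + blockDim.x * blockIdx.x;
    if (i < output_size) {
        result[i].y = result[i].x / 2.0f;    // myKernel(val) returns (val, val / 2)
    }
}
// Launched e.g. as: derive_y<<<(output_size + 255) / 256, 256>>>(d_out, output_size);
This halves the number of atomics in the hot loop and replaces them with one cheap element-wise pass at the end.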
Hi @striker159,
Thank you for the reply.
Yes, my first goal has always been to ensure that the modified code matches the original code, and only then to speed it up, if possible. That is why this has been such a problem: speeding it up is not too difficult, but speeding it up while maintaining consistency with the original results is the real issue.
I understand that the following code would give incorrect results when comparing with the original:
local_accum = 0
for(uint32_t y = 0; y < 5; y += 1){
if((y < 5) && (y >= 0)) {
float2 lval = myKernel(1.0f);
local_accum.x += lval.x;
}
}
atomicAdd(&result[0].x, local_accum.x);
That is why I have since abandoned that idea.
However, is there any reason that the following snippet of code from above wouldn’t work?
...
__inline__ __device__ float2 warpReduceSum(float2 val) {
for (int offset = warpSize / 2; offset > 0; offset /= 2) {
val.x += __shfl_down_sync(0xffffffff, val.x, offset);
val.y += __shfl_down_sync(0xffffffff, val.y, offset);
}
return val;
}
__inline__ __device__ void atomicAddWarp(float2 *address, float2 val) {
if (threadIdx.x % warpSize == 0) {
atomicAdd(&address->x, val.x);
atomicAdd(&address->y, val.y);
}
}
__global__ void mod_kernel(const float3 * __restrict__ pos_a, const float3 * __restrict__ pos_b,
const uint32_t input_size, const uint32_t output_size,
float2 * __restrict__ result, const int32_t region,
const uint32_t first_idx_x, const uint32_t last_idx_x) {
// Compute indices
uint32_t thridx_x = threadIdx.x + blockDim.x * blockIdx.x + first_idx_x;
uint32_t thridx_y = threadIdx.y + blockDim.y * blockIdx.y;
uint32_t stride_x = blockDim.x * gridDim.x;
uint32_t stride_y = blockDim.y * gridDim.y;
if (thridx_x >= last_idx_x) return;
float3 distance3;
float distance;
uint32_t output_start, output_end;
for (uint32_t x = thridx_x; x < last_idx_x; x += stride_x) {
// Pre-calculate distance components
distance3.x = pos_a[x].x - pos_b[x].x;
distance3.y = pos_a[x].y - pos_b[x].y;
distance3.z = pos_a[x].z - pos_b[x].z;
// Compute the distance and the output indices range
distance = sqrtf(distance3.x * distance3.x +
distance3.y * distance3.y +
distance3.z * distance3.z);
output_start = __fdividef(output_size, 2) + __fdividef(distance, output_size) - region;
output_end = output_start + region;
// Restrict output range to valid indices
output_start = max(output_start, 0U);
output_end = min(output_end, output_size);
for (uint32_t y = thridx_y; y < output_size; y += stride_y) {
if (y >= output_start && y < output_end) {
float2 lval = myKernel(1.0f);
// Execute warp-level primitives then only call atomic add once per block
float2 warp_sum = warpReduceSum(lval);
atomicAddWarp(&result[y], warp_sum);
}
}
}
}
...
Thanks again for the help.
First of all, there is no guarantee that all threads in the warp reach the reduction code.
That aside, the reduction approach has the same problem. Values for multiple y are combined.
For example, in thread0 y=0, in thread1 y=1. Then the warpsum will be 2, and both result[0] and result[1] will be set to 2. But the correct result would be result[0] = 1 and result[1] = 1.
Okay, but wouldn’t calling one kernel to operate on the x component and then another kernel to operate on the y component end up costing more time than just calling the original kernel once? Unless the input is very large, which might mitigate the overhead.
I guess I am trying to understand if maybe the original kernel is as fast as it can be given the parameters of the problem itself.
After the first x calculations, store the intermediate results in shared memory, use syncthreads() to synchronize block-wise, then use one or some of the warps for y processing within the block.
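For anyone following along, a rough sketch of that pattern (block-private bins accumulated in shared memory, flushed to global memory once per block) might look like the following. It assumes output_size fits in shared memory (e.g. 1024 float2 values = 8 KB) and is only an illustration of the structure, not a drop-in replacement for the kernels above:
__global__ void privatized_kernel(const float3 * __restrict__ pos_a, const float3 * __restrict__ pos_b,
                                  const uint32_t output_size, float2 * __restrict__ result,
                                  const int region, const uint32_t last_idx_x)
{
    // Block-private copy of the output bins, sized to output_size * sizeof(float2)
    // via the dynamic shared memory argument of the launch.
    extern __shared__ float2 s_result[];
    const uint32_t t    = threadIdx.y * blockDim.x + threadIdx.x;  // flattened thread id
    const uint32_t nthr = blockDim.x * blockDim.y;
    for (uint32_t i = t; i < output_size; i += nthr)
        s_result[i] = make_float2(0.0f, 0.0f);
    __syncthreads();
    // ... the same x/y loops as in org_kernel, but accumulating with
    //     atomicAdd(&s_result[y].x, lval.x) and atomicAdd(&s_result[y].y, lval.y),
    //     i.e. atomics on (fast) shared memory instead of global memory ...
    __syncthreads();
    // One global atomic per bin per block instead of one per contribution.
    for (uint32_t i = t; i < output_size; i += nthr) {
        if (s_result[i].x != 0.0f || s_result[i].y != 0.0f) {
            atomicAdd(&result[i].x, s_result[i].x);
            atomicAdd(&result[i].y, s_result[i].y);
        }
    }
}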
Hi @Curefab,
Thank you for the reply. Using shared memory is a good idea, and I have tried various versions of this methodology. While the results are valid in that they match the original values, the performance is really no better, or the improvement is negligible. This is why I am beginning to think that maybe the original kernel is as fast as it can get.
This is why I am beginning to think that maybe the original kernel is as fast as it can get.
That is quite likely.
You can make some further tests with changed parameters or theoretical calculations of the maximum speed (e.g. how many bytes you read/write vs. the bandwidth of device memory), combined with Nsight Compute, to support this assumption.
|
Optimizing CUDA.jl performance for small array operations
Hi,
I’m trying to get better GPU performance with CUDA.jl for small array operations.
So, I’ve started to port SymbolicRegression.jl to the GPU using CUDA.jl. It seems I’ve gotten the main evaluation part of the code to use the corresponding CUDA operations (which was REALLY straightforward by the way, great job!) , but it’s slower than I would like.
Part of the problem is that during symbolic regression, you typically work on small amounts of data; maybe a matrix of size 5x1000. Without some clever fusion of tree evaluations, this means one needs to worry about the time it takes to launch kernels which makes things tricky.
As a MWE, consider the following code:
using CUDA, BenchmarkTools, Statistics
for N in [1000, 10000]
c1 = CUDA.ones(Float32, N)
c2 = ones(Float32, N)
res1 = @benchmark CUDA.@sync cos.($c1);
res2 = @benchmark cos.($c2);
println("Size $N: CUDA=$(median(res1.times)); CPU=$(median(res2.times))")
end
On my v100, this gives me (in microseconds):
Size 1000: CUDA=26021.0; CPU=9086.0
Size 10000: CUDA=24419.5; CPU=87287.0
The GPU scales so well that the array size is negligible. But the baseline time it takes to launch a kernel means I can’t exploit the full power of the GPU for evaluating trees on these small arrays.
Is there something I can do to improve the kernel launch speed here?
I also tried:
res1 = @benchmark CUDA.@sync blocking=false cos.($c1);
which was mentioned in the CUDA.jl docs as better for profiling short executions. This lowers the evaluation time to ~11000, but unfortunately this is still not enough.
Thanks for any advice!
Cheers,
Miles
This is the order I’d probably try things in:
I’m not at all familiar with the algorithms, but could the fusion be easier than it appears? For one, you can always add “batch” dimensions easily like:
c1a = CUDA.ones(Float32, N)
c1b = CUDA.ones(Float32, N)
c1 = cat(c1a, c1b, dims=2)
res1a, res2a = eachslice(cos.(c1), dims=2)
Also, you can “lazily” create broadcasted objects then evaluate them with a single “fused” kernel call like:
using Base: broadcasted, materialize
bc1 = broadcasted(cos, c1)
bc2 = broadcasted(sin, c2)
bc = broadcasted(+, bc1, bc2)
res = materialize(bc) # one kernel call, w/o temp arrays
You could use multiple threads. I think you may need to be on CUDA#master (might need to do some reading in Automatic task-based concurrency using local streams by maleadt · Pull Request #662 · JuliaGPU/CUDA.jl · GitHub and links therein too), but in the latest versions each thread uses its own stream, which can then execute concurrently. In general it’d be good to do some profiling to make sure the GPU is doing what you think it is.
You could combine more of your code into single kernels by writing custom kernels (still pure Julia). KernelAbstractions.jl offers a pretty nice interface to it, although even CUDA.@cuda isn’t too bad.
Thanks @marius311. Replies to each point numbered below.
I should mention first that the C++ CUDA kernel launch cost is only ~10 microseconds, so I definitely think the seeming ~25,000 microseconds launch cost here can be cut down; i.e., this seems to be a software problem rather than a hardware or algorithm one. But these other tricks to combine kernels might be enough to account for it.
Best,
Miles
I should mention first that the C++ CUDA kernel launch cost is only ~10 microseconds, so I definitely think the seeming ~25,000 microseconds launch cost here
Quick note, but I think those benchmark times are in nanoseconds, so it's not 25,000 microseconds, it's 25 microseconds, on the order of the kernel launch.
Anyway, thanks for the info, that helps. Are the inputs to the ~1000 equations the same at least? As in, does this reduce to applying an array of functions to some input? If so, search these forums for “array of functions” and you’ll get several links discussing this, some even mentioning GPU (this is probably a good one to follow links from; also, this may be the identical question). This may be naive since I haven’t really read those threads, but here’s my solution messing around with it briefly:
using CUDA
fs = (sin, x->x^2, x->tan(x^2))
apply_funcs(fs, x) = ntuple(i->fs[i](x), length(fs))
apply_funcs.(Ref(fs), cu(rand(1000)))
I haven’t done a super proper profile, but I’m pretty sure that final call results in a single kernel launch. Certainly benchmarking it on a V100, it’s faster than 3 consecutive ones:
julia> x = cu(rand(Float32, 1000));
julia> @btime CUDA.@sync apply_funcs.($(Ref(fs)), $x);
22.183 μs (47 allocations: 1.55 KiB)
julia> @btime CUDA.@sync (fs[1].($x); fs[2].($x); fs[3].($x));
39.731 μs (78 allocations: 1.84 KiB)
Something else which is cool is that if you look at e.g. @code_llvm apply_funcs(fs, 2.) you’ll see that x^2 is only calculated once; I believe in this case the compiler is smart enough to eliminate the common sub-expression. I’d guess the CUDA compiler would do the same thing, although I’m not familiar enough with how to check exactly.
A limitation of this is that it stops working once your tuple of functions is longer than length 10 because of this. But maybe batching 10 functions together like this is enough to saturate the GPU, and if not, you can always write your own ntuple-like function and grow that if-statement a bit larger.
Thanks! This is very helpful.
Okay, good to know re: units. I’m not sure why I thought it was in microseconds… but yeah 25 us seems much more reasonable.
Are the inputs to the ~1000 equations the same at least?
Only at the leaves of each equation. Two equations might both use the variable x1, for instance. But equations also have constants, and these will always be slightly different due to mutations. In the branch nodes, the inputs end up being completely different unless that particular subtree is identical, which is rare. I tried using memoization for small subtrees at one point, but this didn’t help… there are just so many different subtrees possible in a typical search.
Re: apply_funcs, good idea. Will try this. To be honest, thinking more about this, I wonder how doable it would be to completely batch evaluation of many equations…
Actually, maybe this makes things easier for your point 1!
Right now my recursive evaluation essentially looks like this:
function evalTree(X, tree)
if tree.degree == 0
if tree.constant
return fill(tree.value, size(X, 2))
end
return X[tree.feature, :]
elseif tree.degree == 1
x = evalTree(X, tree.left)
op = unary_operators[tree.op]
return op.(x)
else
x = evalTree(X, tree.left)
y = evalTree(X, tree.right)
op = binary_operators[tree.op]
return op.(x, y)
end
end
(though I do some operator fusing for small subtrees, and also run things behind a function barrier, per your helpful advice in the other thread ).
I guess I am wondering if it would be efficient to: (1) pass a list of trees to evaluate, and (2) walk the binary tree for ALL trees up to the maximum depth, using an identity operator for i when tree[i]==nothing at the current depth, and converting all unary operators to binary via (x, y) -> op(x)
What do you think?
Although, maybe the fact that the tuple of operators changes every single time means that this would be re-compiling the kernel each launch… I guess one could instead pass an array of indices for the functions to call, and manage all of that inside the kernel assuming fixed operators?
|
How to speed up that simple CUDA kernel?
Hi! I have a kernel:
__global__ void filter_small(unsigned arr_size, float *arr, ResultAndPos *results, float threshold, int *n_results) {
const int itemsPerThread = 32;
int begin = blockIdx.x * blockDim.x * itemsPerThread + threadIdx.x * itemsPerThread;
int end = begin + itemsPerThread;
if (end > arr_size)
end = arr_size;
for (int index = begin; index < end; index++) {
if (arr[index] < threshold) {
int oldIdx = atomicAdd(n_results, 1);
results[oldIdx] = ResultAndPos{arr[index], index};
}
}
}
So basically it's a very simple filter that keeps the elements smaller than the threshold. But the array itself is very large, ~5-15 million floats.
And I launch it as follows:
int blockSize = 128;
int itemsPerThread = 32;
int itemsPerBlock = itemsPerThread * blockSize;
int numBlocks = (N + itemsPerBlock - 1) / itemsPerBlock;
filter_small<<<numBlocks, blockSize>>>(N, arr_ptr, results_ptr, threshold, d_n_results);
For ten million elements, it executes in around 12 ms, which is quite slow. How can I speed it up?
Thank you!
OS: linux ubuntu 18.04, cuda 11.1, nvidia 3060
|
This makes the project proportionally harder in my opinion because you need to be that much more efficient with moving data through the memory hierarchy. With tensor cores, to get anywhere close to cuBLAS, you need to start with something like the most efficient kernel in simon's article, and then do stuff like shared memory swizzling, async global memory copies, double buffering, and writing a really efficient kernel epilogue to accumulate the C matrix into the product.
I came across this article a while ago and it inspired me to take a stab at this^, and as of now I have gotten to ~80% of the cuBLAS tensor core performance where the kernel is mostly compute bound, and I am close to giving up on the last ~20%, because I think I may need to write the inner loop in SASS to make sure the instruction mix between shared memory loads, mma instructions, and synchronizations is perfectly balanced so that none of the hardware pipelines get overloaded (see link below), and I have enough compassion for myself to not spend my free time doing stuff like that :). There are also certain things implemented in CUTLASS that seem important (look up serpentine traversal) but NVIDIA engineers won't talk about the hardware details required to understand why this helps.
Article on this is forthcoming.
https://github.com/NervanaSystems/maxas/wiki/SGEMM
I’d be so happy if SASS were documented and ptxas were open source, sometimes I spend entire days going through whitepapers and various sources of online documentation to get more hardware details…
My guess is that people nowadays are gradually moving away from raw CUDA programming and moving towards things like Triton etc, and you won't be focusing on pure GEMM since you tend to do some fusion.
The Triton tutorial claims their performance is on par with cuBLAS.
https://triton-lang.org/main/getting-started/tutorials/03-ma...
Your guess is wrong. Besides the fact that there's much more to life than matmul (for which triton is just ok), the other obvious fact is that triton has exactly 1 frontend (python) and there's much more to life than that frontend.
I find that basically in every thread about low-level work there's someone making some weird comment about how triton or mojo or XYZ supplants CUDA or assembly or whatever. I can't understand how this comes about because absolutely no one working in these areas thinks XYZ is going to supplant anything. So it's invariably outsiders making these claims and I cannot fathom why any outsider would be motivated to make claims from the outside.
As an outsider, CUDA is so intimidating that the promise of Triton etc is very appealing, and I wanted to get sold.
i have PRs in Triton - i'm well familiar with the fact that triton is an MLIR project.
> C++ straight using MLIR
that's like saying llvm ir is usable through C++ ... or hell that's like saying NVPTX is usable through C++. it's not just not a frontend it's the exact opposite: it's emitting IR using IR builders.
Knowing that reaching broad devex parity is very expensive I think the real win is figuring out what specific problem you have and building community and robust software support around that.
It's the fact that AMD doesn't prioritize the reliability of its hardware and software stack. If I run llama.cpp on Vulkan I get a reasonable speedup, but if I raise the batch size to 512, the GPU is starting to make strange noises and shuts the PC down midway. Very cool. 98% of zero is still zero.
In fact cuBLAS and CUDA are kinda orthogonal in that you're either calling a pre-built cuBLAS kernel or writing your own CUDA kernel but not really combining the two.
I'd say CUDA shines more because of stability, documentation, community support + examples, and ability to use modern C++ features in GPU code.
Targeting nvidia GPUs? Or in general? For whom?
Building a performant BLAS library is hard but certainly not impossible. The tricks discussed in this post are hardly anything new either. Now, making a BLAS competitive with Nvidia's on its own GPUs is bound to be tough. But not technically unfeasible (after all, you can drop down to PTX if needed).
On average over 20 runs:
CuBLAS (./sgemm 0) has 50.9 TFLOPS.
My kernel has 61.8 TFLOPS, so it's actually +21% speedup in this benchmark.
How do I collect my paycheck?
On a 4090 gpu, average of 20 runs of SGEMM_CUDA:
size      tflops_cublas   tflops_my   diff
4096²     50.8-50.9       61.8        +21%
8192²     56.3-56.4       67.1        +19%
16384²    53.6            66.7        +24%
I guess the right thing to do now would be to hire a B2B salesman and figure out which company needs it.
I have seen how those high-performance libraries are made and I'm still in awe at the quality and quantity of the staffing involved. Those were the smartest and most knowledgeable engineers I met in my career.
Generalizing from a micro benchmark is typically hubris.
Then there are also numerics: being fast is not enough if your implementation accumulates a lot of rounding errors doing so. Floating point arithmetic can and will mess up your results in unexpected ways. -funsafe famously is neither fun nor safe.
Maybe tooling will catch up and make it easier. Think tinygrad with beamsearch, triton or halide.
|
Maharshi's blog
Learning CUDA by optimizing softmax: A worklog
04 Jan, 2025
The softmax operation is crucial. It is used extensively as a layer within deep learning models like transformers where it normalizes raw scores (logits) into a probability distribution. This property makes it particularly useful in classification tasks, where each output neuron represents the likelihood of a specific class. Optimizing softmax, especially in the context of GPU programming with CUDA, presents many opportunities for learning.
In this worklog, we will start by benchmarking PyTorch's softmax operation and then iteratively optimize our own implementation in CUDA. The NVIDIA GPU used for this worklog is one GTX 1050Ti (that's all I have got right now).
The full code is available on my GitHub: Optimizing softmax in CUDA
Let's start.
The math
Before getting into it all, let's take a moment to understand the math behind the softmax operation. Softmax for an input vector X having N elements produces an output vector O with N elements, where the ith element of the output vector is defined as:
O_i = exp(x_i) / (exp(x_1) + exp(x_2) + ... + exp(x_N))
Note that the softmax operation depends on the current element x_i and also on the sum of exponentials of all the elements of the input vector X. We will call this sum the "normalization factor" (or, norm) henceforth.
Usually, instead of a single vector we deal with a matrix of shape (M,N) consisting of M rows where each row is a vector of N elements. Softmax is then performed independently on each row, i.e. along the last dimension. The output here will be another matrix of the same shape.
Throughout this worklog, we will be working with a matrix of shape (1024,32768) i.e. 33,554,432 floating point numbers in total.
Example of the softmax output on a vector containing 5 elements:
import torch
import torch.nn.functional as F
vector = torch.randn(5, dtype=torch.float32)
print("Input vector:", vector)
# softmax along the last dimension
output = F.softmax(vector, dim=-1)
print("Output vector:", output)
Input vector: tensor([-1.3701, 0.7485, 0.1610, -2.0154, 1.0918])
Output vector: tensor([0.0382, 0.3176, 0.1765, 0.0200, 0.4477])
There is a problem though:
If the values of xi are very large (or very small), then the exponentials might cause overflow or underflow considering the precision limits of floating point numbers on a modern computer. We cannot represent and work with very large or very small numbers. This means for extreme values, the above version of softmax is NOT numerically stable.
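As a quick, standalone illustration (not part of the original post) of how narrow that range is for 32-bit floats: FLT_MAX is roughly 3.4e38, so expf already overflows to inf for inputs around 89.
#include <cstdio>
#include <cmath>
int main() {
    // expf(88.0f) ~ 1.65e38 -> still representable as a 32-bit float
    // expf(89.0f) ~ 4.49e38 -> exceeds FLT_MAX (~3.40e38) and becomes inf
    printf("expf(88.0f)   = %e\n", expf(88.0f));
    printf("expf(89.0f)   = %e\n", expf(89.0f));    // prints inf
    printf("expf(-110.0f) = %e\n", expf(-110.0f));  // underflows to 0
    return 0;
}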
But... there is a fix! We can modify the above equation in such a way that the overall operation becomes numerically stable while being correct: We subtract the maximum value x_max of the vector (a constant) from each x_i before computing the exponential. This subtraction operation "shifts" the numbers to a range that works nicely with floating point numbers. The numerically stable softmax equation becomes:
O_i = exp(x_i - x_max) / (exp(x_1 - x_max) + exp(x_2 - x_max) + ... + exp(x_N - x_max))
How this "shifted" equation results in the correct softmax output is left as an exercise to the reader :)
How fast is PyTorch?
We can get a baseline metric on how fast PyTorch is for computing the softmax operation, along the last dimension, on a randomly initialized matrix.
Following the above example, we can get a quick measure for the execution time of the softmax function:
import time
import torch
import torch.nn.functional as F
# Initialize the matrix on device
matrix = torch.randn(1024, 32768, device='cuda', dtype=torch.float32)
# Warm up
_ = torch.nn.functional.softmax(matrix, dim=-1)
# Ensure all CUDA operations are finished
torch.cuda.synchronize()
total_time = 0
n_iters = 5
for i in range(n_iters):
# Measure time
torch.cuda.synchronize() # Ensure all CUDA operations are finished
start = time.time()
_ = torch.nn.functional.softmax(matrix, dim=-1)
torch.cuda.synchronize() # Synchronize again
end = time.time()
total_time += (end - start) * 1000
print(total_time)
print(f"Softmax computation time (average): {(total_time/n_iters):.3f} ms")
Softmax computation time (average): 7.226 ms
From our quick test, PyTorch takes around 7.2 milliseconds to process and compute softmax on the entire matrix. Now, let's see how far can we go with implementing softmax in CUDA.
Kernel 1 - Naive softmax
In this kernel, we will assume that each thread in a block processes and computes one entire row of the input matrix. If the number of threads in one block is N_THREADS, then we need a total of ceil(M / N_THREADS) blocks to process the entire matrix.
The figure below shows this. Note that row = blockDim.x * blockIdx.x + threadIdx.x is the row which each thread within some block will process.
The actual computation is quite intuitive here. Softmax is calculated in three passes over the input array:
Pass 1 - Calculation of the maximum: The whole input row is first traversed from left (index = 0) to right (index = N - 1) to find the maximum value xmax.
Pass 2 - Calculation of the norm: The whole input row is traversed from left to right again, but this time the normalization factor is computed using the xmax value from the first pass, for each element.
Pass 3 - Softmax computation: The whole input row is traversed again from left to right and for each element the exponential of (x−xmax) is divided by the norm calculated in the second pass.
Below is the specific code snippet that does this:
int row = blockDim.x * blockIdx.x + threadIdx.x;
if (row < M) {
// maximum of this row
float x_max = -INFINITY;
// norm factor of this row
float norm = 0.0f;
// output in 3 passes
for (int col = 0; col < N; col++) {
int i = row * N + col;
x_max = max(x_max, input[i]);
}
for (int col = 0; col < N; col++) {
int i = row * N + col;
norm += expf(input[i] - x_max);
}
for (int col = 0; col < N; col++) {
int i = row * N + col;
output[i] = expf(input[i] - x_max) / norm;
}
}
Running this kernel results in:
>> GPU allocation time: 10.727424 ms
>> Host to device transfer time: 26.176161 ms
>> Kernel execution time: 124.102112 ms
>> Device to host transfer time: 37.320896 ms
The naive kernel takes around 124.10 milliseconds to execute. This is 17.24 times slower compared to PyTorch's 7.2 milliseconds.
Can we improve it? Of course we can.
Kernel 2 - Online softmax
Three passes to compute softmax is not at all optimal. Maybe there's a way to "fuse" the first pass (calculating the maximum) and the second pass (calculating the norm) together.
To do this, we will exploit the multiplication property of exponentials, i.e. exp(a) * exp(b) = exp(a + b). In particular, exp(x - max_old) * exp(max_old - max_new) = exp(x - max_new), so a partial sum of exponentials can be rescaled whenever the running maximum changes.
To calculate the xmax and norm in just one pass, at each step we need to multiply the "current norm" with a "correction term".
For example, consider the following input vector: V=[3,2,5,1] for which we need to compute maximum and norm. We will now iterate through this input vector to see what correction term do we need and when do we need it.
Assume that the variables max_i and norm_i represent the maximum and the norm up to and including the ith element.
Starting at i=0 (x_0 = 3):
max_0 = 3, norm_0 = exp(3 - 3) = 1
Note that after the first iteration, the values for maximum and norm are the correct values (but only up to the first index).
Next at i=1 (x_1 = 2 is not a new maximum, so the correction term exp(max_0 - max_1) is just 1):
max_1 = 3, norm_1 = norm_0 * exp(max_0 - max_1) + exp(2 - 3) = 1 + exp(-1)
We add the "previous norm" value to the "current norm" value at each iteration.
Now at i=2 (x_2 = 5 is a new maximum, so the previous norm must be corrected):
max_2 = 5, norm_2 = norm_1 * exp(max_1 - max_2) + exp(5 - 5) = (1 + exp(-1)) * exp(-2) + 1
Finally at i=3 (x_3 = 1 is not a new maximum):
max_3 = 5, norm_3 = norm_2 * exp(max_2 - max_3) + exp(1 - 5) = norm_2 + exp(-4)
After the final iteration, we remain with:
max = 5
and,
norm = exp(3 - 5) + exp(2 - 5) + exp(5 - 5) + exp(1 - 5)
which is exactly the normalization factor the three-pass version would have computed.
We just calculated both maximum and norm factor in only one pass by using a correction term and by exploiting the property of multiplying exponentials! The correction term is:
exp(max_old - max_new), applied to the running norm whenever a new maximum max_new replaces the old maximum max_old.
Now, to write this algorithm as a CUDA kernel, we simply use the naive kernel and "fuse" the first two loops into one:
int row = blockDim.x * blockIdx.x + threadIdx.x;
if (row < M) {
float x_max = -INFINITY;
float norm = 0.0f;
// pass 1
for (int col = 0; col < N; col++) {
int i = row * N + col;
float curr = input[i];
if (curr > x_max) {
// correct the global norm here
norm = norm * expf(x_max - curr);
x_max = curr;
}
norm += expf(curr - x_max);
}
// pass 2
for (int col = 0; col < N; col++) {
int i = row * N + col;
input[i] = expf(input[i] - x_max) / norm;
}
}
Running this kernel results in:
>> GPU allocation time: 10.431488 ms
>> Host to device transfer time: 25.897375 ms
>> Kernel execution time: 88.149567 ms
>> Device to host transfer time: 33.533314 ms
Using this simple trick (also called online softmax) we see that this kernel is 1.39 times (around 28.12%) faster than the naive kernel.
That's a clever improvement, but we can do more. We need to dive deeper into how we can use threads within one block to parallelize the computations even more by collaborating with each other.
Kernel 3 - Shared memory and reductions
The more you learn about GPU programming with CUDA, the more you will realize that memory is structured into hierarchies. From fastest to slowest, the hierarchy is roughly: registers, shared memory (SMEM), the L1/L2 caches, and finally global memory (DRAM/HBM).
The kernels above use only global GPU memory. Reading from and writing to global memory is expensive and time consuming, so we need to somehow reduce the access and storing time.
The idea here is to have each block (thread block) process one row of the input matrix and the threads within each block will process only a chunk of the entire row. Have a look at the figure below to understand which elements will each thread load.
Here each thread with tid = threadIdx.x loads elements spaced blockDim.x apart, so that threads with consecutive tids load consecutive elements from the input row. This helps in achieving memory coalescing, where accessing consecutive addresses in global memory is faster than accessing scattered addresses.
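To make this concrete, the loading loop follows this pattern (the same one used in the kernel shown further below):
int tid = threadIdx.x;
// In every iteration, threads with consecutive tids read consecutive
// addresses of input_row, so each warp's loads coalesce into a few
// wide memory transactions.
for (int i = tid; i < N; i += blockDim.x) {
    float x = input_row[i];
    // ... update local_max and local_norm with x ...
}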
There is a problem though: To calculate the values of maximum and norm, we need to have access to all the elements of the input row. How will we do that if different threads have access to only a chunk of the input row?
This is where reductions come into play. Bear with me on this one.
Let's assume each thread has its own private set of variables called local_max and local_norm and also suppose that there are N_THREADS threads in total. Now, the thread with tid = i will compute the local max and local norm using the elements i, i + blockDim.x, i + 2*blockDim.x and so on.
After all the threads in a block complete processing their respective chunks, we will be left with N_THREADS values for local_max and local_norm. To calculate the global maximum value, we need to "reduce" these N_THREADS local maximum values to 1 global maximum value. The figure below will help you understand this.
However, to perform this "block-level" reduction we will need to store the local maximum value in the shared memory of the block. Each thread will store its local maximum as:
smem[tid] = local_max;
__syncthreads();
Note we also add a sync barrier to ensure that each thread correctly stores its local maximum into the corresponding address in the shared memory and waits for other threads before moving on to the reduction step.
We will now use the shared memory to reduce the N_THREADS local maximum values to 1 value and then store it in the first address (smem[0]) in the shared memory. The reduction step looks like:
for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
if (tid < stride) {
smem[tid] = max(smem[tid], smem[tid + stride]);
}
// sync before next iteration
__syncthreads();
}
float global_max = smem[0];
__syncthreads();
This code block performs reduction in O(log(N)) time complexity which is faster than reducing linearly i.e. O(N) complexity. Let's see an example of this reduction with 8 threads where the shared memory will contain 8 maximum values in the start:
Initially:
smem = [3, 7, 2, 8, 6, 4, 5, 1]
First Iteration (stride = 4):
Each thread with tid < 4 compares smem[tid] with smem[tid + stride] and updates smem[tid] with the maximum.
Comparisons:
tid = 0: smem[0] = max(smem[0], smem[4]) = max(3, 6) = 6
tid = 1: smem[1] = max(smem[1], smem[5]) = max(7, 4) = 7
tid = 2: smem[2] = max(smem[2], smem[6]) = max(2, 5) = 5
tid = 3: smem[3] = max(smem[3], smem[7]) = max(8, 1) = 8
Updated smem:
smem = [6, 7, 5, 8, 6, 4, 5, 1]
Second Iteration (stride = 2):
Each thread with tid < 2 compares smem[tid] with smem[tid + stride] and updates smem[tid].
Comparisons:
tid = 0: smem[0] = max(smem[0], smem[2]) = max(6, 5) = 6
tid = 1: smem[1] = max(smem[1], smem[3]) = max(7, 8) = 8
Updated smem:
smem = [6, 8, 5, 8, 6, 4, 5, 1]
Third Iteration (stride = 1):
Each thread with tid < 1 compares smem[tid] with smem[tid + stride] and updates smem[tid].
Comparison:
tid = 0: smem[0] = max(smem[0], smem[1]) = max(6, 8) = 8
Updated smem:
smem = [8, 8, 5, 8, 6, 4, 5, 1]
Final State:
After the reduction, the maximum value is stored in smem[0], which is:
global_max = smem[0] = 8
This shows how in only 3 iterations, we performed the reduction and got access to the global maximum value from the 8 threads. We do the same reduction for local_norm as well to find the global norm value. The only difference for local norm value is that, instead of performing the max operation we perform the + operation.
Here's what the kernel looks like for the reduction of the maximum value:
__shared__ float smem[1024];
int row = blockIdx.x;
int tid = threadIdx.x;
// edge condition (we don't process further)
if (row >= M) return;
float* input_row = xd + row * N;
float* output_row = resd + row * N;
float local_max = -INFINITY;
float local_norm = 0.0f;
for (int i = tid; i < N; i += blockDim.x) {
float x = input_row[i];
if (x > local_max) {
local_norm *= expf(local_max - x);
local_max = x;
}
local_norm += expf(x - local_max);
}
__syncthreads();
smem[tid] = local_max;
__syncthreads();
for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
if (tid < stride) {
smem[tid] = max(smem[tid], smem[tid + stride]);
}
__syncthreads();
}
float global_max = smem[0];
__syncthreads();
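The reduction for local_norm (mentioned above, but not shown in the snippet) is analogous; here is a minimal sketch, assuming the same smem buffer is reused after global_max has been read. Note that each thread's local_norm was computed relative to its own local_max, so it is rescaled to global_max before being summed:
// rescale each thread's partial norm to the global maximum, then sum
smem[tid] = local_norm * expf(local_max - global_max);
__syncthreads();
for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
    if (tid < stride) {
        smem[tid] = smem[tid] + smem[tid + stride];
    }
    __syncthreads();
}
float global_norm = smem[0];
__syncthreads();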
The output from this kernel looks like:
>> GPU allocation time: 10.464928 ms
>> Host to device transfer time: 22.674080 ms
>> Kernel execution time: 6.612160 ms
>> Device to host transfer time: 41.318016 ms
Right away we see that this kernel which uses shared memory and reductions is already around 8.33% (1.09 times) faster than PyTorch's implementation.
Can we improve this even more? Let's see.
Kernel 4 - Shuffle instructions
This kernel will be largely similar to the previous one with one difference. If you notice carefully, in the reduction operations for local maximum value and local norm value we are accessing the shared memory and syncing the threads in every iteration. Even though accessing shared memory is fast, what if we could eliminate the usage of shared memory and syncing barriers while reducing the values?
Before explaining how, we need to understand the concept of warps within thread blocks:
Warps are a fundamental unit of execution within a thread block. A warp is a group of 32 threads in a thread block that execute the same instruction simultaneously (SIMD: Single Instruction, Multiple Data). All threads in a warp execute instructions in lockstep, meaning all 32 threads execute the same instruction at the same time on different data. If a thread block contains N threads, the number of warps is ceil(N / 32). Also, when threads in a warp follow different execution paths (e.g., due to conditional statements), it leads to warp divergence, reducing performance as the threads execute sequentially instead of in parallel.
In our case, if we have blockDim.x = 1024 then each block is composed of 32 warps (each warp consisting of 32 threads).
To limit the usage of shared memory, CUDA provides us with shuffle instructions which are specialized intrinsics that allow threads within a warp to directly exchange data without the overhead of shared memory. These are warp-level primitives and are highly efficient because they use registers to exchange data which is faster than using shared memory (according to the hierarchy).
Suppose in one block we have N_THREADS threads in total. That means, we have NW = ceil(N_THREADS / warp_size) warps where warp_size is usually 32 threads. Now, instead of doing a block-level reduction using shared memory what if we first perform a warp-level reduction:
From N_THREADS values, performing a warp-level reduction in every warp leaves us with NW values across the block that still need to be reduced. The first warp can then load those NW values and perform one more warp-level reduction to get the final value. Let's consider an example to make this concrete:
Suppose there are 16 threads that have already calculated their respective local maximum values. Also, assume that warp_size = 4 which means there are 4 warps in total. The values are [3, 7, 2, 9, 4, 1, 8, 5, 10, 6, 12, 11, 13, 14, 15, 16].
Step 1: Warp-level reduction
The warp size is 4, so there are 4 warps in the block (16 threads / 4 threads per warp). Each warp performs its own reduction.
Warp 0 (Threads 0 to 3: Values [3, 7, 2, 9]):
Offset = 2: each thread takes the max with the value 2 lanes above it (threads whose source lane is out of range keep their own value): [max(3,2), max(7,9), 2, 9] = [3, 9, 2, 9]
Offset = 1: [max(3,9), max(9,2), max(2,9), 9] = [9, 9, 9, 9]
Result for Warp 0: 9 (stored in Thread 0, the first thread of the warp).
Warp 1 (Threads 4 to 7: Values [4, 1, 8, 5]):
Offset = 2: [max(4,8), max(1,5), 8, 5] = [8, 5, 8, 5]
Offset = 1: [max(8,5), max(5,8), max(8,5), 5] = [8, 8, 8, 5]
Result for Warp 1: 8 (stored in Thread 4, the first thread of the warp).
Warp 2 (Threads 8 to 11: Values [10, 6, 12, 11]):
Offset = 2: [max(10,12), max(6,11), 12, 11] = [12, 11, 12, 11]
Offset = 1: [max(12,11), max(11,12), max(12,11), 11] = [12, 12, 12, 11]
Result for Warp 2: 12 (stored in Thread 8, the first thread of the warp).
Warp 3 (Threads 12 to 15: Values [13, 14, 15, 16]):
Offset = 2: [max(13,15), max(14,16), 15, 16] = [15, 16, 15, 16]
Offset = 1: [max(15,16), max(16,15), max(15,16), 16] = [16, 16, 16, 16]
Result for Warp 3: 16 (stored in Thread 12, the first thread of the warp).
Step 2 - Block-level reduction
At this point, the maximum values from each warp are stored in the first thread of each warp: [9, 8, 12, 16].
The block-level reduction begins.
Store Warp Results in Shared Memory: the first thread of each warp writes its warp's maximum into shared memory, so smem = [9, 8, 12, 16].
Synchronize Threads: a __syncthreads() barrier ensures every warp's result is visible before the final step.
Perform Final Reduction Using First Warp: the first warp (threads 0 to 3) loads these values and reduces them with the same shuffle pattern.
First Warp Reduction (smem = [9, 8, 12, 16]):
Offset = 2: [max(9,12), max(8,16), 12, 16] = [12, 16, 12, 16]
Offset = 1: [max(12,16), max(16,12), max(12,16), 16] = [16, 16, 16, 16]
Global Block Maximum: 16 (stored in smem[0]).
At this point, we have the global maximum value for the entire block using warp-level reductions.
How to actually perform these warp-level reductions though? CUDA provides us with shuffle instructions for that. We will use the __shfl_down_sync instruction to perform reduction. Here's how it works:
It is a CUDA warp-level primitive that shifts data values down within a warp: each thread receives the value held by the thread a fixed number of lanes above it, and threads whose source lane is out of range simply keep their own value. The syntax for __shfl_down_sync is:
T __shfl_down_sync(unsigned mask, T var, int delta, int width=warpSize);
Here:
mask: a bitmask of the warp's threads that participate in the shuffle (0xFFFFFFFF means all 32 threads).
var: the value each thread contributes to the exchange.
delta: the lane offset; the calling thread receives var from the thread delta lanes above it (lane + delta).
width: optional segment width (defaults to warpSize); the warp is treated as independent groups of width lanes.
Consider the following piece of code:
int val = threadIdx.x;
int shifted_val = __shfl_down_sync(0xFFFFFFFF, val, 1);
For delta = 1: thread i receives val from thread i + 1, so shifted_val is 1 on thread 0, 2 on thread 1, and so on; the last thread of the warp keeps its own value because its source lane is out of range.
The reduction code for this kernel looks like:
float val = local_max;
// warp-level reduction: after this loop, lane 0 of each warp holds the warp's max
for (int offset = warp_size / 2; offset > 0; offset /= 2) {
    val = fmaxf(val, __shfl_down_sync(0xffffffff, val, offset));
}
if (blockDim.x > warp_size) {
    if (tid % warp_size == 0) {
        // which warp are we at?
        // store the value in its first thread index
        smem[tid / warp_size] = val;
    }
    __syncthreads();
    // the first warp reduces the per-warp results
    if (tid < warp_size) {
        val = (tid < CEIL_DIV(blockDim.x, warp_size)) ? smem[tid] : -INFINITY;
        for (int offset = warp_size / 2; offset > 0; offset /= 2) {
            val = fmaxf(val, __shfl_down_sync(0xffffffff, val, offset));
        }
        if (tid == 0) smem[0] = val;
    }
} else {
    // a single warp: lane 0 already holds the block maximum
    if (tid == 0) smem[0] = val;
}
__syncthreads();
float global_max = smem[0];
__syncthreads();
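The norm reduction uses the exact same two-level pattern with fmaxf swapped for addition and 0.0f (instead of -INFINITY) as the padding value for inactive warp slots. A minimal sketch of that part, as an illustration rather than the literal kernel code:
// Shuffle-based sum reduction for the rescaled local_norm.
float nval = local_norm * expf(local_max - global_max);
for (int offset = warp_size / 2; offset > 0; offset /= 2) {
    nval += __shfl_down_sync(0xffffffff, nval, offset);
}
if (blockDim.x > warp_size) {
    if (tid % warp_size == 0) smem[tid / warp_size] = nval;
    __syncthreads();
    if (tid < warp_size) {
        nval = (tid < CEIL_DIV(blockDim.x, warp_size)) ? smem[tid] : 0.0f;
        for (int offset = warp_size / 2; offset > 0; offset /= 2) {
            nval += __shfl_down_sync(0xffffffff, nval, offset);
        }
        if (tid == 0) smem[0] = nval;
    }
} else {
    if (tid == 0) smem[0] = nval;
}
__syncthreads();
float global_norm = smem[0];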
With both reductions in place, the kernel outputs:
>> GPU allocation time: 10.542080 ms
>> Host to device transfer time: 25.580065 ms
>> Kernel execution time: 5.174400 ms
>> Device to host transfer time: 45.923008 ms
This kernel is around 1.29 times (or 22.73%) faster than the shared memory kernel! Using shuffle instructions also eliminated the need for __syncthreads() barriers in every iteration of the reduction.
Conclusion
In this worklog, we iteratively optimized the softmax operation, starting from PyTorch's implementation and then writing and tuning a custom CUDA kernel. With the above improvements, our custom softmax CUDA kernel ended up around 1.41 times (or 29.17%) faster than PyTorch on a GTX 1050 Ti.
Thank you for reading!
#CUDA
#softmax
CS4402-9635: Optimizing CUDA code
Marc Moreno Maza
University of Western Ontario, London, Ontario (Canada)
Plan
1. Optimizing Matrix Transpose with CUDA
2. Performance Optimization
3. Parallel Reduction
4. Parallel Scan
5. Exercises
6. Exercises
1. Optimizing Matrix Transpose with CUDA
Matrix Transpose Characteristics (1/2)
∎We optimize a transposition code for a matrix of floats. This operates
out-of-place:
ë input and output matrices address separate memory locations.
∎For simplicity, we consider an 𝑛 × 𝑛 matrix where 32 divides 𝑛.
∎We focus on the device code:
ë the host code performs typical tasks: data allocation and transfer
between host and device, the launching and timing of several kernels,
result validation, and the deallocation of host and device memory.
∎Benchmarks illustrate this section:
ë we compare our matrix transpose kernels against a matrix copy kernel,
ë for each kernel, we compute the effective bandwidth, calculated in
GB/s as twice the size of the matrix (once for reading the matrix and
once for writing) divided by the time of execution,
ë Each operation is run NUM_REFS times (for normalizing the
measurements),
ë This looping is performed once over the kernel and once within the
kernel,
ë The difference between these two timings is kernel launch and
synchronization overheads.
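As a quick sketch of the effective-bandwidth calculation described above (an n × n float matrix is read once and written once; this helper is an illustration, not part of the original host code):
// Effective bandwidth in GB/s: 2 * n * n * sizeof(float) bytes moved, time in ms.
double effective_bandwidth_GBps(int n, double time_ms) {
    double bytes = 2.0 * (double)n * (double)n * sizeof(float);
    return bytes / (time_ms * 1.0e6);   // bytes / (ms * 1e6) == GB/s
}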
Matrix Transpose Characteristics (2/2)
∎We present hereafter different kernels called from the host code, each
addressing different performance issues.
∎All kernels in this study launch thread blocks of dimension 32x8,
where each block transposes (or copies) a tile of dimension 32x32.
∎As such, the parameters TILE_DIM and BLOCK_ROWS are set to 32
and 8, respectively.
∎Using a thread block with fewer threads than elements in a tile is
advantageous for the matrix transpose:
ë each thread transposes several matrix elements, four in our case, and
much of the cost of calculating the indices is amortized over these
elements.
∎This study is based on a technical report by Greg Ruetsch (NVIDIA)
and Paulius Micikevicius (NVIDIA).
A simple copy kernel (1/2)
__global__ void copy(float *odata, float* idata, int width,
int height, int nreps)
{
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index = xIndex + width*yIndex;
for (int r=0; r < nreps; r++) { // normalization outer loop
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index+i*width] = idata[index+i*width];
}
}
}
A simple copy kernel (2/2)
∎odata and idata are pointers to the input and output matrices,
∎width and height are the matrix x and y dimensions,
∎nreps determines how many times the loop over data movement
between matrices is performed.
∎In this kernel, xIndex and yIndex are global 2D matrix indices,
∎used to calculate index, the 1D index used to access matrix elements.
// removing normalization
__global__ void copy(float *odata, float* idata, int width,
int height, int nreps)
{
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index = xIndex + width*yIndex;
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS)
odata[index+i*width] = idata[index+i*width];
}
A naive transpose kernel
__global__ void transposeNaive(float *odata, float* idata,
int width, int height, int nreps)
{
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index_in = xIndex + width * yIndex;
int index_out = yIndex + height * xIndex;
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index_out+i] = idata[index_in+i*width];
}
}
Naive transpose kernel vs copy kernel
The performance of these two kernels was measured on a 2048x2048 matrix using a GTX 280 (the table of results is not reproduced here). The minor differences in code between the copy and naive transpose kernels have a profound effect on performance.
Coalesced Transpose (1/11)
∎Because device memory has a much higher latency and lower bandwidth than on-chip memory, special attention must be paid to how global memory accesses are performed.
∎The simultaneous global memory accesses by each thread of a
half-warp (16 threads on G80) during the execution of a single read or
write instruction will be coalesced into a single access if:
1 The size of the memory element accessed by each thread is either 4, 8,
or 16 bytes.
2 The address of the first element is aligned to 16 times the element’s
size.
3 The elements form a contiguous block of memory.
4 The 𝑖-th element is accessed by the 𝑖-th thread in the half-warp.
∎Coalescing happens even if some threads do not access memory
(divergent warp)
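As a minimal CUDA illustration (not from the original slides) of access patterns that do and do not meet these conditions, assuming in is large enough for the strided reads:
// Illustration only: consecutive threads reading consecutive words coalesce;
// a strided read scatters the half-warp across many memory segments.
__global__ void access_patterns(const float *in, float *out, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float coalesced   = in[i];            // thread k reads word base+k: coalesced
    float uncoalesced = in[i * stride];   // stride > 1: many separate transactions
    out[i] = coalesced + uncoalesced;
}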
Coalesced Transpose (2/11)–(4/11): figures (not reproduced).
Coalesced Transpose (5/11)
∎Allocating device memory through cudaMalloc() and choosing
TILE_DIM to be a multiple of 16 ensures alignment with a
segment of memory, therefore all loads from idata are coalesced.
∎Coalescing behavior differs between the simple copy and naive
transpose kernels when writing to odata.
∎In the case of the naive transpose, for each iteration of the i-loop a
half warp writes one half of a column of floats to different segments
of memory:
ë resulting in 16 separate memory transactions,
ë regardless of the compute capability.
Coalesced Transpose (6/11)
∎The way to avoid uncoalesced global memory access is
1 to read the data into shared memory and,
2 have each half warp access noncontiguous locations in shared memory
in order to write contiguous data to odata.
∎There is no performance penalty for noncontiguous access patterns in
shared memory as there is in global memory.
∎a __syncthreads() call is required to ensure that all reads from
idata to shared memory have completed before writes from shared
memory to odata commence.
Coalesced Transpose (7/11)
__global__ void transposeCoalesced(float *odata,
float *idata, int width, int height) // no nreps param
{
__shared__ float tile[TILE_DIM][TILE_DIM];
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index_in = xIndex + (yIndex)*width;
xIndex = blockIdx.y * TILE_DIM + threadIdx.x;
yIndex = blockIdx.x * TILE_DIM + threadIdx.y;
int index_out = xIndex + (yIndex)*height;
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
tile[threadIdx.y+i][threadIdx.x] =
idata[index_in+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index_out+i*height] =
tile[threadIdx.x][threadIdx.y+i];
} }
Coalesced Transpose (8/11)
1 The half warp writes four half rows of the idata matrix tile to the
shared memory 32x32 array tile indicated by the yellow line
segments.
2 After a __syncthreads() call to ensure all writes to tile are
completed,
3 the half warp writes four half columns of tile to four half rows of an
odata matrix tile, indicated by the green line segments.
Coalesced Transpose (9/11)
While there is a dramatic increase in effective bandwidth of the coalesced
transpose over the naive transpose, there still remains a large performance
gap between the coalesced transpose and the copy:
∎One possible cause of this performance gap could be the
synchronization barrier required in the coalesced transpose.
∎This can be easily assessed using the following copy kernel which
utilizes shared memory and contains a __syncthreads() call.
Coalesced Transpose (10/11)
__global__ void copySharedMem(float *odata, float *idata,
int width, int height) // no nreps param
{
__shared__ float tile[TILE_DIM][TILE_DIM];
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index = xIndex + width*yIndex;
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
tile[threadIdx.y+i][threadIdx.x] =
idata[index+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index+i*width] =
tile[threadIdx.y+i][threadIdx.x];
} }
Coalesced Transpose (11/11)
The shared memory copy results seem to suggest that the use of shared
memory with a synchronization barrier has little effect on the performance,
certainly as far as the Loop in kernel column indicates when comparing
the simple copy and shared memory copy.
Shared memory bank conflicts (1/6)
1 Shared memory is divided into 16 equally-sized memory modules,
called banks, which are organized such that successive 32-bit words
are assigned to successive banks.
2 These banks can be accessed simultaneously, and to achieve maximum
bandwidth to and from shared memory the threads in a half warp
should access shared memory associated with different banks.
3 The exception to this rule is when all threads in a half warp read
the same shared memory address, which results in a broadcast where
the data at that address is sent to all threads of the half warp in one
transaction.
4 One can use the warp_serialize flag when profiling CUDA
applications to determine whether shared memory bank conflicts
occur in any kernel.
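As a small illustration (not from the original slides) of a conflict-free versus a 16-way-conflicting shared memory access under these rules, assuming 16 banks of consecutive 32-bit words and a block of 256 threads:
__global__ void bank_demo(float *out) {
    __shared__ float s[256];
    s[threadIdx.x] = (float)threadIdx.x;
    __syncthreads();
    float conflict_free = s[threadIdx.x];              // thread k of a half warp -> bank k
    float conflicting   = s[(threadIdx.x * 16) % 256]; // all threads of a half warp -> bank 0
    out[threadIdx.x] = conflict_free + conflicting;
}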
Shared memory bank conflicts (2/6)–(3/6): figures (not reproduced).
Shared memory bank conflicts (4/6)
1 The coalesced transpose uses a 32 × 32 shared memory array of floats.
2 For this sized array, all data in columns k and k+16 are mapped to
the same bank.
3 As a result, when writing partial columns from tile in shared
memory to rows in odata the half warp experiences a 16-way bank
conflict and serializes the request.
4 A simple way to avoid this conflict is to pad the shared memory array
by one column:
__shared__ float tile[TILE_DIM][TILE_DIM+1];
Shared memory bank conflicts (5/6)
∎The padding does not affect shared memory bank access pattern when
writing a half warp to shared memory, which remains conflict free,
∎but by adding a single column now the access of a half warp of data
in a column is also conflict free.
∎The performance of the kernel, now coalesced and memory bank
conflict free, is added to our table on the next slide.
Shared memory bank conflicts (6/6)
∎While padding the shared memory array did eliminate shared memory
bank conflicts, as was confirmed by checking the warp_serialize
flag with the CUDA profiler, it has little effect (when implemented at
this stage) on performance.
∎As a result, there is still a large performance gap between the
coalesced and shared memory bank conflict free transpose and the
shared memory copy.
Decomposing Transpose (1/6)
∎To investigate further, we revisit the data flow for the transpose and
compare it to that of the copy.
∎There are essentially two differences between the copy code and the
transpose:
ë transposing the data within a tile, and
ë writing data to transposed tile.
∎We can isolate the performance between each of these two
components by implementing two kernels that individually perform
just one of these components:
fine-grained transpose: this kernel transposes the data within a tile,
but writes the tile back to its original (untransposed) location in odata.
coarse-grained transpose: this kernel writes the tile to the
transposed location in the odata matrix, but does not
transpose the data within the tile.
Decomposing Transpose (2/6)–(3/6): figures (not reproduced).
Decomposing Transpose (4/6)
__global__ void transposeFineGrained(float *odata,
float *idata, int width, int height)
{
__shared__ float block[TILE_DIM][TILE_DIM+1];
int xIndex = blockIdx.x * TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y * TILE_DIM + threadIdx.y;
int index = xIndex + (yIndex)*width;
for (int i=0; i < TILE_DIM; i += BLOCK_ROWS) {
block[threadIdx.y+i][threadIdx.x] =
idata[index+i*width];
}
__syncthreads();
for (int i=0; i < TILE_DIM; i += BLOCK_ROWS) {
odata[index+i*height] =
block[threadIdx.x][threadIdx.y+i];
}
}
Decomposing Transpose (5/6)
__global__ void transposeCoarseGrained(float *odata,
float *idata, int width, int height)
{
__shared__ float block[TILE_DIM][TILE_DIM+1];
int xIndex = blockIdx.x * TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y * TILE_DIM + threadIdx.y;
int index_in = xIndex + (yIndex)*width;
xIndex = blockIdx.y * TILE_DIM + threadIdx.x;
yIndex = blockIdx.x * TILE_DIM + threadIdx.y;
int index_out = xIndex + (yIndex)*height;
for (int i=0; i<TILE_DIM; i += BLOCK_ROWS) {
block[threadIdx.y+i][threadIdx.x] =
idata[index_in+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i += BLOCK_ROWS) {
odata[index_out+i*height] =
block[threadIdx.y+i][threadIdx.x];
}
}
Decomposing Transpose (6/6)
∎The fine-grained transpose has performance similar to the shared
memory copy, whereas the coarse-grained transpose has roughly the
performance of the coalesced transpose.
∎Thus the performance bottleneck lies in writing data to the
transposed location in global memory.
Partition Camping (1/4)
∎Just as shared memory performance can be degraded via bank
conflicts, an analogous performance degradation can occur with
global memory access through partition camping.
∎Global memory is divided into either 6 partitions (on 8- and 9-series
GPUs) or 8 partitions (on 200- and 10-series GPUs) of 256-byte width.
∎To use global memory effectively, concurrent accesses to global
memory by all active warps should be divided evenly amongst
partitions.
∎partition camping occurs when:
ë global memory accesses are directed through a subset of partitions,
ë causing requests to queue up at some partitions while other partitions
go unused.
Partition Camping (2/4)
∎Since partition camping concerns how active thread blocks behave,
the issue of how thread blocks are scheduled on multiprocessors is
important.
∎When a kernel is launched, the order in which blocks are assigned to
multiprocessors is determined by the one-dimensional block ID defined
as:
bid = blockIdx.x + gridDim.x*blockIdx.y;
which is a row-major ordering of the blocks in the grid.
∎Once maximum occupancy is reached, additional blocks are assigned
to multiprocessors as needed.
∎How quickly and the order in which blocks complete cannot be
determined.
∎So active blocks are initially contiguous but become less contiguous
as execution of the kernel progresses.
Partition Camping (3/4)
∎With 8 partitions of 256-byte width, all data in strides of 2048 bytes
(or 512 floats) map to the same partition.
∎Any float matrix with 512 × 𝑘 columns, such as our 2048x2048 matrix, will contain columns whose elements map to a single partition.
∎With tiles of 32 × 32 floats whose one-dimensional block IDs are shown in the figures, the mapping of idata and odata onto the partitions is described below.
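The figures themselves are not reproduced here. As a rough arithmetic sketch of why a whole column can land in one partition (assuming, for illustration only, a simple round-robin assignment of 256-byte segments to the 8 partitions):
// Illustration only: a simplified address-to-partition mapping.
// Two floats whose indices differ by 512 are 512 * 4 = 2048 bytes apart,
// i.e. exactly 8 segments of 256 bytes, so they fall into the same partition.
int partition_of(long byte_offset) {
    return (int)((byte_offset / 256) % 8);
}
// partition_of(i * sizeof(float)) == partition_of((i + 512) * sizeof(float))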
Partition Camping (4/4)
∎Concurrent blocks will be accessing tiles row-wise in idata, which will be roughly equally distributed amongst partitions
∎However these blocks will access tiles column-wise in odata which
will typically access global memory through just a few partitions.
∎Just as with shared memory, padding would be an option (potentially
expensive) but there is a better one . . .
Diagonal block reordering (1/7): figure (not reproduced).
Diagonal block reordering (2/7)
∎The key idea is to view the grid under a diagonal coordinate
system.
∎If blockIdx.x and blockIdx.y represent the diagonal coordinates,
then (for block-square matrices) the corresponding cartesian
coordinates are given by the following mapping:
blockIdx_y = blockIdx.x;
blockIdx_x = (blockIdx.x+blockIdx.y)%gridDim.x;
∎One would simply include the previous two lines of code at the
beginning of the kernel, and write the kernel assuming the cartesian
interpretation of blockIdx fields, except using blockIdx_x and
blockIdx_y in place of blockIdx.x and blockIdx.y, respectively,
throughout the kernel.
∎This is precisely what is done in the transposeDiagonal kernel
hereafter.
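Before looking at the kernel, a quick host-side check (illustration only) of how the square-grid mapping above reorders blocks, evaluated for a hypothetical 4x4 grid of blocks:
// Evaluate the diagonal -> cartesian block mapping by hand for a 4x4 grid.
#include <stdio.h>

int main(void) {
    const int gridDimX = 4, gridDimY = 4;
    for (int by = 0; by < gridDimY; by++) {
        for (int bx = 0; bx < gridDimX; bx++) {
            int blockIdx_y = bx;                     // cartesian row
            int blockIdx_x = (bx + by) % gridDimX;   // cartesian column
            printf("diagonal (%d,%d) -> cartesian (%d,%d)\n",
                   bx, by, blockIdx_x, blockIdx_y);
        }
    }
    return 0;
}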
Diagonal block reordering (3/7)
__global__ void transposeDiagonal(float *odata,
float *idata, int width, int height)
{
__shared__ float tile[TILE_DIM][TILE_DIM+1];
int blockIdx_x, blockIdx_y;
// diagonal reordering
if (width == height) {
blockIdx_y = blockIdx.x;
blockIdx_x = (blockIdx.x+blockIdx.y)%gridDim.x;
} else {
int bid = blockIdx.x + gridDim.x*blockIdx.y;
blockIdx_y = bid%gridDim.y;
blockIdx_x = ((bid/gridDim.y)+blockIdx_y)%gridDim.x;
}
Diagonal block reordering (4/7)
int xIndex = blockIdx_x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx_y*TILE_DIM + threadIdx.y;
int index_in = xIndex + (yIndex)*width;
xIndex = blockIdx_y*TILE_DIM + threadIdx.x;
yIndex = blockIdx_x*TILE_DIM + threadIdx.y;
int index_out = xIndex + (yIndex)*height;
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
tile[threadIdx.y+i][threadIdx.x] =
idata[index_in+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index_out+i*height] =
tile[threadIdx.x][threadIdx.y+i];
}
}
Diagonal block reordering (5/7)–(6/7): figures (not reproduced).
Diagonal block reordering (7/7)
∎The bandwidth measured when looping within the kernel over the
read and writes to global memory is within a few percent of the
shared memory copy.
∎When looping over the kernel, the performance degrades slightly,
likely due to additional computation involved in calculating
blockIdx_x and blockIdx_y. However, even with this performance
degradation the diagonal transpose has over four times the bandwidth
of the other complete transposes.
2. Performance Optimization
Four principles
∎Expose as much parallelism as possible
∎Optimize memory usage for maximum bandwidth
∎Maximize occupancy to hide latency
∎Optimize instruction usage for maximum throughput
Expose Parallelism
∎Structure algorithm to maximize independent parallelism
∎If threads of same block need to communicate, use shared memory
and __syncthreads()
∎If threads of different blocks need to communicate, use global
memory and split computation into multiple kernels
∎Recall that there is no synchronization mechanism between blocks
∎High parallelism is especially important to hide memory latency by
overlapping memory accesses with computation
∎Take advantage of asynchronous kernel launches by overlapping CPU
computations with kernel execution.
CS4402-9635: Optimizing CUDA code
UWO-CS4402-CS9635 46 / 114
Optimize Memory Usage: Basic Strategies
- Processing data is cheaper than moving it around:
  - Especially for GPUs, as they devote many more transistors to ALUs than to memory
- Basic strategies:
  - Maximize use of low-latency, high-bandwidth memory
  - Optimize memory access patterns to maximize bandwidth
  - Leverage parallelism to hide memory latency by overlapping memory accesses with computation as much as possible
  - Write kernels with high arithmetic intensity (ratio of arithmetic operations to memory transactions)
  - Sometimes recompute data rather than cache it
Minimize CPU <-> GPU Data Transfers
- CPU <-> GPU memory bandwidth is much lower than GPU memory bandwidth
- Minimize CPU <-> GPU data transfers by moving more code from the CPU to the GPU
  - Even if that sometimes means running kernels with low-parallelism computations
  - Intermediate data structures can be allocated, operated on, and deallocated without ever copying them to CPU memory
- Group data transfers: one large transfer is much better than many small ones
Optimize Memory Access Patterns
- Effective bandwidth can vary by an order of magnitude depending on the access pattern:
  - Global memory is not cached on G8x
  - Global memory accesses have high latency: 400-600 clock cycles
  - Shared memory has low latency: a few clock cycles
- Optimize access patterns to get:
  - coalesced global memory accesses,
  - shared memory accesses with no or few bank conflicts, and
  - no partition camping.
A Common Programming Strategy
1. Partition data into subsets that fit into shared memory
2. Handle each data subset with one thread block
3. Load the subset from global memory to shared memory, using multiple threads to exploit memory-level parallelism
4. Perform the computation on the subset from shared memory
5. Copy the result from shared memory back to global memory
(The original slides illustrate each of these five steps with a figure; a minimal code sketch follows below.)
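As a minimal illustration of the five steps (not taken from the slides), the sketch below applies them to a trivial element-wise operation; the TILE constant and the kernel name are assumptions.

#define TILE 256

__global__ void scaleTiled(float *out, const float *in, float alpha, int n)
{
    __shared__ float tile[TILE];                    // steps 1/2: one tile per thread block
    int gid = blockIdx.x * TILE + threadIdx.x;
    if (gid < n)
        tile[threadIdx.x] = in[gid];                // step 3: global -> shared, one element per thread
    __syncthreads();
    if (gid < n) {
        tile[threadIdx.x] *= alpha;                 // step 4: compute on the subset in shared memory
        out[gid] = tile[threadIdx.x];               // step 5: shared -> global
    }
}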
A Common Programming Strategy
- Carefully partition data according to access patterns
- If read-only, use __constant__ memory (fast)
- For read/write access within a tile, use __shared__ memory (fast)
- For read/write scalar access within a thread, use registers (fast)
- For read/write inputs and results that are cudaMalloc'ed, use global memory (slow)
Parallel reduction: presentation
- Common and important data-parallel primitive.
- Easy to implement in CUDA, but hard to get right.
- Serves as a great optimization example.
- This section is based on slides and technical reports by Mark Harris (NVIDIA).
Parallel reduction: challenges
- One needs to be able to use multiple thread blocks:
  - to process very large arrays,
  - to keep all multiprocessors on the GPU busy,
  - to have each thread block reduce a portion of the array.
- But how do we communicate partial results between thread blocks?
Parallel reduction: CUDA implementation strategy
- We decompose the computation into multiple kernel invocations.
- The kernel launch itself serves as the global synchronization point between levels: partial results written by one launch are visible to the next.
- For this problem of parallel reduction, all kernels are in fact the same code.
Parallel reduction: what is our goal?
- We should use the right metric between:
  - GFLOP/s: for compute-bound kernels
  - Bandwidth: for memory-bound kernels
- Reductions have very low arithmetic intensity:
  - 1 flop per element loaded (bandwidth-optimal)
- Therefore we should strive for peak bandwidth
- We will use the G80 GPU (following Mark Harris' tech report) for this example:
  - 384-bit memory interface, 1800 MHz
  - 384 x 1800 / 8 = 86.4 GB/s
Parallel reduction: interleaved addressing (1/2)
__global__ void reduce0(int *g_idata, int *g_odata) {
    extern __shared__ int sdata[];
    // each thread loads one element from global to shared mem
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    sdata[tid] = g_idata[i];
    __syncthreads();
    // do reduction in shared mem
    for (unsigned int s = 1; s < blockDim.x; s *= 2) {
        if (tid % (2*s) == 0) {
            sdata[tid] += sdata[tid + s];
        }
        __syncthreads();
    }
    // write result for this block to global mem
    if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}
Parallel reduction: interleaved addressing (2/2) (figure)
Parallel reduction: branch divergence in interleaved addressing (1/2)
- The main performance concern with branching is divergence.
  - Branch divergence occurs when threads in the same warp take different paths upon a conditional branch.
  - Penalty: the different execution paths are likely to be serialized (at compile time).
- One should be careful when the branch condition is a function of the thread ID.
  - Below, the branch granularity is less than the warp size:
    if (threadIdx.x > 2) { }
  - Below, the branch granularity is a whole multiple of the warp size:
    if (threadIdx.x / WARP_SIZE > 2) { }

Parallel reduction: branch divergence in interleaved addressing (2/2) (figure)
Parallel reduction: non-divergent interleaved addressing (figure)
Parallel reduction: shared memory bank conflicts (figure)
Parallel reduction: sequential addressing (1/2) (figure)
Parallel reduction: sequential addressing (2/2) (figure)
Parallel reduction: performance for 4M element reduction (figure)
Parallel reduction: idle threads (1/2) (figure)
Parallel reduction: idle threads (2/2) (figure)
Parallel reduction: instruction bottlenecks (1/2) (figure)
Parallel reduction: instruction bottlenecks (2/2)
- At 17 GB/s, we're far from bandwidth bound:
  - And we know reduction has low arithmetic intensity.
- Therefore a likely bottleneck is instruction overhead:
  - auxiliary instructions that are not loads, stores, or arithmetic for the core computation,
  - in other words: address arithmetic and loop overhead.
- Strategy: unroll loops.
Parallel reduction: unrolling the last warp (1/3)
- As the reduction proceeds, the number of active threads decreases;
  - when s <= 32, we have only one warp left.
- Instructions are SIMD-synchronous within a warp.
- That implies that when s <= 32:
  - we do not need to use __syncthreads(),
  - we do not need to perform the test if (tid < s), because it doesn't save any work.
- Let's unroll the last 6 iterations of the inner loop!
Parallel reduction: unrolling the last warp (2/3) (figure)
Parallel reduction: unrolling the last warp (3/3) (figure)
Parallel reduction: complete unrolling (1/2) (figure)
Parallel reduction: complete unrolling (2/2) (figure)
Parallel reduction: coarsening the base case (1/6)
- The work and span of the whole reduction process are Θ(n) and Θ(log(n)), respectively.
- If we allocate Θ(n) threads (for each kernel call) we necessarily do Θ(n log(n)) work in total, that is, a significant overhead factor.
- Therefore, we need to allocate Θ(n/log(n)) threads, with each thread doing Θ(log(n)) work.
- On G80, the best performance is obtained with 64-256 blocks of 128 threads, with 1024-4096 elements per thread.
Parallel reduction: coarsening the base case (2/6) (figure)
Parallel reduction: coarsening the base case (3/6) (figure)
Parallel reduction: coarsening the base case (4/6) (figure)
Parallel reduction: coarsening the base case (5/6) (figure)
Parallel reduction: coarsening the base case (6/6) (figure)
Parallel scan: presentation
- Another common and important data-parallel primitive.
- This problem seems inherently sequential, but there is an efficient parallel algorithm.
- Applications: sorting, lexical analysis, string comparison, polynomial evaluation, stream compaction, building histograms and data structures (graphs, trees, etc.) in parallel.
Parallel scan: definitions
- Let S be a set, let + : S x S -> S be an associative operation on S with 0 as identity. Let A[0..n-1] be an array of n elements of S.
- The all-prefixes-sum or inclusive scan of A computes the array B of n elements of S defined by
      B[i] = A[0]             if i = 0
      B[i] = B[i-1] + A[i]    if 0 < i < n
- The exclusive scan of A computes the array C of n elements of S defined by
      C[i] = 0                if i = 0
      C[i] = C[i-1] + A[i-1]  if 0 < i < n
- For example, with A = [3, 1, 7, 0, 4], the inclusive scan is [3, 4, 11, 11, 15] and the exclusive scan is [0, 3, 4, 11, 11].
- An exclusive scan can be generated from an inclusive scan by shifting the resulting array right by one element and inserting the identity.
- Similarly, an inclusive scan can be generated from an exclusive scan.
- We shall focus on the exclusive scan.
Parallel scan: sequential algorithm
void scan(float* output, float* input, int length)
{
    output[0] = 0; // since this is a prescan, not a scan
    for (int j = 1; j < length; ++j)
    {
        output[j] = input[j-1] + output[j-1];
    }
}
Parallel scan: naive parallel algorithm (1/4)
- This algorithm is not work-efficient since its work is O(n log2(n)). We will fix this issue later.
- In addition, it is not suitable for a CUDA implementation either. Indeed, it works in place, which is not feasible for a sufficiently large array requiring several thread blocks.
Parallel scan: naive parallel algorithm (2/4)
In order to realize a CUDA implementation potentially using many thread blocks, one needs to use a double buffer.
Parallel scan: naive parallel algorithm (3/4)
- Computing a scan of an array of 8 elements using the naive scan algorithm.
- The CUDA version (next slide) can handle arrays only as large as can be processed by a single thread block running on one GPU multiprocessor.

Parallel scan: naive parallel algorithm (4/4) (figure)
Parallel scan: work-efficient parallel algorithm (1/6) (figure)
Parallel scan: work-efficient parallel algorithm (2/6) (figure)
Parallel scan: work-efficient parallel algorithm (3/6)
x[n-1] := 0;
for d := log(n) downto 1 do
    for k from 0 to n-1 by 2^d in parallel do {
        t := x[k + 2^(d-1) - 1];
        x[k + 2^(d-1) - 1] := x[k + 2^d - 1];
        x[k + 2^d - 1] := t + x[k + 2^d - 1];
    }
Parallel scan: work-efficient parallel algorithm (4/6) (figure)
Parallel scan: work-efficient parallel algorithm (5/6) (figure)
Parallel scan: work-efficient parallel algorithm (6/6) (figure)
Parallel scan: performance
- See above the performance of the work-efficient, bank-conflict-free scan implemented in CUDA compared to a sequential scan implemented in C++.
- The CUDA scan was executed on an NVIDIA GeForce 8800 GTX GPU, the sequential scan on a single core of an Intel Core Duo Extreme 2.93 GHz.
Exercise 1 (1/4)
(1) Write a C function incrementing a float array A of size N
(2) Write a CUDA kernel incrementing a float array A of size N for a 1D
grid, using 1D thread blocks, and assuming that each thread
increments one element.
(3) Assuming that each thread block counts 64 threads, write the host
code launching the kernel (including memory allocation on the device
and host-device data transfers)
Exercise 1 (2/4)
(1) Write a C function incrementing a float array A of size N
void increment_Array_On_Host(float* A, int N)
{
    int i;
    for (i = 0; i < N; i++)
        A[i] = A[i] + 1.f;
}
Exercise 1 (3/4)
(2) Write a CUDA kernel incrementing a float array A of size N for a 1D grid, using 1D thread blocks, and assuming that each thread increments one element.
__global__ void increment_On_Device(float *A, int N)
{
    int idx = blockIdx.x*blockDim.x + threadIdx.x;
    if (idx < N)
        A[idx] = A[idx] + 1.0f;
}
Exercise 1 (4/4)
(3) Assuming that each thread block counts 64 threads, write the host code launching the kernel (including memory allocation on the device and host-device data transfers)
float *A_h;
float *A_d;
cudaMalloc((void **) &A_d, sizeof(float)*N);
// Allocate memory on the host for A and initialize A
..................................................
cudaMemcpy(A_d, A_h, sizeof(float)*N, cudaMemcpyHostToDevice);
int bSize = 64;
int nBlocks = N/bSize + (N%bSize == 0 ? 0 : 1);
increment_On_Device<<< nBlocks, bSize >>>(A_d, N);
cudaMemcpy(A_h, A_d, sizeof(float)*N, cudaMemcpyDeviceToHost);
free(A_h);
cudaFree(A_d);
Exercise 2 (1/4)
We recall below the Sieve of Eratosthenes
def eratosthenes_sieve(n):
    # Create a candidate list within which non-primes will be
    # marked as None; only candidates below sqrt(n) need be checked
    candidates = range(n+1)
    fin = int(n**0.5)
    # Loop over the candidates, marking out each multiple.
    for i in xrange(2, fin+1):
        if not candidates[i]:
            continue
        candidates[2*i::i] = [None] * (n//i - 1)
    # Filter out non-primes and return the list.
    return [i for i in candidates[2:] if i]
Write a CUDA kernel implementing the Sieve of Eratosthenes on an input n:
(1) Start with a naive single thread-block kernel not using shared memory;
(2) Then, use shared memory and multiple thread blocks.
Exercise 2 (2/4)
(1) A naive kernel not using shared memory.
__global__ static void Sieve(int *sieve, int sieve_size)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx > 1) {
        for (int i = idx + idx; i < sieve_size; i += idx)
            sieve[i] = 1;
    }
}
The launching code could be:
cudaMalloc((void**) &device_sieve, sizeof(int) * sieve_size);
Sieve<<<1, sqrt(sieve_size), 0>>>(device_sieve, sieve_size);
But this would be quite inefficient. Why?
Exercise 2 (3/4)
(1) A kernel using shared memory.
__global__ static void Sieve(int *sieve, int sieve_size)
{
    int b_x = blockIdx.x;
    int b_w = blockDim.x;
    int t_x = threadIdx.x;
    int offset = b_x * b_w;
    int ix = offset + tid;
    int t_y = threadIdx.y;
    // copy the segment (tile) to shared memory
    _shared__ int A[b_w];
    A[tid] = sieve[ix];
    __syncthreads();
    knocker = tid;
    // tid knocks down numbers that are multiple
    // of knocker in the range [offset, offset + b_w)
}
This code is almost correct . . . Let's fix it!
Exercise 2 (4/4)
(1) A kernel using shared memory.
    knocker = t_y;
    // tid knocks down numbers that are multiple
    // of knocker in the range [offset, offset + b_w)
    int start = (offset % knocker == 0)
        ? offset : (offset / knocker + 1) * knocker;
    for (int jx = start; jx < offset + b_w; jx += knocker)
        A[jx - offset] = 1;
    __syncthreads();
    sieve[ix] = A[tid];
}
This code is almost correct . . . Let's fix it!
Exercise 3 (1/4)
Write a CUDA kernel (and the launching code) implementing the reversal of an input array of n integers. This reversing process will be out-of-place. As in the previous exercise:
(1) start with a naive kernel not using shared memory,
(2) then develop a kernel using shared memory.
Exercise 3 (2/4)
__global__ void reverseArrayBlock(int *d_out, int *d_in)
{
    int inOffset  = blockDim.x * blockIdx.x;
    int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
    int in  = inOffset + threadIdx.x;
    int out = outOffset + (blockDim.x - 1 - threadIdx.x);
    d_out[out] = d_in[in];
}
int numThreadsPerBlock = 256;
int numBlocks = dimA / numThreadsPerBlock;
dim3 dimGrid(numBlocks);
dim3 dimBlock(numThreadsPerBlock);
reverseArrayBlock<<< dimGrid, dimBlock >>>( d_b, d_a );
Exercise 3 (3/4)
__global__ void reverseArrayBlock(int *d_out, int *d_in)
{
    extern __shared__ int s_data[];
    int inOffset = blockDim.x * blockIdx.x;
    int in = inOffset + threadIdx.x;
    // Load one element per thread from device memory and store it
    // *in reversed order* into temporary shared memory
    s_data[blockDim.x - 1 - threadIdx.x] = d_in[in];
    // Block until all threads in the block have
    // written their data to shared mem
    __syncthreads();
    // write the data from shared memory in forward order,
    // but to the reversed block offset as before
    int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
    int out = outOffset + threadIdx.x;
    d_out[out] = s_data[threadIdx.x];
}
Exercise 3 (4/4)
int numThreadsPerBlock = 256;
int numBlocks = dimA / numThreadsPerBlock;
int sharedMemSize = numThreadsPerBlock * sizeof(int);
// launch kernel
dim3 dimGrid(numBlocks);
dim3 dimBlock(numThreadsPerBlock);
reverseArrayBlock<<< dimGrid, dimBlock, sharedMemSize >>>( d_b, d_a );
AMS 148 Chapter 8: Optimization in CUDA, and Advanced Topics
Steven Reeves
1 Optimizing Data Transfers in CUDA C/C++
In this section we will discuss code optimization and how to efficiently transfer data between the host and the device. The peak bandwidth between device memory and the GPU is much higher (720 GB/s for the Tesla P100 found in Hummingbird) than the peak bandwidth between host memory and device memory (8 GB/s on a PCIe x16 Generation 2 link). This disparity means that if your implementation requires many data transfers from GPU to host or vice versa, it will greatly hinder your performance. Let us begin with a few general guidelines for host-device data transfers. We wish to:
• Minimize the amount of data transferred between host and device when possible, even if that means running kernels on the GPU that get little or no speed-up compared to running them on the host CPU.
• Use page-locked (or "pinned") memory, since higher bandwidth is possible between the host and the device with it.
• Batch many small transfers into one larger transfer, as this eliminates most of the per-transfer overhead.
• Overlap data transfers between the host and device with kernel execution and other data transfers where possible.
First let us talk about how to measure time spent in data transfers without modifying source code.
1.1 Measuring Data Transfer Times with nvprof
To measure the time spent during each data transfer, we could record a CUDA event before and after each transfer and use cudaEventElapsedTime() as we have in the past. However, we can retrieve the elapsed transfer time without deploying CUDA events by using nvprof, a command-line CUDA profiler included with the CUDA Toolkit. Let's try the following code example:
Listing 1: Example Code for Profiling
int main()
{
    const unsigned int N = 1048576;
    const unsigned int bytes = N * sizeof(int);
    int *h_a = (int*) malloc(bytes);
    int *d_a;
    cudaMalloc((int**)&d_a, bytes);
    memset(h_a, 0, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(h_a, d_a, bytes, cudaMemcpyDeviceToHost);
    return 0;
}
To profile this code, we just need to compile it using nvcc, and then run nvprof with the program filename as an argument.
Listing 2: Running nvprof
$ nvcc profile.cu -o profile.exe
$ nvprof ./profile.exe
Using the Citrisdance (NVIDIA K20) server I receive the following output:
==17821== NVPROF is profiling process 17821, command: ./profile.exe
==17821== Profiling application: ./profile.exe
==17821== Profiling result:
            Type  Time(%)      Time  Calls       Avg       Min       Max  Name
 GPU activities:   51.35%  1.5589ms      1  1.5589ms  1.5589ms  1.5589ms  [CUDA memcpy DtoH]
                   48.65%  1.4772ms      1  1.4772ms  1.4772ms  1.4772ms  [CUDA memcpy HtoD]
      API calls:   98.80%  480.32ms      1  480.32ms  480.32ms  480.32ms  cudaMalloc
                    0.93%  4.5109ms      2  2.2554ms  1.7877ms  2.7232ms  cudaMemcpy
                    0.21%  1.0257ms    188  5.4550us     220ns  205.44us  cuDeviceGetAttribute
                    0.04%  197.15us      2  98.574us  66.692us  130.46us  cuDeviceTotalMem
                    0.02%  93.452us      2  46.726us  44.398us  49.054us  cuDeviceGetName
                    0.00%  6.8620us      4  1.7150us     342ns  5.2240us  cuDeviceGet
                    0.00%  3.6260us      3  1.2080us     347ns  2.4570us  cuDeviceGetCount
We see that nvprof gives a full breakdown of the program, including the CUDA API calls and the GPU activities. We see that the majority of the time is spent on memory allocation, but barring this API call, the next most time-consuming operations are the memory transfers. Memory transfers are much more common than allocation in most applications.
1.2 Minimizing Data Transfers
We should not use only the GPU execution time of a kernel relative to that of the corresponding CPU function to decide whether to run the GPU or CPU version. We also need to consider the cost of moving data across the PCI-e bus, especially when we are initially porting code to CUDA. Because of the heterogeneous programming model of CUDA (using both the CPU and GPU), code can be ported to the GPU one kernel at a time. In the initial stages of writing CUDA code, data transfers may dominate the overall execution time. It is worthwhile to monitor time spent on data transfers separately from time spent in computation within a kernel. It is easy to use the command-line profiler for this, as demonstrated above. As more of our code is ported to CUDA, we will remove intermediate transfers and decrease the overall execution time correspondingly.
1.3 Pinned Memory
Pageable memory space means memory contents that can be paged in and out between DRAM and a secondary storage device. Host memory allocations are generally pageable.
The main motivation for using pinned memory is to perform asynchronous transfers of data from the host to the device. This is accomplished by using the CUDA primitive cudaMemcpyAsync() and related functions. Additionally, certain performance benefits come from using pinned (or page-locked) memory. In this section we will give a few examples of how to allocate pinned memory, and we investigate features of this type of memory.
1.3.1 About pinned memory
The workings of paged memory are best described by the post on http://devblogs.nvidia.com, which is paraphrased here.
Data allocations on the CPU are pageable by default. The GPU cannot access data directly from pageable host memory, so when a data transfer from pageable host memory to the device is invoked, the CUDA driver will first allocate a temporary page-locked, or "pinned", host array. The data is then copied into this pinned array for transfer to the device. An illustration of this process is provided by the NVIDIA blog post:
Figure 1: Regular Data Transfer vs Pinned Data Transfer; NVIDIA Developer Blog
As shown in Figure 1, pinned memory is used as a staging area for transfers between the device and the host. We can avoid the cost of the transfer between pageable and pinned host arrays by directly allocating the host arrays in pinned memory. Allocate pinned host memory in CUDA C/C++ using cudaMallocHost() or cudaHostAlloc(), and free the memory with cudaFreeHost(). It is possible for pinned memory allocation to fail, so we should check for errors using the cudaError_t type. The following code demonstrates allocation of pinned memory with error checking.
cudaError_t status = cudaMallocHost((void**)&h_aPinned, bytes);
if (status != cudaSuccess)
    printf("Error allocating pinned host memory\n");
Data transfers using host pinned memory use the same cudaMemcpy() syntax as transfers with pageable memory. We can use the following "bandwidthtest" program to compare pageable and pinned transfer rates.
Listing 3: Bandwidth Test
#include <stdio.h>
#include <assert.h>

// Convenience function for checking CUDA runtime API results
// can be wrapped around any runtime API call. No-op in release builds.
inline cudaError_t checkCuda(cudaError_t result)
{
#if defined(DEBUG) || defined(_DEBUG)
    if (result != cudaSuccess) {
        fprintf(stderr, "CUDA Runtime Error: %s\n", cudaGetErrorString(result));
        assert(result == cudaSuccess);
    }
#endif
    return result;
}

void profileCopies(float *h_a, float *h_b, float *d, unsigned int n, char *desc)
{
    printf("\n%s transfers\n", desc);
    unsigned int bytes = n * sizeof(float);

    // events for timing
    cudaEvent_t startEvent, stopEvent;
    checkCuda( cudaEventCreate(&startEvent) );
    checkCuda( cudaEventCreate(&stopEvent) );

    checkCuda( cudaEventRecord(startEvent, 0) );
    checkCuda( cudaMemcpy(d, h_a, bytes, cudaMemcpyHostToDevice) );
    checkCuda( cudaEventRecord(stopEvent, 0) );
    checkCuda( cudaEventSynchronize(stopEvent) );

    float time;
    checkCuda( cudaEventElapsedTime(&time, startEvent, stopEvent) );
    printf("  Host to Device bandwidth (GB/s): %f\n", bytes * 1e-6 / time);

    checkCuda( cudaEventRecord(startEvent, 0) );
    checkCuda( cudaMemcpy(h_b, d, bytes, cudaMemcpyDeviceToHost) );
    checkCuda( cudaEventRecord(stopEvent, 0) );
    checkCuda( cudaEventSynchronize(stopEvent) );

    checkCuda( cudaEventElapsedTime(&time, startEvent, stopEvent) );
    printf("  Device to Host bandwidth (GB/s): %f\n", bytes * 1e-6 / time);

    for (int i = 0; i < n; ++i) {
        if (h_a[i] != h_b[i]) {
            printf("*** %s transfers failed ***\n", desc);
            break;
        }
    }

    // clean up events
    checkCuda( cudaEventDestroy(startEvent) );
    checkCuda( cudaEventDestroy(stopEvent) );
}

int main()
{
    unsigned int nElements = 4*1024*1024;
    const unsigned int bytes = nElements * sizeof(float);

    // host arrays
    float *h_aPageable, *h_bPageable;
    float *h_aPinned, *h_bPinned;

    // device array
    float *d_a;

    // allocate and initialize
    h_aPageable = (float*) malloc(bytes);                    // host pageable
    h_bPageable = (float*) malloc(bytes);                    // host pageable
    checkCuda( cudaMallocHost((void**)&h_aPinned, bytes) );  // host pinned
    checkCuda( cudaMallocHost((void**)&h_bPinned, bytes) );  // host pinned
    checkCuda( cudaMalloc((void**)&d_a, bytes) );            // device

    for (int i = 0; i < nElements; ++i) h_aPageable[i] = i;
    memcpy(h_aPinned, h_aPageable, bytes);
    memset(h_bPageable, 0, bytes);
    memset(h_bPinned, 0, bytes);

    // output device info and transfer size
    cudaDeviceProp prop;
    checkCuda( cudaGetDeviceProperties(&prop, 0) );
    printf("\nDevice: %s\n", prop.name);
    printf("Transfer size (MB): %d\n", bytes / (1024 * 1024));

    // perform copies and report bandwidth
    profileCopies(h_aPageable, h_bPageable, d_a, nElements, "Pageable");
    profileCopies(h_aPinned, h_bPinned, d_a, nElements, "Pinned");

    printf("\n");

    // cleanup
    cudaFree(d_a);
    cudaFreeHost(h_aPinned);
    cudaFreeHost(h_bPinned);
    free(h_aPageable);
    free(h_bPageable);

    return 0;
}
The data transfer rate can depend on the type of host system (motherboard, CPU, and chipset) as well as the GPU. Running this program on Hummingbird we have the following output:
Listing 4: Output of Bandwidth Test
Device: Tesla P100-PCIE-16GB
Transfer size (MB): 16

Pageable transfers
  Host to Device bandwidth (GB/s): 2.990821
  Device to Host bandwidth (GB/s): 4.364375

Pinned transfers
  Host to Device bandwidth (GB/s): 12.017788
  Device to Host bandwidth (GB/s): 12.726673
On Hummingbird the CPU is an AMD Opteron processor and achieves a decent pageable transfer rate. However, we find that the pinned transfer rate is much more impressive, offering more than 4 times the bandwidth host to device and 3 times the bandwidth device to host. Note that the pageable transfer rate depends on the speed of the CPU. Faster CPUs will offer better pageable bandwidths; however, with modern GPUs pinned bandwidths will exceed this capability.
A word of warning: you should not over-allocate pinned memory. Doing so can reduce overall system performance because it reduces the amount of physical memory available to the operating system and other programs. How much is too much is difficult to tell a priori, so as with most optimizations, test your code and the systems it runs on for optimal performance parameters.
1.3.2 Batching Small Transfers
Due to the overhead associated with CPU-to-GPU memory transfers, it is better to batch many small transfers together into a single transfer. This is easy to do by using a temporary array, preferably pinned, and packing it with the data to be transferred.
2 CUDA Streams
In the previous section we discussed how to transfer data efficiently between the host and device. In this section, we discuss how to overlap data transfers with computation on the host, computation on the device, and other data transfers between the host and device. Through the use of CUDA streams, overlap between data transfers and other operations can be achieved.
A stream in CUDA is a sequence of operations that execute on the device in the order in which they are issued by the host code. While operations within a stream are guaranteed to execute in the prescribed order, operations in different streams can be interleaved and, when possible, they will run concurrently.
2.1 The default stream
All device operations (kernels and data transfers) in CUDA run in a stream. When no stream is specified, the default stream (also referred to as the "null stream") is used. The default stream is different from other streams because it is a synchronizing stream with respect to the device: no operation in the default stream will begin until all previously issued operations in any stream on the device have completed, and an operation in the default stream must complete before any other operation (in any stream on the device) will begin.
Let us check out a simple code snippet that uses the default stream:
Listing 5: Code Snippet Default Stream
cudaMemcpy(d_a, a, numBytes, cudaMemcpyHostToDevice);
increment<<<1,N>>>(d_a);
cudaMemcpy(a, d_a, numBytes, cudaMemcpyDeviceToHost);
In this snippet, from the perspective of the GPU, all three operations are issued to the same (default) stream and will execute in the order in which they were issued.
From the perspective of the CPU, the implicit data transfers are blocking or synchronous transfers, while the kernel launch is asynchronous. Since the host-to-device transfer on the first line is synchronous, the CPU thread will not reach the kernel call on the second line until the host-to-device transfer is complete. Once the kernel is issued, the CPU thread moves to the third line, but the transfer on that line will not begin due to the device-side order of execution.
The asynchronous behavior of kernel launches from the CPU's perspective makes overlapping device and host computation very easy; take the following snippet for example:
cudaMemcpy(d_a, a, numBytes, cudaMemcpyHostToDevice);
increment<<<1,N>>>(d_a);
myCpuFunction(b);
cudaMemcpy(a, d_a, numBytes, cudaMemcpyDeviceToHost);
In this code, as soon as the kernel is launched on the device the CPU thread executes myCpuFunction(), overlapping the CPU function with the kernel execution on the GPU. Whether the host function or the device kernel completes first doesn't affect the subsequent device-to-host memory transfer, which begins only after the kernel has completed. From the perspective of the GPU, nothing has changed from the previous example; the device is completely unaware of myCpuFunction().
2.2 Non-default streams
Streams other than the null stream are declared, created, and destroyed in the host code, as in this example:
Listing 6: Streams
cudaStream_t stream1;
cudaError_t result;
result = cudaStreamCreate(&stream1);
result = cudaStreamDestroy(stream1);
In order to issue a data transfer to a non-default CUDA stream, we use the cudaMemcpyAsync() function, which is similar to the regular cudaMemcpy() function we have been using, but it takes an additional stream argument.
result = cudaMemcpyAsync(d_a, a, N, cudaMemcpyHostToDevice, stream1);
This function is asynchronous with the host, so control returns to the host thread immediately after the transfer is issued. There are 2D and 3D extensions of this function as well. To issue a kernel to a non-default stream, we specify the stream in question as the fourth argument of the kernel execution configuration:
increment<<<1, N, 0, stream1>>>(d_a);
2.3 Synchronization with streams
Since all operations in non-default streams are asynchronous with respect to the host code, you will find
situations that require you to synchronize the host code with operations in a stream. There are several ways to do
this. The brute-force way is to use cudaDeviceSynchronize(), which blocks the
host thread until all previously issued GPU operations are complete. In most cases this is overkill, and
can hurt performance by stalling both the device and the host thread.
The CUDA stream API has multiple less severe methods of synchronization. The function
cudaStreamSynchronize(stream) can be used to block the host from proceeding until all previously
issued operations on the specified stream have completed. The function cudaStreamQuery(stream) tests
whether all operations issued to the specified stream have completed, without blocking host execution. The
functions cudaEventSynchronize(event) and cudaEventQuery(event) do the same, except that their result
is based on whether a particular event has been recorded. You can also make the operations in a
single stream wait on a specific event using cudaStreamWaitEvent(stream, event, flags).
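As a small illustration of event-based synchronization (a sketch only; the kernel names kernelA and kernelB, the two streams, and the launch configuration are placeholders, not code from this text):
cudaEvent_t ev;
cudaEventCreate(&ev);
kernelA<<<grid, block, 0, stream1>>>(d_a);
cudaEventRecord(ev, stream1);          // the event completes once kernelA has finished
cudaStreamWaitEvent(stream2, ev, 0);   // work issued to stream2 below waits for the event
kernelB<<<grid, block, 0, stream2>>>(d_a);
cudaEventSynchronize(ev);              // optionally block the host on the event as well
cudaEventDestroy(ev);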
2.4 Overlapping Kernel Execution and Data Transfer
Previously we demonstrated overlapping kernel execution in the default stream with execution of operations
on the CPU. But our goal in this section is to show how to overlap kernel execution with data transfers.
There are several requirements for this to happen:
• The device must be capable of "concurrent copy and execution"
• The kernel execution and the data transfer to be overlapped must each occur in different, non-default
streams
• The host memory involved in the data transfer must be pinned memory.
So we're going to modify the simple code from the previous section; the full code is available on GitHub. In this
modified snippet, we break up the array of size N into chunks of streamSize elements. Since the kernel
operates independently on all elements, each of the chunks can be processed independently. The number of
(non-default) streams used is nStreams = N/streamSize. There are multiple ways to implement the domain
decomposition of the data and processing; one way is to loop over the operations for each chunk.
for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  cudaMemcpyAsync(&d_a[offset], &a[offset], streamBytes, cudaMemcpyHostToDevice, stream[i]);
  kernel<<<streamSize/blockSize, blockSize, 0, stream[i]>>>(d_a, offset);
  cudaMemcpyAsync(&a[offset], &d_a[offset], streamBytes, cudaMemcpyDeviceToHost, stream[i]);
}
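The loop above assumes that the streams already exist and that the host array a is pinned. A minimal setup consistent with the snippet might look like this (the sizes here are placeholders, not values from the original code):
const int N = 4 * 1024 * 1024;
const int nStreams = 4;
const int streamSize = N / nStreams;
const int streamBytes = streamSize * sizeof(float);
float *a, *d_a;
cudaMallocHost(&a, N * sizeof(float));   // pinned host memory, required for async copies
cudaMalloc(&d_a, N * sizeof(float));
cudaStream_t stream[nStreams];
for (int i = 0; i < nStreams; ++i)
  cudaStreamCreate(&stream[i]);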
Another approach is to batch similar operations together, issuing all the host-to-device transfers first, followed
by the kernel launches, and lastly the device-to-host transfers.
for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  cudaMemcpyAsync(&d_a[offset], &a[offset], streamBytes, cudaMemcpyHostToDevice, stream[i]);
}
for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  kernel<<<streamSize/blockSize, blockSize, 0, stream[i]>>>(d_a, offset);
}
for (int i = 0; i < nStreams; ++i) {
  int offset = i * streamSize;
  cudaMemcpyAsync(&a[offset], &d_a[offset], streamBytes, cudaMemcpyDeviceToHost, stream[i]);
}
Both asynchronous methods shown above yield the correct results, and in both cases dependent operations
are issued to the same stream in the order in which they need to be executed. The results can vary depending
on the GPU architecture. On citrisdance, a Kepler-based server, I get the following results:
Device: Tesla K20c
Time for sequential transfer and execute (ms): 7.590112, max error: 1.192093e-07
Time for asynchronous V1 transfer and execute (ms): 3.995456, max error: 1.192093e-07
Time for asynchronous V2 transfer and execute (ms): 3.975712, max error: 1.192093e-07
On Hummingbird, which has a more recent GPU, the same test runs faster:
Device: Tesla P100-PCIE-16GB
Time for sequential transfer and execute (ms): 3.154144, max error: 1.192093e-07
Time for asynchronous V1 transfer and execute (ms): 1.971200, max error: 1.192093e-07
Time for asynchronous V2 transfer and execute (ms): 1.959584, max error: 1.192093e-07
This is mostly due to hardware advances over the years, but also due to changes in compute capability.
For devices of compute capability 3.5 or higher, the Hyper-Q feature eliminates the need to tailor the
launch order, so either approach yields similar results. However, on older devices, such as the Tesla C1060
(a compute capability 1.3 device), the results are different:
Device: Tesla C1060
Time for sequential transfer and execute (ms): 12.92381, max error: 2.3841858e-07
Time for asynchronous V1 transfer and execute (ms): 13.63690, max error: 2.3841858e-07
Time for asynchronous V2 transfer and execute (ms): 8.84588, max error: 2.3841858e-07
We see that on this device the version-one asynchronous method actually runs slower than the sequential transfer. A good
way to understand this is that the C1060 has only one copy engine and one kernel engine. The following
diagram illustrates the mode of operation on the C1060.
Figure 2: Single Engine, Nvidia Developer Blogs
whereas on a newer device, say a C2050, which contains two copy engines and a kernel engine, the timelines
look more like this:
Figure 3: Multiple Engines, Nvidia Developer Blogs
The number of engines in these older models dictated the nature of asynchronous operations. The
Hyper-Q firmware allows for more effective grid management. The following illustrations show
the profiling of an application without and with Hyper-Q.
Figure 4: Profiling of older multi-engine GPUs without Hyper-Q
Figure 5: Multi-engine GPUs with Hyper-Q
GPUs with Hyper-Q allow the hardware to compact asynchronous launches issued using either method, allowing
the developer to not worry (as much) about the hardware implementation.
3 Optimizing Calculations
In the previous sections we looked into optimizing memory transactions; in this section we'll look into
optimizing calculations within kernels. As an example we will follow Mark Harris' Optimizing Parallel
Reduction in CUDA presentation. Using it, we will discuss 7 different versions of the reduction kernel and,
along the way, several important optimization strategies. Utilizing these strategies we strive to reach GPU peak
performance. We need to choose the right metric: FLOP/s for compute-bound kernels, and bandwidth
for memory-bound kernels. Reductions have very low arithmetic intensity, 1 FLOP per element loaded, which
makes them bandwidth-bound. The following code is tested on an Nvidia Titan Xp GPU, which has a theoretical
peak bandwidth of 547.6 GB/s. In what follows we will try to design an algorithm that gets close to this
theoretical bandwidth. The true measure is the effective bandwidth

BW_effective = (R_B + W_B) / (t × 10^9)

where R_B is the number of bytes read by the kernel and W_B is the number of bytes written. The factor of
10^9 in the denominator converts the result to GB/s when t is measured in seconds.
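For concreteness, a small host-side sketch of this calculation for a reduction over N integers (the names ms, N and numBlocks are our placeholders; ms would come from timing the kernel, for example with CUDA events):
double RB = (double)N * sizeof(int);          // bytes read by the kernel
double WB = (double)numBlocks * sizeof(int);  // bytes written (one partial sum per block)
double bw = (RB + WB) / (ms / 1e3) / 1e9;     // ms is in milliseconds; the result is in GB/s
printf("Effective bandwidth: %.3f GB/s\n", bw);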
The reduction we used before relies on a method called interleaved addressing.
Listing 7: First Reduction
template <class T>
__global__ void reduce0(T *g_idata, T *g_odata) {
  extern __shared__ T sdata[];
  // each thread loads one element from global to shared mem
  unsigned int tid = threadIdx.x;
  unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
  sdata[tid] = g_idata[i];
  __syncthreads();
  // do reduction in shared mem
  for (unsigned int s = 1; s < blockDim.x; s *= 2) {
    if (tid % (2 * s) == 0) {
      sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
  }
  // write result for this block to global mem
  if (tid == 0)
    g_odata[blockIdx.x] = sdata[0];
}
The if statement within this code is highly divergent: it branches the threads so that many of the launched threads
are inactive, which can result in very poor performance. We will test this code and the code that follows
using 1 million (2^20) integers.
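For reference, a hedged sketch of how such a kernel might be driven from the host; note the third launch parameter, which sizes the dynamically allocated extern __shared__ array (the names d_in, d_out and the grid sizing are our own placeholders):
const int N = 1 << 20;                 // ~1 million integers, as in the tests described above
const int blockSize = 1024;
const int numBlocks = N / blockSize;
int *d_in, *d_out;
cudaMalloc(&d_in,  N * sizeof(int));
cudaMalloc(&d_out, numBlocks * sizeof(int));
reduce0<int><<<numBlocks, blockSize, blockSize * sizeof(int)>>>(d_in, d_out);
// d_out now holds one partial sum per block; reduce it again (or on the host) for the final total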
Upon execution of this first reduction, the elapsed time is 0.132096 ms, with an effective bandwidth of 31.814
GB/s. This is far from the theoretical bandwidth. Let us change this kernel to remove the divergent
branching.
Listing 8: Reduction without divergent branching
template <class T>
__global__ void reduce1(T *d_out, const T *d_in) {
  // sdata is allocated in the kernel call via dynamic shared memory
  extern __shared__ T sdata[];
  int myId = threadIdx.x + blockDim.x * blockIdx.x;
  int tid = threadIdx.x;
  // load shared mem from global mem
  sdata[tid] = d_in[myId];
  __syncthreads(); // always sync before using sdata
  // do reduction over shared memory
  for (int s = 1; s < blockDim.x; s *= 2) {
    int index = 2 * s * tid; // strided indexing!
    if (index < blockDim.x) {
      sdata[index] += sdata[index + s];
    }
    __syncthreads(); // make sure all additions are finished
  }
  // only tid 0 writes out the result!
  if (tid == 0) {
    d_out[blockIdx.x] = sdata[0];
  }
}
All we changed in this code was the divergent inner loop: we moved to a strided indexing
scheme to create non-divergent branches. Upon execution of this kernel, we find that the elapsed time is
now 0.071264 ms with an effective bandwidth of 58.9709 GB/s, a 1.85× speed-up
over the original. There is an additional problem here, however: the strided indexing induces shared memory bank conflicts.
We will change our looping scheme to use sequential addressing, which alleviates the shared memory
bank conflicts. To do this we swap the inner loop for the following,
Listing 9: Reduce with sequential addressing
for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
  if (tid < s) {
    sdata[tid] += sdata[tid + s];
  }
  __syncthreads(); // make sure all additions are finished
}
all else staying the same. Executing this kernel gives an elapsed time of 0.062816 ms and an
effective bandwidth of 66.9017 GB/s, a 1.13× speed-up over the previous kernel.
The problem with this implementation is that there are many idle threads: we only use half of the
threads in the thread block during the first iteration of the loop. To fix this, we will halve the number of
blocks used and change the way we load into shared memory. Each thread will do two loads and
perform the first addition of the reduction while loading into shared memory. So, we change our kernel to
use:
Listing 10: First Add During Load
int myId = threadIdx.x + (blockDim.x * 2) * blockIdx.x;
int tid = threadIdx.x;
// load shared mem from global mem: two loads and a first add per thread
sdata[tid] = d_in[myId] + d_in[myId + blockDim.x];
__syncthreads(); // always sync before using sdata
Upon execution, we reduce the elapsed time to 0.034528 ms and increase the bandwidth to 121.713 GB/s,
giving a speed-up of about 1.82×.
At 121 GB/s we're still far from the bandwidth upper bound, so there is likely a bottleneck in
instruction overhead: ancillary instructions that are not loads, stores, or arithmetic for the core computation,
in this case address arithmetic and loop overhead. Our strategy for mitigating
this will be to unroll loops.
In the next kernel we will unroll the last warp; that is, we will change our loop to end at s = 32. This
saves useless work in every warp, because without
the unroll every warp executes all iterations of the for loop and the if statement.
Listing 11: Unrolling the last warp
for (unsigned int s = blockDim.x / 2; s > 32; s >>= 1) {
  if (tid < s) {
    sdata[tid] += sdata[tid + s];
  }
  __syncthreads(); // make sure all additions are finished
}
if (tid < 32) {
  sdata[tid] += sdata[tid + 32];
  sdata[tid] += sdata[tid + 16];
  sdata[tid] += sdata[tid + 8];
  sdata[tid] += sdata[tid + 4];
  sdata[tid] += sdata[tid + 2];
  sdata[tid] += sdata[tid + 1];
}
The effect of unrolling the last warp is fairly noticeable: we reduce the kernel time to 0.023936 ms with a
bandwidth of 175.572 GB/s, giving a speed-up of about 1.44× over the last kernel. (For this
warp-synchronous final stage to be correct, sdata should be declared volatile, or explicit __syncwarp() calls
should be added on recent architectures, so the compiler does not keep partial sums in registers.)
Now, if we know the number of iterations at compile time, we can completely unroll the
reduction. Luckily, the block size is limited by the GPU to 1024 threads, and we're sticking to powers of
2 for block sizes, so we can easily unroll for a fixed block size, which we define with a preprocessor directive
blockSize. Using this value, all if statements involving blockSize are evaluated at compile time.
Therefore we can change the above for loop into the following:
Listing 12: Unrolling all the loops
if (blockSize >= 512) {
  if (tid < 256) { sdata[tid] += sdata[tid + 256]; }
  __syncthreads();
}
if (blockSize >= 256) {
  if (tid < 128) { sdata[tid] += sdata[tid + 128]; }
  __syncthreads();
}
if (blockSize >= 128) {
  if (tid < 64) { sdata[tid] += sdata[tid + 64]; }
  __syncthreads();
}
if (tid < 32) {
  sdata[tid] += sdata[tid + 32];
  sdata[tid] += sdata[tid + 16];
  sdata[tid] += sdata[tid + 8];
  sdata[tid] += sdata[tid + 4];
  sdata[tid] += sdata[tid + 2];
  sdata[tid] += sdata[tid + 1];
}
Using this method we shave the time down to 0.021920 ms and raise the bandwidth to 191.720 GB/s, a
modest speed-up of about 1.1×.
Before getting to the last optimization, let's consider the complexity of the reduction algorithm. There
are log2(N) parallel steps, where step S performs N/2^S independent operations. For N = 2^D, the reduction
algorithm therefore performs

    ∑_{S=1}^{D} 2^(D−S) = N − 1

operations, making it work-efficient. With P threads operating in parallel, the time complexity of the reduction
is O(N/P + log2(N)). Now we need to think about cost: the cost of a parallel algorithm is the number of
processors times the time complexity. If we allocate N processors, the cost is O(N log N), which is not
cost-efficient. Brent's theorem suggests that we use O(N/log N) threads, each doing O(log N)
sequential work. Then all O(N/log N) threads cooperate for O(log N) steps, resulting in O(N)
cost. This is called algorithm cascading, and in practice it can lead to significant speed-ups.
To cascade the reduction, we combine sequential and parallel reduction methods: each thread loads
and sums multiple elements into shared memory, and then we perform the tree-based reduction in shared
memory. Brent's theorem suggests that each thread should sum O(log N) elements, but it can sometimes be
beneficial to push this further and hide latency with even more work per thread; here we use 32
elements per thread. The changes in this last reduction are almost entirely in the load/serial-reduce phase:
Listing 13: Algorithm Cascading
int myId = threadIdx.x + (blockDim.x * 2) * blockIdx.x;
int tid = threadIdx.x;
int gridSize = blockDim.x * 2 * gridDim.x;
sdata[tid] = 0;
// load shared mem from global mem: each thread sums multiple elements
while (myId < n) {
  sdata[tid] += d_in[myId] + d_in[myId + blockDim.x];
  myId += gridSize;
}
__syncthreads();
Then upon kernel launch we have
reduce6 <<<dimGrid6 , dimBlock , 1024* sizeof(int) >>>(reduced , array , 32);
Using algorithm cascading we reduce the elapsed time to 0.008192 ms, reaching 513 GB/s of bandwidth
and giving a 2.68× speed-up.
Kernel     Time (2^20 integers)   Bandwidth       Step Speed-Up   Cumulative Speed-Up
Kernel 1   0.132096 ms            31.814 GB/s     -               -
Kernel 2   0.071264 ms            58.9709 GB/s    1.853x          1.853x
Kernel 3   0.062816 ms            66.9017 GB/s    1.134x          2.102x
Kernel 4   0.034528 ms            121.713 GB/s    1.819x          3.826x
Kernel 5   0.023936 ms            175.572 GB/s    1.442x          5.519x
Kernel 6   0.021920 ms            191.720 GB/s    1.092x          6.026x
Kernel 7   0.008192 ms            513.001 GB/s    2.676x          16.13x
Table 1: Performance of the reduction kernels after each optimization.
So putting it all together, we achieve a speed-up of over 16×. Here is an interesting observation:
• Algorithmic optimizations: by changing the addressing and using algorithm cascading we achieved a
10.23× speed-up collectively.
• Code optimizations: by unrolling the loops we achieved only a 1.58× speed-up collectively.
So a good rule of thumb is to optimize your algorithm first, and then optimize your code with techniques such as
loop unrolling.
In conclusion: to fully optimize CUDA code, you should understand CUDA performance characteristics,
chiefly memory coalescing, divergent branching, bank conflicts, and latency hiding. Consider the algorithm
that you are programming and ascertain whether it is compute-limited or bandwidth-limited. Using
parallel algorithm complexity theory, we found how to cascade the algorithm, allowing for a substantial
optimization. Identify bottlenecks in your algorithm, as we did with the memory and instruction overhead.
Finally, be sure to optimize your algorithm first, and then optimize your code.
Fundamental CUDA
Optimization
NVIDIA Corporation
Outline
Fermi/Kepler Architecture
Kernel optimizations
Launch configuration
Global memory throughput
Shared memory access
Instruction throughput / control flow
Optimization of CPU-GPU interaction
Maximizing PCIe throughput
Overlapping kernel execution with memory copies
Most concepts in this presentation apply to any language or API on NVIDIA GPUs
20-Series Architecture (Fermi)
512 Scalar Processor (SP) cores execute parallel thread instructions
16 Streaming Multiprocessors (SMs), each containing:
32 scalar processors (32 fp32 / int32 ops per clock, 16 fp64 ops per clock)
4 Special Function Units (SFUs)
Shared register file (128 KB)
48 KB / 16 KB shared memory
16 KB / 48 KB L1 data cache
Kepler cc 3.5 SM (GK110)
"SMX" (enhanced SM)
192 SP units ("cores"), 64 DP units, LD/ST units
4 warp schedulers, each dual-issue capable
K20: 13 SMXs, 5 GB; K20X: 14 SMXs, 6 GB; K40: 15 SMXs, 12 GB
Execution Model
Threads are executed by scalar processors.
Thread blocks are executed on multiprocessors; thread blocks do not migrate.
Several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources (shared memory and register file).
A kernel is launched as a grid of thread blocks.
Warps
A thread block consists of 32-thread warps.
A warp is executed physically in parallel (SIMD) on a multiprocessor.
Memory Architecture
[Diagram: the host (CPU, chipset, DRAM) connects over PCIe to device DRAM (global, constant, texture, local); each GPU multiprocessor has registers and shared memory, plus constant/texture caches and L1/L2 cache.]
Launch Configuration
Key to understanding:
Instructions are issued in order
A thread stalls when one of the operands isn’t ready:
Memory read by itself doesn’t stall execution
Latency is hidden by switching threads
GMEM latency: 400-800 cycles
Arithmetic latency: 18-22 cycles
How many threads/threadblocks to launch?
Conclusion:
Need enough threads to hide latency
Launch Configuration
Hiding arithmetic latency:
Need ~18 warps (576 threads) per SM
Or, latency can also be hidden with independent instructions from the
same warp
For example, if instruction never depends on the output of preceding
instruction, then only 9 warps are needed, etc.
Maximizing global memory throughput:
Depends on the access pattern, and word size
Need enough memory transactions in flight to saturate the bus
Independent loads and stores from the same thread
Loads and stores from different threads
Larger word sizes can also help (float2 is twice the transactions of float, for
example)
Maximizing Memory Throughput
Increment of an array of 64M elements
Two accesses per thread (load then store)
The two accesses are dependent, so really 1 access per thread at a time
Tesla C2050, ECC on, theoretical bandwidth: ~120 GB/s
Several independent smaller
accesses have the same effect
as one larger one.
For example:
Four 32-bit ~= one 128-bit
Launch Configuration: Summary
Need enough total threads to keep GPU busy
Typically, you’d like 512+ threads per SM
More if processing one fp32 element per thread
Of course, exceptions exist
Threadblock configuration
Threads per block should be a multiple of warp size (32)
SM can concurrently execute up to 8 thread blocks
Really small thread blocks prevent achieving good occupancy
Really large thread blocks are less flexible
I generally use 128-256 threads/block, but use whatever is best for the application
For more details:
Vasily Volkov’s GTC2010 talk “Better Performance at Lower Occupancy”
(http://www.gputechconf.com/page/gtc-on-demand.html#session2238)
Global Memory Throughput
Memory Hierarchy Review
Local storage
Each thread has own local storage
Mostly registers (managed by the compiler)
Shared memory / L1
Program configurable: 16KB shared / 48 KB L1 OR 48KB shared / 16KB L1
Shared memory is accessible by the threads in the same threadblock
Very low latency
Very high throughput: 1+ TB/s aggregate
L2
All accesses to global memory go through L2, including copies to/from CPU host
Global memory
Accessible by all threads as well as host (CPU)
High latency (400-800 cycles)
Throughput: up to 177 GB/s
[Diagram: each SM has its own registers, shared memory and L1 cache; all SMs share the L2 cache and global memory.]
GMEM Operations
Two types of loads:
Caching
Default mode
Attempts to hit in L1, then L2, then GMEM
Load granularity is 128-byte line
Non-caching
Compile with the -Xptxas -dlcm=cg option to nvcc
Attempts to hit in L2, then GMEM
– Do not hit in L1, invalidate the line if it’s in L1 already
Load granularity is 32-bytes
Stores:
Invalidate L1, write-back for L2
Load Operation
Memory operations are issued per warp (32 threads)
Just like all other instructions
Operation:
Threads in a warp provide memory addresses
Determine which lines/segments are needed
Request the needed lines/segments
Caching Load
Warp requests 32 aligned, consecutive 4-byte words
Addresses fall within 1 cache-line
Warp needs 128 bytes
128 bytes move across the bus on a miss
Bus utilization: 100%
Non-caching Load
Warp requests 32 aligned, consecutive 4-byte words
Addresses fall within 4 segments
Warp needs 128 bytes
128 bytes move across the bus on a miss
Bus utilization: 100%
Caching Load
Warp requests 32 aligned, permuted 4-byte words
Addresses fall within 1 cache-line
Warp needs 128 bytes
128 bytes move across the bus on a miss
Bus utilization: 100%
Non-caching Load
Warp requests 32 aligned, permuted 4-byte words
Addresses fall within 4 segments
Warp needs 128 bytes
128 bytes move across the bus on a miss
Bus utilization: 100%
Caching Load
Warp requests 32 misaligned, consecutive 4-byte words
Addresses fall within 2 cache-lines
Warp needs 128 bytes
256 bytes move across the bus on misses
Bus utilization: 50%
Non-caching Load
Warp requests 32 misaligned, consecutive 4-byte words
Addresses fall within at most 5 segments
Warp needs 128 bytes
160 bytes move across the bus on misses
Bus utilization: at least 80%
Some misaligned patterns will fall within 4 segments, so 100% utilization
Caching Load
All threads in a warp request the same 4-byte word
Addresses fall within a single cache-line
Warp needs 4 bytes
128 bytes move across the bus on a miss
Bus utilization: 3.125%
Non-caching Load
All threads in a warp request the same 4-byte word
Addresses fall within a single segment
Warp needs 4 bytes
32 bytes move across the bus on a miss
Bus utilization: 12.5%
Caching Load
Warp requests 32 scattered 4-byte words
Addresses fall within N cache-lines
Warp needs 128 bytes
N*128 bytes move across the bus on a miss
Bus utilization: 128 / (N*128)
Non-caching Load
Warp requests 32 scattered 4-byte words
Addresses fall within N segments
Warp needs 128 bytes
N*32 bytes move across the bus on a miss
Bus utilization: 128 / (N*32)
Impact of Address Alignment
Warps should access aligned regions for maximum memory throughput
L1 can help for misaligned loads if several warps are accessing a contiguous
region
ECC further significantly reduces misaligned store throughput
Experiment: copy 16 MB of floats, 256 threads/block
Greatest throughput drop: CA loads: 15%; CG loads: 32%
GMEM Optimization Guidelines
Strive for perfect coalescing
Align starting address (may require padding)
A warp should access within a contiguous region
Have enough concurrent accesses to saturate the bus
Process several elements per thread
Multiple loads get pipelined
Indexing calculations can often be reused
Launch enough threads to maximize throughput
Latency is hidden by switching threads (warps)
Try L1 and caching configurations to see which one works best
Caching vs non-caching loads (compiler option)
16KB vs 48KB L1 (CUDA call)
Shared Memory
Shared Memory
Uses:
Inter-thread communication within a block
Cache data to reduce redundant global memory accesses
Use it to improve global memory access patterns
Organization:
32 banks, 4-byte wide banks
Successive 4-byte words belong to different banks
Performance:
4 bytes per bank per 2 clocks per multiprocessor
smem accesses are issued per 32 threads (warp)
serialization: if N threads of 32 access different 4-byte words in the same
bank, N accesses are executed serially
multicast: N threads access the same word in one fetch
Could be different bytes within the same word
Bank Addressing Examples
No Bank Conflicts (two examples)
[Diagram: in both access patterns, each of the 32 threads in a warp touches a word in a distinct bank, so no conflicts occur.]
Bank Addressing Examples
2-way and 8-way Bank Conflicts
[Diagram: access patterns in which several threads of a warp map to the same bank, producing 2-way and 8-way conflicts.]
Shared Memory: Avoiding Bank Conflicts
32x32 SMEM array
Warp accesses a column:
32-way bank conflicts (threads in a warp access the same bank)
[Diagram: a 32x32 shared-memory array; every element of a column lives in the same bank, so a warp reading a column serializes 32 ways.]
Shared Memory: Avoiding Bank Conflicts
Add a column for padding:
32x33 SMEM array
Warp accesses a column:
32 different banks, no bank conflicts
[Diagram: with one padding column (32x33), consecutive elements of a column fall into 32 different banks, so the column access is conflict-free.]
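As a concrete illustration of the padding trick (our own sketch, not code from the slides), a 32x32 tile transpose where the extra column keeps the column-wise reads conflict-free:
__global__ void transpose32(const float *in, float *out) {
    __shared__ float tile[32][33];     // 33 columns: the padding shifts each row by one bank
    int x = threadIdx.x, y = threadIdx.y;
    tile[y][x] = in[y * 32 + x];       // coalesced, conflict-free row-wise store
    __syncthreads();
    out[y * 32 + x] = tile[x][y];      // column-wise read; distinct banks thanks to the padding
}
// launched, for example, as transpose32<<<1, dim3(32, 32)>>>(d_in, d_out);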
Instruction Throughput
& Control Flow
Runtime Math Library and Intrinsics
Two types of runtime math library functions
__func(): many map directly to hardware ISA
Fast but lower accuracy (see CUDA Programming Guide for full details)
Examples: __sinf(x), __expf(x), __powf(x, y)
func(): compile to multiple instructions
Slower but higher accuracy (5 ulp or less)
Examples: sin(x), exp(x), pow(x, y)
A number of additional intrinsics:
__sincosf(), __frcp_rz(), ...
Explicit IEEE rounding modes (rz,rn,ru,rd)
Control Flow
Instructions are issued per 32 threads (warp)
Divergent branches:
Threads within a single warp take different paths
if-else, ...
Different execution paths within a warp are serialized
Different warps can execute different code with no impact on
performance
Avoid diverging within a warp
Example with divergence:
if (threadIdx.x > 2) {...} else {...}
Branch granularity < warp size
Example without divergence:
if (threadIdx.x / WARP_SIZE > 2) {...} else {...}
Branch granularity is a whole multiple of warp size
CPU-GPU Interaction
Pinned (non-pageable) memory
Pinned memory enables:
faster PCIe copies
memcopies asynchronous with CPU
memcopies asynchronous with GPU
Usage
cudaHostAlloc / cudaFreeHost
instead of malloc / free
cudaHostRegister / cudaHostUnregister
pin regular memory after allocation
Implication:
pinned memory is essentially removed from host virtual memory
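A minimal sketch of both usage patterns (the buffer name and size are placeholders):
size_t bytes = 1 << 20;
float *h_buf;
cudaHostAlloc(&h_buf, bytes, cudaHostAllocDefault);   // allocate pinned host memory
// ... use h_buf for fast, asynchronous copies ...
cudaFreeHost(h_buf);

float *h_existing = (float *)malloc(bytes);
cudaHostRegister(h_existing, bytes, cudaHostRegisterDefault);  // pin an existing allocation
// ... use h_existing ...
cudaHostUnregister(h_existing);
free(h_existing);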
Streams and Async API
Default API:
Kernel launches are asynchronous with CPU
Memcopies (D2H, H2D) block CPU thread
CUDA calls are serialized by the driver
Streams and async functions provide:
Memcopies (D2H, H2D) asynchronous with CPU
Ability to concurrently execute a kernel and a memcopy
Stream = sequence of operations that execute in issue-order on
GPU
Operations from different streams may be interleaved
A kernel and memcopy from different streams can be overlapped
Overlap kernel and memory copy
Requirements:
D2H or H2D memcopy from pinned memory
Kernel and memcopy in different, non-0 streams
Code:
cudaStream_t stream1, stream2;
cudaStreamCreate(&stream1);
cudaStreamCreate(&stream2);
cudaMemcpyAsync( dst, src, size, dir, stream1 );
kernel<<<grid, block, 0, stream2>>>(…);
potentially
overlapped
Call Sequencing for Optimal Overlap
CUDA calls are dispatched to the hw in the sequence they were issued
Fermi can concurrently execute:
Up to 16 kernels
Up to 2 memcopies, as long as they are in different directions (D2H and H2D)
A call is dispatched if both are true:
Resources are available
Preceding calls in the same stream have completed
Scheduling:
Kernels are executed in the order in which they were issued
Threadblocks for a given kernel are scheduled if all threadblocks for preceding
kernels have been scheduled and there still are SM resources available
Note that if a call blocks, it blocks all other calls of the same type behind it,
even in other streams
Type is one of { kernel, memcopy}
Stream Examples (current HW)
[Timeline diagrams for the issue orders K1,M1,K2,M2; K1,K2,M1,M2; K1,M1,M2; K1,M2,M1; and K1,M2,M2, showing which calls overlap on current hardware. K: kernel, M: memcopy, integer: stream ID.]
More on Dual Copy
Fermi is capable of duplex communication with the host
PCIe bus is duplex
The two memcopies must be in different streams, different directions
Not all current host systems can saturate duplex PCIe bandwidth:
Likely issues with IOH chips
If this is important to you, test your host system
Duplex Copy: Experimental Results
[Diagrams: two host topologies (CPU-0 with IOH X58, and CPU-0 with IOH D36 plus a second CPU and its DRAM). The first system reaches 10.8 GB/s and 7.5 GB/s for the two directions; the second reaches 10.8 GB/s and 11 GB/s. Link rates shown: QPI 6.4 GT/s (25.6 GB/s), 3xDDR3 1066 MHz (25.8 GB/s), PCIe x16 (16 GB/s).]
Unified Virtual Addressing
[Diagram: without UVA, the CPU and each GPU have their own separate address spaces; with UVA, system memory and all GPU memories are mapped into a single virtual address space.]
Easier to program with a single address space
Summary
Kernel Launch Configuration:
Launch enough threads per SM to hide latency
Launch enough threadblocks to load the GPU
Global memory:
Maximize throughput (GPU has lots of bandwidth, use it effectively)
Use shared memory when applicable (over 1 TB/s bandwidth)
GPU-CPU interaction:
Minimize CPU/GPU idling, maximize PCIe throughput
Use analysis/profiling when optimizing:
“Analysis-driven Optimization” part of the tutorial following
Questions?
Abstract
Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel Data Layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication—a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT)—this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique’s broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
Keywords
Data Layout Optimization, CUDA Performance Optimization, GPU Memory Optimization, Dynamic Programming, Matrix Multiplication, Memory Access Pattern Optimization in CUDA
Share and Cite:
1. Introduction
Graphics Processing Units (GPUs) have become ubiquitous accelerators in modern computing systems, offering tremendous parallel processing capabilities. Today, a single GPU provides thousands of compute cores capable of delivering teraflops of computational power, making GPU accelerator cards increasingly deployed in everything from mobile devices to cloud servers for a wide range of applications including artificial intelligence, scientific computing, and graphics [1] [2]. Despite these advancements, designing performant and efficient GPU code is fraught with programming complexities and architectural constraints, particularly in threading, memory access, and parallelism management.
A critical challenge in leveraging GPU capabilities is managing data movement and organization on systems where there are processing-speed mismatches across components. For GPUs, which feature wide vector units and high memory bandwidth, disorganized or sparse data access patterns can incur high latency, leading to inefficiencies as memory controllers struggle to keep pace [3] . Optimizing data layout and access patterns is thus essential for feeding compute units efficiently and maximizing floating-point throughput. Another significant concern is Amdahl’s law, which posits that the speedup potential from parallel hardware is limited by the serial portions of code, indicating that various overheads such as kernel launch latency, host-to-device transfers, and synchronization delays can undermine the benefits of extensive parallelism. The efficient execution of matrix multiplication operations is a cornerstone in various computational domains, including deep learning, scientific simulations, and big data analytics. Optimizing these operations on GPUs can unlock significant performance gains, enabling faster training of neural networks, more accurate simulations, and accelerated data processing pipelines.
In this paper, we present a comprehensive approach aimed at addressing the aforementioned challenges, highlighting the efficacy of our proposed data layout optimization mechanism through the application of matrix multiplication. By focusing on the chained matrix multiplication (CMM) problem, recognized for its importance in various fields such as machine learning, physics simulations, and graphics, we develop and implement a dynamic programming algorithm optimized for CMM, utilizing CUDA code. The integration of a data layout transformation demonstrates a substantial improvement in memory locality and coalescing during parallel processing, underscoring the effectiveness of our approach [4] . Additionally, to further enhance performance, we explore additional parallelization techniques at the data level, including parallel diagonal computation and 2D thread block mapping, to maximize fine-grained concurrency. At the task level, we leverage streams and events to enable concurrent execution of multiple problem instances, optimizing the overlap between data transfers and computational processes, thereby further refining our optimization strategy.
This paper’s principles and techniques serve as a case study for unlocking the full potential of modern parallel accelerators. As the adoption of heterogeneous and GPU-based high-performance computing continues to grow rapidly, the programming practices and optimization strategies we discuss will be essential for harnessing the benefits of this technology [5] . Our work addresses key optimization challenges around parallelism management, data organization, and orchestration strategy [6] , offering insights that can be applied to adapt and implement various complex workloads on massively parallel processors.
2. Related Work
Prior work has developed optimizations for irregular data access [7] , data layout selection [8] , and communication reducing techniques when mapping algorithms onto GPU systems. While these existing approaches have demonstrated varying degrees of success, they often overlook the intricate interplay between memory access patterns, data layout, and the underlying GPU architecture. Additionally, many techniques are tailored to specific application domains or workloads, limiting their broader applicability. Our work aims to address these limitations by proposing a novel Data Layout strategy that restructures memory data arrangement to enhance locality and coalescing, thereby optimizing performance across a wide range of GPU-accelerated applications. Li et al. [9] proposed a simple yet effective data layout arbitration framework that automatically picks up the beneficial data layout for different DNNs under different pruning schemes. The proposed framework is built upon a formulated cache estimation model. Experimental results indicate that their approach is always able to select the most beneficial data layout and achieves the average training performance improvement with 14.3% and 3.1% compared to uniformly using two popular data layouts.
Zhenkun et al. [10] proposed a system dubbed Distributed Sampling and Pipelining (DSP) for multi-GPU GNN training. DSP adopts a tailored data layout to utilize the fast NVLink connections among the GPUs, which stores the graph topology and popular node features in GPU memory. For efficient graph sampling with multiple GPUs, they introduced a collective sampling primitive (CSP), which pushes the sampling tasks to data to reduce communication. They also design a producer-consumer-based pipeline, which allows tasks from different mini-batches to run congruently to improve GPU utilization. They compare DSP with state-of-the-art GNN training frameworks, and the results show that DSP consistently outperforms the baselines under different datasets, GNN models, and GPU counts. The speedup of DSP can be up to 26x and is over 2x in most cases. Wan et al. [11] introduced two online data layout reorganization approaches for achieving good tradeoffs between read and write performance. They demonstrated the benefits of using two approaches for the ECP particle-in-cell simulation WarpX, which serves as a motif for a large class of important Exascale applications. They showed that by understanding application I/O patterns and carefully designing data layouts they increased read performance by more than 80%.
Stoltzfus and Emani [12] proposed a machine learning-based approach to build a classifier to determine the best class of GPU memory that will minimize GPU kernel execution time. This approach utilizes a set of performance counters obtained from profiling runs along with hardware features to generate the trained model. They evaluated their approach on several generations of NVIDIA GPUs, including Kepler, Maxwell, Pascal, and Volta on a set of benchmarks. Their results showed that the trained model achieves prediction accuracy of over 90%. Zhong et al. [13] introduce a new graph format with a data layout that supports coalesced access. Despite these promising results, existing optimization techniques for CUDA programs have inherent limitations. Memory bandwidth constraints, latency, and the non-uniform memory access (NUMA) architecture of GPUs may limit the applicability or performance benefits of these techniques in some scenarios. Furthermore, the dynamic nature of data access patterns in certain applications could reduce the effectiveness of static data layout optimizations.
3. Data Layout Technique
To optimize data access for parallel I/O, a data layout technique has been proposed and developed. This technique is successful in reducing the execution time of CUDA kernels and reducing their memory consumption. One of the types of data layout techniques implemented in this article is related to changing the arrangement of data in memory to improve memory access patterns and locality. The underlying principle of our Data Layout strategy is to restructure the memory layout of data structures, such as matrices, to align with the access patterns of the target algorithm. By storing elements that are accessed consecutively in contiguous memory locations, we can enhance spatial locality and leverage hardware caching mechanisms more effectively. This approach reduces cache misses and improves coalesced memory accesses, leading to more efficient utilization of the GPU’s memory subsystem.
Consider the case of matrix multiplication, a critical operation in various domains. Traditionally, matrices are stored in row-major or column-major order, which may not be optimal for certain access patterns. Our Data Layout technique explores alternative storage formats, such as diagonal-based or blocked layouts, to improve memory access locality for the specific algorithm being executed. Here we want to implement this technique on a matrix of numbers. Before we use the data layout on the matrix, it is important to understand the various data layout patterns. In a matrix, there are different ways in which data is written to memory, including row-based and column-based data layouts:
3.1. Row-Based Storage
In row-based storage, data for a single row of a table is stored consecutively on memory. This means that all the columns of a given row are stored together, which can make it efficient for operations that need to access an entire row of data at once.
3.2. Column-Based Storage
In column-based storage, each column of a matrix is stored consecutively on memory. This can be more efficient for operations that only need to access a subset of the columns in a matrix. Despite the promising results, certain inherent limitations of data layout optimization techniques in GPU programming warrant consideration. Issues such as memory bandwidth constraints, latency, and the non-uniform memory access (NUMA) architecture of GPUs may limit the applicability or performance benefits of these techniques in some scenarios. Furthermore, the dynamic nature of data access patterns in certain applications could reduce the effectiveness of static data layout optimizations.
4. Case Study: Chained Matrix Multiplication Problem
We address the problem of chained matrix multiplication (CMM), a cornerstone in computing, with a dynamic programming algorithm. This algorithm optimizes the order of matrix multiplication operations, a task crucial for minimizing computational workloads. The goal is to develop an algorithm that determines the optimal order for multiplying n matrices, as the optimal order depends only on the matrix dimensions. Consider the multiplication of the following n matrices:
A1 × A2 × A3 × ⋯ × An
The number of multiplications required to multiply two matrices An×m × Bm×k is n × m × k. Matrix multiplication is associative, so the way in which we parenthesize the product does not change the result; however, the number of elementary multiplications does depend on the order in which the products are carried out. For example, for the multiplication of the four matrices A20×2 × B2×30 × C30×12 × D12×8 (n = 4), there are five different orders in which we can multiply them, each possibly resulting in a different number of elementary multiplications:
• A(B(CD)): 30 × 12 × 8 + 2 × 30 × 8 + 20 × 2 × 8 = 3680
• (AB)(CD): 20 × 2 × 30 + 30 × 12 × 8 + 20 × 30 × 8 = 8880
• A((BC)D): 2 × 30 × 12 + 2 × 12 × 8 + 20 × 2 × 8 = 1232
• ((AB)C)D: 20 × 2 × 30 + 20 × 30 × 12 + 20 × 12 × 8 = 10320
• (A(BC))D: 2 × 30 × 12 + 20 × 2 × 12 + 20 × 12 × 8 = 3120
The third order is the optimal order for multiplying the four matrices. In this problem, the goal is to develop an algorithm that determines the optimal order for multiplying n matrices. The optimal order depends only on the dimensions of the matrices. Therefore, besides n, these dimensions would be the only input to the algorithm.
5. Serial Algorithm by Dynamic Programming Method
In this section, the dynamic programming solution for the problem of chain multiplication of matrices is described. We first present the serial dynamic programming solution for the CMM problem [14] , which avoids redundant calculations by breaking down the problem into subproblems and storing the results in a matrix M. The provided code is a serial implementation, without taking advantage of parallel processing.
Input: int n, int dim[0 ⋯ n]. Here, n is the number of matrices and dim contains the dimensions of the matrices. For instance, for A20×2 × B2×30 × C30×12 × D12×8, the inputs are n = 4 and dim = [20, 2, 30, 12, 8].
Output: int A[n + 1][n + 1], int M[n + 1][n + 1]. Here, M [i, j] is the minimum number of multiplications in the ith to jth matrix multiplication. Also, if A[i, j] = k(i ≤ k < j), then the optimal order of multiplication in the ith to jth matrix multiplication is (Ai × ∙∙∙ × Ak) × (Ak+1 × ∙∙∙ × Aj) and the optimal number of multiplications is M[1, n].
Algorithm: Inside each parenthesis, the multiplications are obtained according to the optimal order for the matrices inside the parentheses. Of these factorizations, the one that yields the minimum number of multiplications must be the optimal one. The number of multiplications for the kth factorization is the minimum number needed to obtain each factor plus the number needed to multiply the two factors. This means that it equals:
M[1][n] = min_{1 ≤ k < n} ( M[1][k] + M[k+1][n] + d_0 · d_k · d_n )
To calculate the intermediate values, the formula is as follows:
M[i][j] = min_{i ≤ k < j} ( M[i][k] + M[k+1][j] + d_{i−1} · d_k · d_j )
With:
M[i][i] = 0,  for i = 0, ⋯, n
Calculations are performed diagonally, starting from the main diagonal, as shown in Figure 1.
Figure 1. The order of calculations in algorithm.
5.1. Serial Algorithm by Dynamic Programming Method for CMM Problem
This is an implementation of a dynamic programming solution for the Chain Matrix Multiplication (CMM) problem [14] . This problem involves finding the most efficient way to multiply a sequence of matrices together. The dynamic programming approach efficiently avoids redundant calculations by breaking down the problem into subproblems and storing the results of those subproblems in the matrix M. The provided code is a serial implementation, meaning it doesn’t take advantage of parallel processing. The CMM problem and its dynamic programming solution are commonly used in algorithmic optimization for matrix chain multiplication scenarios. The serial code of this algorithm is presented in Listing 1.
Listing 1. Serial version by dynamic programming method for CMM problem.
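Since the paper's listing is not reproduced here, the following is only a minimal serial sketch of the recurrence described above (the array names and the row-major layout of the (n+1)×(n+1) matrices M and A are our assumptions):
#include <climits>

void cmm_serial(int n, const int dim[], int M[], int A[]) {
    for (int i = 1; i <= n; i++) M[i * (n + 1) + i] = 0;   // a single matrix costs nothing
    for (int d = 1; d < n; d++) {                          // d = diagonal index (j - i)
        for (int i = 1; i <= n - d; i++) {
            int j = i + d, best = INT_MAX, bestK = i;
            for (int k = i; k < j; k++) {
                int cost = M[i * (n + 1) + k] + M[(k + 1) * (n + 1) + j]
                         + dim[i - 1] * dim[k] * dim[j];
                if (cost < best) { best = cost; bestK = k; }
            }
            M[i * (n + 1) + j] = best;
            A[i * (n + 1) + j] = bestK;                    // records the optimal split point
        }
    }
}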
5.2. Parallel Version in CUDA C++
Considering that dynamic programming algorithms use arrays to store data, they offer very good parallelization potential. Each data item on a diagonal of this matrix is calculated by a thread. In the first step, n threads set the values of the main diagonal to zero. Then, at each step, one thread is retired, so that in the last round only one thread calculates the value of M[1][n]. Because the values calculated on each diagonal depend on the values of the previous diagonals, synchronization must be performed across all the threads at the end of each round. This is done with the __syncthreads() instruction, which only synchronizes the threads within a block, so we are only able to use one block in the calculations. Global synchronization between all blocks of a kernel has not been implemented in the CUDA programming model, and no instruction has been published for it by NVIDIA. Therefore, the kernel configuration in this program is <<<1, n>>>.
In the CUDA version, the matrix is converted to a one-dimensional array. In the CUDA programming guide [15] , it is recommended to use a one-dimensional array instead of a matrix in the kernel. So, M[i][j] in the matrix is mapped to M[(n + 1) × i + j] in the one-dimensional array.
The kernel launch in this program is:
CMM_CUDA_kernel<<<1, n>>>(dev_dim, dev_m, dev_result, n).
dev_dim, which is sent as an argument to the kernel, is a one-dimensional array of the dimensions of the matrix. dev_m is also the matrix M that is used for calculations and has the same function as the serial version. Before kernel launching, the data must be transferred from the main memory of the CPU to the global memory of the GPU. After the execution of the kernel, the results should be transferred in the reverse direction.
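The exact kernel is not shown in this section, but based on the description above (one block, one thread per element of the current diagonal, a __syncthreads() after each diagonal, and M flattened to one dimension), it could be sketched roughly as follows; the role of dev_result as a single output value is our guess:
__global__ void CMM_CUDA_kernel(const int *dev_dim, int *dev_m, int *dev_result, int n) {
    int tid = threadIdx.x;
    if (tid < n) dev_m[(n + 1) * (tid + 1) + (tid + 1)] = 0;   // main diagonal = 0
    __syncthreads();
    for (int d = 1; d < n; d++) {                  // one diagonal per iteration
        int i = tid + 1, j = i + d;
        if (j <= n) {
            int best = INT_MAX;                    // INT_MAX from <climits>
            for (int k = i; k < j; k++) {
                int cost = dev_m[(n + 1) * i + k] + dev_m[(n + 1) * (k + 1) + j]
                         + dev_dim[i - 1] * dev_dim[k] * dev_dim[j];
                if (cost < best) best = cost;
            }
            dev_m[(n + 1) * i + j] = best;
        }
        __syncthreads();                           // diagonal d depends on diagonals < d
    }
    if (tid == 0) *dev_result = dev_m[(n + 1) * 1 + n];
}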
6. Tuning of Algorithm by Data Layout Technique
In the serial algorithm, the computation in the matrix M is done diagonally. However, in the C++ language, matrices are stored as rows in memory. Therefore, thread accesses to memory are not consecutive, reducing locality and increasing cache misses and uncoalesced accesses to global memory, which decreases performance.
To address this issue, we apply the Data Layout technique by storing the elements of each diagonal together in memory, as described in Section 3. This change requires modifications to the indices accessed in the algorithm. By restructuring the data layout, we increase locality and improve the probability of cache hits and coalesced accesses, leading to significant performance improvements. The proposed Data Layout strategy is based on the principle of organizing data in a way that aligns with the access patterns of the program. By storing related data elements consecutively in memory, we can increase the likelihood of cache hits and coalesced memory accesses, reducing memory bandwidth bottlenecks and improving overall throughput. The use of this technique requires changes in the codes and the indices accessed in the algorithm must be changed. We explain this technique with an example. For instance, consider the data of a matrix as shown in Figure 2.
Figure 2. Initial Matrix data.
We only describe an upper triangular matrix because this type of matrix is also used in the chained multiplication problem. Figure 3 illustrates the typical way to store this matrix in the memory.
Figure 3. Data layout in memory.
Figure 4 exhibits how our proposed data layout technique that we used in this problem stores the data of the above matrix in the memory.
Figure 4. Proposed data layout technique in memory.
Therefore, Locality increases strongly and increases the possibility of cache hit and coalesced accesses. We applied this technique to the algorithm of chained multiplication of matrices and obtained promising results which are reported in the following section.
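As a sketch of the index mapping this implies (our own illustration, not the paper's code), an upper-triangular element (i, j) with diagonal d = j − i can be placed after the d longer diagonals stored before it:
// Hypothetical helper: map an upper-triangular element (i, j), 0-based,
// of an n x n matrix to a diagonal-major 1D index. Diagonal d holds n - d elements.
__host__ __device__ inline int diag_index(int i, int j, int n) {
    int d = j - i;
    int offset = d * n - d * (d - 1) / 2;   // total elements in diagonals 0 .. d-1
    return offset + i;
}
// Threads working on the same diagonal then access M[offset], M[offset + 1], ...,
// i.e. consecutive addresses, which coalesce and cache well.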
7. Additional Parallelization Techniques
7.1. Data Level Parallelism
To further exploit parallelism in the chain matrix multiplication algorithm, we apply techniques to partition the data at a finer granularity.
Parallelizing Diagonal Computations. Our existing implementation maps one GPU thread to compute each element along the diagonal. We modify this to use multiple threads to compute each element by decomposing the computation in dimensions as shown in Listing 2.
Listing 2. Parallelizing diagonal computation.
We use a 16 × 16 thread block, enabling 256 threads to cooperate in computing each matrix element. This adds finer-grained parallelism within each diagonal. On our GPU, with 128 CUDA cores per SM, this enables each SM to process 2 diagonal elements in parallel. The 16 × 16 block also improves memory access patterns. Compared to the one-thread-per-element approach, this parallel diagonal computation reduces kernel time by 41%.
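Since the paper's Listing 2 is not reproduced, the following is only a hedged sketch of the idea: a 16 × 16 block splits the minimization over k for one diagonal element and then reduces the partial minima in shared memory (all names are ours):
__global__ void cmm_diag_element(const int *dim, int *M, int n, int d) {
    __shared__ int partial[256];
    int i = blockIdx.x + 1, j = i + d;             // one block per element of diagonal d
    int t = threadIdx.y * blockDim.x + threadIdx.x;
    int best = INT_MAX;                            // INT_MAX from <climits>
    for (int k = i + t; k < j; k += 256)           // strided split of the k loop
        best = min(best, M[(n + 1) * i + k] + M[(n + 1) * (k + 1) + j]
                          + dim[i - 1] * dim[k] * dim[j]);
    partial[t] = best;
    __syncthreads();
    for (int s = 128; s > 0; s >>= 1) {            // shared-memory min-reduction
        if (t < s) partial[t] = min(partial[t], partial[t + s]);
        __syncthreads();
    }
    if (t == 0) M[(n + 1) * i + j] = partial[0];
}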
2D Thread Block Decomposition. We also decompose the total computation into 2D thread blocks, assigning each diagonal across multiple blocks as specified in Listing 3.
Listing 3. Block decomposition.
This spreads the work of a diagonal over more GPU cores for greater parallelism. This allows more SMs and CUDA cores to operate on a diagonal in parallel. With N/16 blocks, more SMs participate, and overall parallelism improves. Using 2D thread blocks gives a 23% kernel speedup over the parallel diagonal method alone.
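A hedged host-side sketch of this decomposition, launching one kernel per diagonal and spreading that diagonal's elements over multiple blocks (it reuses the cmm_diag_element sketch above; the names are ours, not the paper's):
dim3 block(16, 16);
for (int d = 1; d < n; d++) {
    int elems = n - d;                     // number of elements on diagonal d
    dim3 grid(elems);                      // one block per element; could be tiled further
    cmm_diag_element<<<grid, block>>>(dev_dim, dev_m, n, d);
}
cudaDeviceSynchronize();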
7.2. Task Level Parallelism
To overlap computation and transfers between the CPU and GPU, we leverage streams, events, and concurrency [16] as illustrated in Listing 4.
Listing 4. Task level parallelism.
We also parallelize across multiple independent problem instances by allocating separate streams and CUDA contexts for each instance. This enables entirely concurrent execution. The streams and asynchronous calls prevent these operations from blocking each other. This improves GPU utilization and end-to-end runtime. With 2 streams per instance, we get up to 4× speedup with 4 problems run in parallel.
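Since the paper's Listing 4 is not reproduced, here is only a minimal sketch of running several independent problem instances concurrently, one stream each; the per-instance buffers (dev_dim[p], dev_m[p], dev_result[p], host arrays) are assumed to be allocated already:
const int numInstances = 4;
cudaStream_t streams[numInstances];
for (int p = 0; p < numInstances; p++) cudaStreamCreate(&streams[p]);
for (int p = 0; p < numInstances; p++) {
    cudaMemcpyAsync(dev_dim[p], host_dim[p], (n + 1) * sizeof(int),
                    cudaMemcpyHostToDevice, streams[p]);
    CMM_CUDA_kernel<<<1, n, 0, streams[p]>>>(dev_dim[p], dev_m[p], dev_result[p], n);
    cudaMemcpyAsync(host_result[p], dev_result[p], sizeof(int),
                    cudaMemcpyDeviceToHost, streams[p]);
}
for (int p = 0; p < numInstances; p++) cudaStreamSynchronize(streams[p]);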
8. Evaluation
We conducted a series of experiments to evaluate the performance of our proposed Data Layout strategy compared to existing optimization techniques, focusing on the chained matrix multiplication (CMM) problem. The experiments were performed on a system equipped with an NVIDIA Tesla V100 GPU, utilizing the CUDA programming model. We implemented our optimization technique and compared it against conventional memory layouts such as row-major and column-major order, as well as other state-of-the-art optimization strategies proposed in prior literature.
We leverage cuda Event tool for profiling the execution time of programs, which provides very good accuracy compared to the clock() function. To use this tool for recording the execution time, we use the solution shown in Listing 5.
Listing 5. Recording execution time.
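The paper's Listing 5 is not reproduced; a minimal sketch of timing a kernel with CUDA events, as the text describes, looks like this:
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start);
CMM_CUDA_kernel<<<1, n>>>(dev_dim, dev_m, dev_result, n);
cudaEventRecord(stop);
cudaEventSynchronize(stop);
float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);   // elapsed GPU time in milliseconds
cudaEventDestroy(start);
cudaEventDestroy(stop);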
Table 1 presents the execution times for the serial CPU dynamic programming algorithm, the baseline CUDA GPU parallel implementation, and the CUDA version optimized with the Data Layout technique, across different numbers of input matrices (n = 1016 to 1024). Note that it is not possible to run the program for n > 1024, as the maximum number of threads per block is 1024. All times are measured in milliseconds. As shown in Figure 5, the CUDA implementation demonstrates more than 2× speedup compared to the serial algorithm across all matrix sizes. For n = 1024 matrices, the serial algorithm takes 1287 ms, while the CUDA implementation requires only 542 ms, achieving a 2.4× runtime improvement by harnessing the parallel processing power of the GPU.
Table 1. Execution time comparison.
Figure 6 provides deeper insight into the performance trends for smaller input sizes in the matrix chain multiplication problem. We plot execution times for a range of n = 10 to 400 matrices to examine why the serial CPU implementation outperforms the CUDA GPU code at very small n. The breakeven point where CUDA becomes faster occurs around 218 matrices. Below this threshold, the parallel CUDA overheads of copying memory between host and device as well as launching computational kernels overwhelm the relatively minor parallelism benefits for small inputs.
Figure 5. Execution time comparison between serial and CUDA version.
However, beyond n = 218 matrices, the runtime of the serial algorithm grows superlinearly due to its algorithmic complexity of O(n^3). In contrast, the CUDA runtime grows roughly linearly thanks to exploiting parallel hardware. Ultimately, this allows the CUDA performance curve to cross below the serial line. Profiling shows kernel launch overheads are relatively fixed at around 0.4 ms, while the serial algorithm's runtime scales worse than linearly. This highlights why CUDA provides increasing returns as the problem size grows: the parallel hardware continues delivering a fixed amount of extra throughput, surpassing serial execution.
Figure 6. Execution time comparison between serial and CUDA version.
Figure 7 compares the execution time between our baseline naive CUDA implementation and the version optimized with the data layout transformation technique. The optimized CUDA code accesses matrix data with significantly improved locality and coalescing, providing up to a 2× faster runtime, with an average speedup exceeding 1.8×. Performance gains are consistent across all input sizes, demonstrating the effectiveness of our Data Layout technique in accelerating memory access patterns.
Compared to previous optimization approaches that relied on compiler auto-vectorization or manual code transformations, our Data Layout strategy offers a more systematic and architecture-aware solution. By explicitly restructuring the memory layout, we can ensure optimal data access patterns tailored to the GPU’s memory hierarchy, leading to superior performance gains.
This runtime comparison shows the clear performance benefit of optimizing memory access patterns on the GPU with our data layout transformation. Rather than relying on the default row-major matrix storage, we rearrange elements so that diagonals are stored consecutively in memory. This matches the access pattern of our dynamic programming algorithm, which iterates diagonally through the table, so laying out data this way directly accelerates its memory reads and writes. Execution time is reduced by over 50%, with the optimized CUDA implementation running over 1.8× faster across all input sizes. At 1024 matrices, runtime drops from 542 ms in the baseline CUDA code to just 272 ms with the data layout improvements. By enhancing memory coalescing and exploiting caching, the GPU no longer wastes cycles waiting on scattered, uncoalesced memory accesses. This demonstrates that data layout changes can unlock substantial performance gains by alleviating bottlenecks caused by noncontiguous data access. The optimization builds on the earlier CUDA speedups for a combined 4× total improvement over the original serial algorithm.
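As a sketch of what storing diagonals consecutively means for an n × n dynamic programming table m[i][j] with j ≥ i (the helper below is ours for illustration, not the exact code of our implementation), the flattened index of entry (i, j) can be computed as:

// Entries on diagonal d = j - i are stored contiguously; shorter diagonals
// follow longer ones, so a whole diagonal sweep touches consecutive addresses.
__host__ __device__ inline int diagIndex(int i, int j, int n)
{
    int d = j - i;                         // which diagonal (0 = main diagonal)
    int offset = d * n - d * (d - 1) / 2;  // total entries in diagonals 0 .. d-1
    return offset + i;                     // position of (i, j) within diagonal d
}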
Figure 7. Comparison of CUDA version and Optimized CUDA version.
Figure 8 depicts the speedup attained by our optimized CUDA GPU algorithm with the data layout strategy, relative to the performance of the baseline naive CUDA implementation. We observe an average 1.9× speedup, reaching as high as 2.04× for some matrix input sizes. This demonstrates the effectiveness of improved memory locality in unlocking performance, cutting execution time by up to 50%.
Figure 8. Speedup of the optimized CUDA version over the baseline CUDA version.
9. Discussion and Future Work
The remarkable performance improvements achieved by our Data Layout strategy highlight its potential for unlocking the true computational power of GPUs across a wide range of applications. While our case study focused on chained matrix multiplication, the underlying principles of our approach are applicable to various algorithms and data structures that exhibit non-contiguous or irregular memory access patterns.

Despite the promising results, certain inherent limitations of our Data Layout technique warrant consideration. Issues such as memory bandwidth constraints, latency, and the non-uniform memory access (NUMA) architecture of GPUs may limit the applicability or performance benefits in some scenarios. Furthermore, the dynamic nature of data access patterns in certain applications could reduce the effectiveness of static data layout optimizations.

Looking ahead, our future work will focus on exploring the scalability and effectiveness of our Data Layout strategy in large-scale GPU clusters and cloud environments. By conducting extensive experiments across distributed systems, we aim to provide deeper insights into the potential challenges and opportunities of our approach in these advanced computing paradigms. Additionally, we plan to investigate the integration of our technique with other optimization strategies, such as dynamic data layout transformations and adaptive memory management, to further enhance performance and mitigate the limitations mentioned above.
10. Conclusions
In this study, we presented a novel Data Layout strategy for optimizing CUDA programs, focusing on dynamic programming algorithms for chained matrix multiplication. By restructuring the arrangement of data in memory to improve locality and coalescing, our approach achieved significant performance gains, reducing execution time by up to 50% and lowering memory consumption compared to the baseline implementation. The effectiveness of our technique underscores the importance of architecture-aware optimizations in unlocking the full potential of GPU-accelerated applications. As the adoption of heterogeneous and GPU-based computing continues to grow rapidly across various domains, the principles and strategies discussed in this work will be instrumental in harnessing the benefits of these powerful parallel architectures.
While our findings are promising, we acknowledge the limitations and challenges associated with our approach and emphasize the need for further research to extend its applicability and address potential scalability concerns. By continuing to explore innovative optimization techniques and leveraging the synergies between hardware and software, we can pave the way for more efficient and high-performance GPU computing solutions.
It’s important to acknowledge that the scope of our experiments was limited by time constraints, restricting our ability to conduct a more extensive investigation. Future studies will aim to address this limitation by allocating more time for rigorous experimentation and analysis.
Acknowledgements
This work is supported by NSF award #2348330.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
Copyright © 2025 by authors and Scientific Research Publishing Inc.
This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.
|
GPU MODE Lecture 6: Optimizing Optimizers in PyTorch
Christian Mills
September 2, 2024
Introduction
Optimization Analogy: Towing Cars
Optimizer Basics and Optimization Levels
parameter = parameter - learning_rate * gradient
Implementation: Processes parameters one by one in a for loop.
Simplified Example:
for param in params:
# Retrieve necessary data for the current parameter
# Perform operations (add, multiply, lerp, etc.)
# Update the parameter
Visualization: Each parameter update is a sequence of operations (gray circles), represented as a column. M operations per parameter, N parameters total, resulting in M x N operations.
Implementation: PyTorch’s current default; operates on entire parameter lists at once using vectorized operations.
Simplified Example:
# Add a constant to all parameters in the list
# Multiply all parameters by a constant
# ... other operations
Visualization: Each operation (blue circles) is performed on all parameters simultaneously. M operations total.
Multi-Tensor Apply: The Powerhouse
Standard Add (Simplified):
__device__ void add_kernel(float* self, float* other, float* res, float alpha=1);
ForEach Add (Challenge): How would you design a CUDA kernel signature to handle tensor lists?
Attempt 1: Passing Standard Vector (Failed)
Attempt 2: Passing Pointers to Pointers (Failed)
Attempt 3: Passing by Chonky Boy (Partially Successful)
Idea: Pass tensor data pointers by value using a struct.
Implementation:
Outcome: Works initially, but encounters issues with the kernel argument space limit.
Kernel Argument Space Limit: The kernel argument space has a maximum size of 4 kilobytes.
Problem: If the struct containing tensor pointers exceeds 4 kilobytes, only a portion of the struct gets passed to the kernel, leading to illegal memory access when accessing pointers beyond the limit.
Repro Example:
import torch

N = 424  # one more tensor than the ~423 that fit, per the observation below
params = [torch.rand(2, 3, device="cuda") for _ in range(N)]
torch._foreach_norm(params, ord=1)
torch.cuda.synchronize()
Observation: Illegal memory access occurs when the number of tensors exceeds 423.
Conclusion: The struct approach works as long as the number of tensor pointers does not exceed the 4 kilobyte limit.
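Rough CUDA sketch of the idea (not PyTorch's actual struct or kernel; the names, the 400-tensor cap, and the one-block-per-tensor split are illustrative assumptions): the pointers are packed into a struct passed by value, so they travel in the 4 KB kernel argument space.

#include <cuda_runtime.h>

constexpr int kMaxTensors = 400;        // keeps sizeof(TensorPointers) under the 4 KB limit

struct TensorPointers {
    float *data[kMaxTensors];           // one device pointer per tensor
    int numTensors;
};

// One block per tensor; threads stride over that tensor's elements.
__global__ void foreachAddScalar(TensorPointers lists, int numel, float value)
{
    float *t = lists.data[blockIdx.x];
    for (int i = threadIdx.x; i < numel; i += blockDim.x)
        t[i] += value;
}

// Launch sketch: foreachAddScalar<<<lists.numTensors, 256>>>(lists, numel, 1.0f);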
Solution 1: Batching (Current Implementation)
Solution 2: Revisiting Pointers to Pointers with Memcpy
Solution 3: Unified Memory (Future Exploration)
Fused Optimizers and Multi-Tensor Apply
Torch Compile and the Future of Fused Optimizers
Torch Compile (Inductor): PyTorch’s compiler that excels at vertical fusion of operations.
Potential: Automate the vertical fusion of optimizer operations, eliminating the need for handwritten CUDA kernels.
Benefits:
Current Status:
Example:
optimizer = torch.optim.AdamW(model.parameters())
@torch.compile(fullgraph=False)
def compiled_step():
optimizer.step()
# ... training loop
compiled_step()
Limitations:
Future Directions:
Q&A Session
Obtaining Triton Kernel Code
Visualizing Kernel Graphs
Compile Time Dependency on Number of Tensors
Caching Compiled Results
Memory Management with Structs Containing Pointers
Memcpy Size Limit
Memcpy Direction Argument
Device-to-Device Copy
Content licensed under CC BY-NC-SA 4.0
© 2025 Christian J. Mills
Code samples licensed under the MIT License
|
CUDA Convolution
- GPGPU Programming -
Dec, 2008
Sangyoon Lee (sjames @ evl.uic.edu)
Electronic Visualization Laboratory
University of Illinois at Chicago
* This project is a part of CS525 GPU Programming Class instructed by Andy Johnson.
1. Concept and Brief
The given final exam is to explore CUDA optimization with the convolution filter application from NVIDIA's CUDA 2.0 SDK. There are three types of convolution filter in the SDK. I mainly used the convolutionTexture and convolutionSeparable applications.
- Dataset (Images)
The images used in the final are provided by Andy (see the class website). I used 1k-by-1k, 2k-by-2k, and 4k-by-4k images for performance testing. For some reason, the 8k-by-8k image does not work well on my system.
- Development platform
Mac OSX 10.5, MacBook Pro 2.5GHz, Geforce 8600M GT 512MB, nvidia CUDA SDK 2.0
Intermediate and final versions of the application are available to download and test. See more details in section 7 below.
2. Starting
Since a well-optimized application is already available in the SDK, I started by looking at its code. First of all, the original code uses random data instead of real images, and each data pixel is represented as a single float. To make the application work with real images, I implemented an image loader and writer in RAW format; this code is a slightly modified version of the module we used in project 2.
Each pixel is composed of three color components, RGB. There are two ways to apply a convolution filter to such an image. The first is simply to map each component to a single float and run the convolution filter three times, once per channel. The second approach is to modify the original code to use uchar4 or int as the dataset so that the separate channel values can be computed within the CUDA kernel. I implemented both ways in convolutionTexture and convolutionSeparable, but later on I only used the first method since it keeps the kernel code much simpler.
The first thing I tried was a top-down approach: I took the NVIDIA application and started to remove some of its optimization techniques. I then realized that it is not easy to strip it all the way down to a naive implementation, so I restarted from the most naive version and optimized it step by step until it was close to NVIDIA's application. The later sections explain these steps.
If you are not familiar with convolution filters, please take a look at the Wikipedia entries on Gaussian blur or convolution. The class lecture notes (week 4, convolution) are also useful.
3. Step 0: the most Naive approach
Starting from the idea of the convolution filter itself, the most naive approach is to place the data in global memory and have each thread access it directly to compute the convolution. Our convolution kernel has radius 8 (a total of 17x17 multiplications per output pixel). In the image border area, out-of-range references are treated as 0 during computation. This naive approach involves many conditional statements, which makes execution very slow.
There are no idle threads, since the total number of threads invoked equals the total number of pixels. The CUDA block size is 16x16. The execution times below are mean values over 10 runs. As you can see, it is extremely slow.
Figure 1. Memory access pattern in the naive approach: each thread in a block accesses global memory 17x17 times.
Below is the CUDA kernel code.
__global__ void convolutionGPU(float *d_Result,
                               float *d_Data,
                               int dataW,
                               int dataH)
{
    //////////////////////////////////////////////////////////////////////
    // slowest way to compute convolution
    //////////////////////////////////////////////////////////////////////

    // global mem address for this thread
    const int gLoc = threadIdx.x +
                     blockIdx.x * blockDim.x +
                     threadIdx.y * dataW +
                     blockIdx.y * blockDim.y * dataW;

    float sum = 0;
    float value = 0;

    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)      // row wise
        for (int j = -KERNEL_RADIUS; j <= KERNEL_RADIUS; j++)  // col wise
        {
            // check row first
            if (blockIdx.x == 0 && (threadIdx.x + i) < 0)      // left apron
                value = 0;
            else if (blockIdx.x == (gridDim.x - 1) &&
                     (threadIdx.x + i) > blockDim.x - 1)       // right apron
                value = 0;
            else
            {
                // check col next
                if (blockIdx.y == 0 && (threadIdx.y + j) < 0)  // top apron
                    value = 0;
                else if (blockIdx.y == (gridDim.y - 1) &&
                         (threadIdx.y + j) > blockDim.y - 1)   // bottom apron
                    value = 0;
                else // safe case
                    value = d_Data[gLoc + i + j * dataW];
            }
            sum += value * d_Kernel[KERNEL_RADIUS + i] * d_Kernel[KERNEL_RADIUS + j];
        }

    d_Result[gLoc] = sum;
}
4. Step 1: Shared Memory
We all experienced the importance of shared memory throughout project 2, so now it is time to incorporate it into the naive code. When I read the NVIDIA convolution document, my first thought was that it is acceptable to invoke many threads, have each thread load data from global memory into shared memory, and then let some of the threads sit idle: those are the threads that loaded apron pixels and do not compute convolution.
The first attempt was to keep the number of active threads the same as before and increase the block size to cover the apron pixels. This did not work: with a kernel radius of 8 the block would grow to 32x32 (1024 threads), which exceeds the G80 hardware limit of 512 threads per block.
Therefore, I changed the scheme so that all threads are active, each thread loads four pixels, and the block size stays 16x16. The shared memory tile is 32x32, which includes all the apron pixel values needed by the 16x16 active pixels. The results below show quite a bit of improvement: almost a 2.8x speed-up over the naive approach (at 2048 resolution).
Figure 2. Shared memory model for the naive approach: each thread in a block loads 4 values from global memory. Therefore, the total shared memory size is 4 times larger than the number of active convolution pixels, to include the apron area (kernel radius 8 and block size 16x16: 256 active pixels vs. a shared memory tile of 1024 floats).
The code below illustrates the convolution kernel.
__global__ void convolutionGPU(float *d_Result,
                               float *d_Data,
                               int dataW,
                               int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[TILE_W + KERNEL_RADIUS * 2][TILE_W + KERNEL_RADIUS * 2];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    // load cache (32x32 shared memory, 16x16 thread blocks)
    // each thread loads four values from global memory into shared mem
    // if in image area, get value in global mem, else 0
    int x, y; // image based coordinate

    // original image based coordinate
    const int x0 = threadIdx.x + IMUL(blockIdx.x, blockDim.x);
    const int y0 = threadIdx.y + IMUL(blockIdx.y, blockDim.y);

    // case1: upper left
    x = x0 - KERNEL_RADIUS;
    y = y0 - KERNEL_RADIUS;
    if (x < 0 || y < 0)
        data[threadIdx.x][threadIdx.y] = 0;
    else
        data[threadIdx.x][threadIdx.y] = d_Data[gLoc - KERNEL_RADIUS - IMUL(dataW, KERNEL_RADIUS)];

    // case2: upper right
    x = x0 + KERNEL_RADIUS;
    y = y0 - KERNEL_RADIUS;
    if (x > dataW-1 || y < 0)
        data[threadIdx.x + blockDim.x][threadIdx.y] = 0;
    else
        data[threadIdx.x + blockDim.x][threadIdx.y] = d_Data[gLoc + KERNEL_RADIUS - IMUL(dataW, KERNEL_RADIUS)];

    // case3: lower left
    x = x0 - KERNEL_RADIUS;
    y = y0 + KERNEL_RADIUS;
    if (x < 0 || y > dataH-1)
        data[threadIdx.x][threadIdx.y + blockDim.y] = 0;
    else
        data[threadIdx.x][threadIdx.y + blockDim.y] = d_Data[gLoc - KERNEL_RADIUS + IMUL(dataW, KERNEL_RADIUS)];

    // case4: lower right
    x = x0 + KERNEL_RADIUS;
    y = y0 + KERNEL_RADIUS;
    if (x > dataW-1 || y > dataH-1)
        data[threadIdx.x + blockDim.x][threadIdx.y + blockDim.y] = 0;
    else
        data[threadIdx.x + blockDim.x][threadIdx.y + blockDim.y] = d_Data[gLoc + KERNEL_RADIUS + IMUL(dataW, KERNEL_RADIUS)];

    __syncthreads();

    // convolution
    float sum = 0;
    x = KERNEL_RADIUS + threadIdx.x;
    y = KERNEL_RADIUS + threadIdx.y;
    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)
        for (int j = -KERNEL_RADIUS; j <= KERNEL_RADIUS; j++)
            sum += data[x + i][y + j] * d_Kernel[KERNEL_RADIUS + j] * d_Kernel[KERNEL_RADIUS + i];

    d_Result[gLoc] = sum;
}
One more optimization was tested here: the use of the faster 24-bit integer multiplication intrinsic, __mul24 (the code above already includes this change). After replacing all integer multiplications with __mul24, I got slightly better performance.
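For reference, the IMUL macro used in these kernels is presumably just a thin wrapper around the intrinsic (my assumption based on the SDK-era samples; the original header is not reproduced here):

#define IMUL(a, b) __mul24(a, b)   // 24-bit integer multiply, faster on G80-class hardware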
5. Step 2: Filter Separation
A very important property of this convolution filter is that it is separable into a row pass and a column pass, which reduces the per-pixel computation from m*m to m+m multiplications. We apply two separate convolutions: the first is row-wise, and the second is column-wise over the row-filtered data. This also removes some conditional statements and reduces the number of apron pixels loaded in each pass, since the row kernel needs no vertical apron and the column kernel needs no horizontal apron.
This gives a great improvement over the shared-memory version: almost a 6.2x speed-up (at 2048 resolution). The code already includes the __mul24 instruction.
Figure 3. Shared memory for the separable filter: this time only twice as much shared memory as the active tile is needed for each pass.
Unfortunately, the loop-unrolling compiler directive (#pragma unroll 17) does not give any significant improvement. Below is the kernel code for the separable convolution filter. The application was also modified to run the kernel twice, once for the row pass and once for the column pass.
////////////////////////////////////////////////////////////////////////////////
// Row convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionRowGPU(float *d_Result,
                                  float *d_Data,
                                  int dataW,
                                  int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[TILE_W + KERNEL_RADIUS * 2][TILE_H];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int x; // image based coordinate

    // original image based coordinate
    const int x0 = threadIdx.x + IMUL(blockIdx.x, blockDim.x);

    // case1: left
    x = x0 - KERNEL_RADIUS;
    if (x < 0)
        data[threadIdx.x][threadIdx.y] = 0;
    else
        data[threadIdx.x][threadIdx.y] = d_Data[gLoc - KERNEL_RADIUS];

    // case2: right
    x = x0 + KERNEL_RADIUS;
    if (x > dataW-1)
        data[threadIdx.x + blockDim.x][threadIdx.y] = 0;
    else
        data[threadIdx.x + blockDim.x][threadIdx.y] = d_Data[gLoc + KERNEL_RADIUS];

    __syncthreads();

    // convolution
    float sum = 0;
    x = KERNEL_RADIUS + threadIdx.x;

    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)
        sum += data[x + i][threadIdx.y] * d_Kernel[KERNEL_RADIUS + i];

    d_Result[gLoc] = sum;
}
////////////////////////////////////////////////////////////////////////////////
// Column convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionColGPU(float *d_Result,
                                  float *d_Data,
                                  int dataW,
                                  int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[TILE_W][TILE_H + KERNEL_RADIUS * 2];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int y; // image based coordinate

    // original image based coordinate
    const int y0 = threadIdx.y + IMUL(blockIdx.y, blockDim.y);

    // case1: upper
    y = y0 - KERNEL_RADIUS;
    if (y < 0)
        data[threadIdx.x][threadIdx.y] = 0;
    else
        data[threadIdx.x][threadIdx.y] = d_Data[gLoc - IMUL(dataW, KERNEL_RADIUS)];

    // case2: lower
    y = y0 + KERNEL_RADIUS;
    if (y > dataH-1)
        data[threadIdx.x][threadIdx.y + blockDim.y] = 0;
    else
        data[threadIdx.x][threadIdx.y + blockDim.y] = d_Data[gLoc + IMUL(dataW, KERNEL_RADIUS)];

    __syncthreads();

    // convolution
    float sum = 0;
    y = KERNEL_RADIUS + threadIdx.y;
    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)
        sum += data[threadIdx.x][y + i] * d_Kernel[KERNEL_RADIUS + i];

    d_Result[gLoc] = sum;
}
6. Step 3: Reorganize Shared Memory
Until step 2, I used a 2D array of shared memory to keep the indexing a bit simpler. Inside the computation loop there is a possibility of bank conflicts within a warp, since threads access the shared array column-major at the same time. Now, let's re-arrange this shared memory into a 1D array so that the threads of a half-warp access consecutive addresses and make better use of the memory bus. This only requires changing the indexing in the kernel code. The table below shows performance after this re-arrangement.
Figure 4. 1D shared memory access pattern for the row filter: the first four iterations of the convolution computation are shown; the red area indicates the values accessed by a half-warp. Even though we obtained a fair speed-up with this re-arrangement, the access pattern is still not aligned well enough to meet the half-warp alignment requirement for optimal performance.
As the table above shows, this achieved about a 3.2x speed-up over the first separable convolution implementation (at 2048 resolution). The kernel code for this optimization follows. From step 0 to this point, we achieved a 57x speed-up overall (at 2048 resolution).
////////////////////////////////////////////////////////////////////////////////
// Row convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionRowGPU(float *d_Result,
                                  float *d_Data,
                                  int dataW,
                                  int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[ TILE_H * (TILE_W + KERNEL_RADIUS * 2) ];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int x; // image based coordinate

    // original image based coordinate
    const int x0 = threadIdx.x + IMUL(blockIdx.x, blockDim.x);
    const int shift = threadIdx.y * (TILE_W + KERNEL_RADIUS * 2);

    // case1: left
    x = x0 - KERNEL_RADIUS;
    if (x < 0)
        data[threadIdx.x + shift] = 0;
    else
        data[threadIdx.x + shift] = d_Data[gLoc - KERNEL_RADIUS];

    // case2: right
    x = x0 + KERNEL_RADIUS;
    if (x > dataW-1)
        data[threadIdx.x + blockDim.x + shift] = 0;
    else
        data[threadIdx.x + blockDim.x + shift] = d_Data[gLoc + KERNEL_RADIUS];

    __syncthreads();

    // convolution
    float sum = 0;
    x = KERNEL_RADIUS + threadIdx.x;
    for (int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; i++)
        sum += data[x + i + shift] * d_Kernel[KERNEL_RADIUS + i];

    d_Result[gLoc] = sum;
}
////////////////////////////////////////////////////////////////////////////////
// Column convolution filter
////////////////////////////////////////////////////////////////////////////////
__global__ void convolutionColGPU(float *d_Result,
                                  float *d_Data,
                                  int dataW,
                                  int dataH)
{
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ float data[TILE_W * (TILE_H + KERNEL_RADIUS * 2)];

    // global mem address of this thread
    const int gLoc = threadIdx.x +
                     IMUL(blockIdx.x, blockDim.x) +
                     IMUL(threadIdx.y, dataW) +
                     IMUL(blockIdx.y, blockDim.y) * dataW;

    int y; // image based coordinate

    // original image based coordinate
    const int y0 = threadIdx.y + IMUL(blockIdx.y, blockDim.y);
    const int shift = threadIdx.y * (TILE_W);

    // case1: upper
    y = y0 - KERNEL_RADIUS;
    if (y < 0)
        data[threadIdx.x + shift] = 0;
    else
        data[threadIdx.x + shift] = d_Data[gLoc - IMUL(dataW, KERNEL_RADIUS)];

    // case2: lower
    y = y0 + KERNEL_RADIUS;
    const int shift1 = shift + IMUL(blockDim.y, TILE_W);
    if (y > dataH-1)
        data[threadIdx.x + shift1] = 0;
    else
        data[threadIdx.x + shift1] = d_Data[gLoc + IMUL(dataW, KERNEL_RADIUS)];

    __syncthreads();

    // convolution
    float sum = 0;
    for (int i = 0; i <= KERNEL_RADIUS*2; i++)
        sum += data[threadIdx.x + (threadIdx.y + i) * TILE_W] * d_Kernel[i];

    d_Result[gLoc] = sum;
}
7. Step 4: nvidia convolution app
In step 3, we made many optimizations and improved performance greatly. There is still some room to maximize memory bandwidth by changing the block organization, as described in NVIDIA's convolution document. Instead of changing the code from step 3, I modified NVIDIA's original code to use image data so I could compare performance. As explained at the very beginning, there are two versions of the convolution application from NVIDIA. The following table shows the performance of these two (modified) applications.
Compared to the result from step 3, the convolutionSeparable optimization shows a 2x speed-up (at 2048 resolution). This application's kernel code is the same as NVIDIA's original; only the application-side code was modified.
The table below shows a couple of experiments I ran at the beginning of the top-down approach with this code (the original NVIDIA convolutionSeparable app).
As we can see, loop unrolling does not have much impact on performance, but the __mul24 instruction gives a 1.3x speed-up.
Here is a brief performance chart from step 1 to step 4 (step 0 is excluded because of its huge numbers).
8. Application
Here are three different versions of the convolution application. The first is my own implementation up to step 3, and the other two are the NVIDIA applications that I modified to use a texture image instead of random data (see details in sections 2 & 7).
Image data: hubble.tar.gz (34 MB; only includes the 1k-by-1k, 2k-by-2k, and 4k-by-4k raw images from Andy's distribution)
Copy these image files into a directory named hubble at the same level as each convolution application directory (if you copied an application to xxx/convolution, then the image directory must be xxx/hubble).
Download application source & executable:
convolution.tar.gz (version of step 3)
convolutionTexture.tar.gz (version of step 4, texture)
convolutionSeparable.tar.gz (version of step 4, separable)
Execution
inside bin/darwin/release
./convolution [-i=image resolution] [-n=number of total run]
./convolutionTexture [-i=image resolution] [-n=number of total run]
./convolutionSeparable [-i=image resolution] [-n=number of total run]
The default image resolution is 1024 and the default number of runs is 10.
Compile
in each application directory, type 'make'
9. References
[1] Nvidia CUDA Programming Guide 2.0, http://www.nvidia.com/object/cuda_develop.html
[2] Victor Podlozhnyuk, Image Convolution with CUDA, NVIDIA CUDA 2.0 SDK convolutionSeparable document
|
Optimize CUDA Host/Device Transfers
justin.p.mckennon
This post is Topic #2 (part 2) in our series Parallel Code: Maximizing your Performance Potential.
In my previous post, CUDA Host/Device Transfers and Data Movement, I provided an introduction into the bottlenecks associated with host/device transfers and data movement. This post will delve a bit further into the subject and provide a few nifty ways to mitigate these very costly operations.
In every single CUDA application (well any useful ones, that is) there is at the very least one host-to-device transfer and one device-to-host transfer. More complicated applications often have many transfers between the host and device. In CUDA programming, this is one of the most expensive operations in terms of timing.
So, if these host/device data transfers are so costly, how do you avoid them? Well, you can’t. But what you can do is minimize the number of transfers between host and device in your application, and mask their impact on the performance of your application.
First, any intermediate data structures that are used within your kernel should always be allocated and destroyed solely on the device. This removes the need to map these structures to host memory and removes the need to transfer this data between the host and device.
If your application has multiple host/device transfers, every effort should be made to batch these transfers into one large transfer. I like to think of this as if you were carrying groceries. Why make multiple trips out to the car when you can load up your arms and do it all at once? Most GPUs support transfer speeds between 5GB/sec and 11GB/sec.
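A sketch of the idea (the array names and packing scheme are mine for illustration, not from the original post): instead of three small copies, pack on the host and pay the per-transfer latency only once.

#include <cstdlib>
#include <cstring>
#include <cuda_runtime.h>

void batchedTransfer(const float *a, const float *b, const float *c,
                     size_t n, float *d_packed)
{
    size_t bytes = n * sizeof(float);
    float *h_packed = (float*)malloc(3 * bytes);

    memcpy(h_packed,         a, bytes);   // pack the three arrays on the host...
    memcpy(h_packed + n,     b, bytes);
    memcpy(h_packed + 2 * n, c, bytes);

    // ...then issue a single large host-to-device transfer.
    cudaMemcpy(d_packed, h_packed, 3 * bytes, cudaMemcpyHostToDevice);

    free(h_packed);
}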
For situations where there is no way around transferring data between host and device, more advanced techniques can be employed to lessen the impact on your application: pinned (also known as page-locked, or mapped) memory and asynchronous transfers.
Pinned Memory
The cudaHostAlloc() function allows you to allocate host memory that can be read from the device and written directly to by the device. This allocated memory is called pinned memory. Pinned memory transfers attain the highest bandwidth between the host and device. During execution, a block that requires host data only needs to wait for a small portion of the data to be transferred (when operating through pinned memory). Typical host-to-device copies make all blocks wait until all of the data associated with the copy operation is transferred. Keep in mind, however, that pinning too much memory can degrade overall system performance by reducing the amount of memory available to the system for paging operations. How much memory you can safely pin differs from system to system, so definitely experiment with this to find the optimal amount.
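Here is a minimal sketch of allocating and using pinned memory (the names and sizes are mine for illustration, not from the original post):

#include <cuda_runtime.h>

void pinnedTransferExample(size_t n)
{
    float *h_pinned = NULL, *d_buffer = NULL;
    size_t bytes = n * sizeof(float);

    cudaHostAlloc((void**)&h_pinned, bytes, cudaHostAllocDefault);  // page-locked host memory
    cudaMalloc((void**)&d_buffer, bytes);

    for (size_t i = 0; i < n; ++i)
        h_pinned[i] = (float)i;                                     // fill on the host

    cudaMemcpy(d_buffer, h_pinned, bytes, cudaMemcpyHostToDevice);  // transfer from pinned memory

    cudaFree(d_buffer);
    cudaFreeHost(h_pinned);          // pinned memory must be freed with cudaFreeHost
}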
Asynchronous Transfers
Standard host/device transfers are known as blocking transfers. Control of the main thread is returned only after the data transfer is complete. The cudaMemcpyAsync() function is effectively a non-blocking version of the standard cudaMemcpy(). When executing an asynchronous transfer via cudaMemcpyAsync(), control is returned immediately to the main thread. If you’re not jumping up and down with excitement after hearing that, you should be!
Asynchronous transfers require pinned memory and make use of CUDA streams. In CUDA, streams are essentially sequences of operations that are performed in order on the device. Creating multiple streams is a bit more of an advanced CUDA technique, but one that must be learned if you want the most bang for your buck. With multiple streams in a single application, operations within separate streams can be overlapped, providing a great way to mask the host/device transfer time. Let’s look at an example of how using multiple streams can benefit you and your application:
cudaMemcpyAsync(deviceArray,hostArray,size,cudaMemcpyHostToDevice,0);
kernel<<<gridSize, blockSize>>>(deviceArray);
//your code
Here, both the transfer and kernel are using the default stream, 0. During execution, the kernel will not be launched until the entire copy operation is complete and control has been returned back to the main thread. This is because both the kernel and memory copy are part of the same stream. Now, let’s look at the code using multiple streams:
cudaStream_t mystream1, mystream2;
cudaStreamCreate(&mystream1);
cudaStreamCreate(&mystream2);
cudaMemcpyAsync(deviceArray,hostArray,size,cudaMemcpyHostToDevice,mystream1);
kernel<<<gridSize, blockSize, 0, mystream2>>>(otherDataArray);
//your code
By defining two new streams, we are able to make use of concurrent copy and compute. The memory copy is executing in one stream while the kernel is off in another stream, asynchronous from one another. An important note is to make sure that your device supports concurrent copy and execute before you put this in all of your code. This can be done via the deviceOverlap field of the cudaDeviceProp structure.
While this is an advanced technique, if your data can be broken into chunks and transferred in various stages, you can launch multiple kernel instances to operate on each chunk of data as it arrives on the device. Doing so will almost completely mask the transfer time between the host and device.
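A sketch of that chunked pipeline might look like the following (the kernel, chunk count, and even-split assumption are illustrative; the host buffer must be pinned for the copies to overlap):

#include <cuda_runtime.h>

__global__ void processChunk(float *data, int n)      // hypothetical per-chunk kernel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

void overlappedPipeline(float *h_pinned, float *d_data, int n)
{
    const int numChunks = 4;                          // illustrative choice
    int chunk = n / numChunks;                        // assume n divides evenly
    cudaStream_t streams[numChunks];

    for (int i = 0; i < numChunks; ++i)
        cudaStreamCreate(&streams[i]);

    for (int i = 0; i < numChunks; ++i) {
        int offset = i * chunk;
        // Copy chunk i and process it in its own stream; the copy of chunk i+1
        // can overlap with the kernel working on chunk i.
        cudaMemcpyAsync(d_data + offset, h_pinned + offset, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, streams[i]);
        processChunk<<<(chunk + 255) / 256, 256, 0, streams[i]>>>(d_data + offset, chunk);
    }

    for (int i = 0; i < numChunks; ++i) {
        cudaStreamSynchronize(streams[i]);
        cudaStreamDestroy(streams[i]);
    }
}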
So, armed with the knowledge of streams, asynchronous transfers, and pinned memory, you now have some insight on how to squeeze out some more performance from your application. My next post will discuss how to efficiently make use of the available memory types accessible to you within your GPU application.