    // advance pointers onto next chunk
    A += BLOCKSIZE;
    B += BLOCKSIZE * N;

    // execute the dotproduct on the currently cached block
    for (int dotIdx = 0; dotIdx < BLOCKSIZE; ++dotIdx) {
      tmp += As[threadRow * BLOCKSIZE + dotIdx] *
             Bs[dotIdx * BLOCKSIZE + threadCol];
    }
    // need to sync again at the end, to avoid faster threads
    // fetching the next block into the cache before slower threads are done
    __syncthreads();
  }
  C[threadRow * N + threadCol] =
      alpha * tmp + beta * C[threadRow * N + threadCol];
This kernel achieves ~2200 GFLOPS, a 50% improvement over the previous version. There’s only a 50% improvement partly because our previous kernel already had pretty good L1 cache hit rates.
We’re still far away from hitting the ~30 TFLOPs that the GPU can provide. |
This is obvious from the roofline plot below: notice how we’re achieving a higher memory bandwidth than cuBLAS, but because we’re doing much less work per byte loaded from memory (= lower arithmetic intensity), overall performance is worse.
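As a reminder, the x-axis of a roofline plot is arithmetic intensity, which for SGEMM can be written as:

```
arithmetic intensity (AI) = FLOPs executed / bytes moved to and from GMEM
                          = 2*M*N*K / (achieved GMEM traffic in bytes)
```

The FLOP count is fixed by the problem size, so the only way to raise AI is to reduce GMEM traffic through better data reuse.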
At a CHUNKSIZE of 32, this uses 2*32*32*4B = 8KB of shared memory space. This info can also be obtained by compiling with --ptxas-options=-v, which outputs: Used 37 registers, 8192 bytes smem, 400 bytes cmem[0].
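For concreteness, here’s a minimal sketch of the allocations behind that number, assuming the caches are square BLOCKSIZE×BLOCKSIZE tiles as the As/Bs indexing in the kernel above suggests:

```cpp
// Sketch only: assumes BLOCKSIZE (the CHUNKSIZE above) == 32 and square tiles,
// matching how As and Bs are indexed in the kernel.
__shared__ float As[BLOCKSIZE * BLOCKSIZE]; // 32 * 32 * 4B = 4KB
__shared__ float Bs[BLOCKSIZE * BLOCKSIZE]; // 32 * 32 * 4B = 4KB
// total: 2 * 4KB = 8KB of SMEM per block
```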
My A6000 GPU has a maximum of 48KB of shared memory space available for each block, so we’re far away from hitting that limit. |
This is not necessarily a problem, as there are downsides to increasing per-block shared-memory usage. |
Each multiprocessor (SM) has a maximum of 100KB of SMEM available. |
This means that if we modified our kernel to use the full 48KB of SMEM available, each SM could only keep two blocks loaded at the same time.
In CUDA parlance, increasing per-block SMEM utilization can decrease occupancy. |
Occupancy is defined as the ratio between the number of active warps per SM and the maximum possible number of active warps per SM. |
High occupancy is useful because it allows us to hide the high latency of our operations, by having a bigger pool of issue-able instructions available. On GPUs, math operations like FMA have a latency of 4 cycles, which is equal to 2.6ns at a 1.5GHz clock. Compare this to a recent x86 CPU, where FMA has a 6-cycle latency, or 1.8ns at a 3.5GHz clock.
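A rough way to quantify how big that pool needs to be is Little’s law; a back-of-envelope sketch, assuming a scheduler that issues roughly one instruction per cycle:

```
instructions needed in flight ≈ latency × issue rate
                              ≈ 4 cycles × 1 instruction/cycle = 4 independent instructions
```

So even the short FMA pipeline wants a few independent instructions per scheduler (from ILP within a warp and/or from other warps), and memory latencies of hundreds of cycles need correspondingly more in-flight work, which is what high occupancy provides.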
There are three main limits to keeping more active blocks loaded on an SM: register count, warp count and SMEM capacity. |
Let’s do an example calculation for our current kernel. |
Occupancy Calculation for Kernel 3 |
Here are the relevant hardware stats for my GPU, obtained from the cudaGetDeviceProperties API (multiprocessors are the SMs we talked about earlier). The amount of shared memory is configurable by using a feature called SharedMemoryCarveout: the so-called unified data cache is partitioned into L1 cache and shared memory, so we can trade off less shared memory for more L1 cache.
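A minimal sketch of how such stats can be queried (the field names are real CUDA runtime API members; exactly which ones the original table listed is my assumption):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, 0); // device 0

  printf("Multiprocessors (SMs):  %d\n", prop.multiProcessorCount);
  printf("Max threads per block:  %d\n", prop.maxThreadsPerBlock);
  printf("Max threads per SM:     %d\n", prop.maxThreadsPerMultiProcessor);
  printf("Registers per block:    %d\n", prop.regsPerBlock);
  printf("Registers per SM:       %d\n", prop.regsPerMultiprocessor);
  printf("SMEM per block:         %zu B\n", prop.sharedMemPerBlock);
  printf("SMEM per SM:            %zu B\n", prop.sharedMemPerMultiprocessor);
  printf("Warp size:              %d\n", prop.warpSize);
  return 0;
}
```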
And here are the resource demands for our kernel: |
Work is scheduled onto the SMs on a block granularity. |
Each SM will load more blocks, as long as it has enough resources to accommodate them. |
Calculation: I found lots of official and unofficial occupancy calculators, but no official formula for how to calculate occupancy. The results are correct (I checked using NVIDIA’s official tools), but there may be small errors, e.g. in the application of rounding.
So this kernel is limited by the number of threads per block, and the number of registers per thread. |
We cannot load more than one block per SM, giving us a final occupancy of 32 active warps / 48 max active warps = 66%. |
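To make that concrete, here is a back-of-envelope version of the calculation; the per-SM limits of 1536 threads (48 warps) and 65536 registers are my assumed values for the A6000 (GA102), and the hardware’s allocation-granularity rounding is ignored:

```
threads:   1536 threads/SM ÷ 1024 threads/block (32×32)       → 1 block/SM
registers: 65536 regs/SM   ÷ (37 regs/thread × 1024 threads)  → 1 block/SM
SMEM:      ~100KB/SM       ÷ ~8KB/block                       → ~12 blocks/SM
min(1, 1, 12) = 1 block/SM ⇒ 32 active warps out of 48 possible ≈ 66% occupancy
```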
A 66% occupancy is not too bad, so this doesn’t explain why our kernel runs so slowly. We know that it’s possible to optimize our kernel towards high arithmetic intensity (AI) by observing that cuBLAS achieves ~245 FLOPs/Byte. Both at very high and very low AI, high occupancy is not needed to achieve peak throughput. For more details on this, see V. Volkov’s PhD thesis and its coverage of “cusp behaviour”.
Looking at the profiler gives us some hints. First, if we look at the mix of executed instructions, most of them are memory loads. LDS are shared memory loads. FMA is our fused multiply-add. IADD3 is a “3-input integer addition”, which we need for moving the pointers along the K dimension.
Our inner loop looks like this in PTX (Godbolt link):

ld.shared.f32 %f91, [%r8+3456];
ld.shared.f32 %f92, [%r7+108];
fma.rn.f32 %f93, %f92, %f91, %f90;
That’s not good, given that a memory load is bound to have a higher latency than a simple FMA, and given that we know our kernel should be compute bound. |
We see this effect when looking at the profiler’s sampling of warp states. |
This quantifies how many cycles were spent in each state per executed instruction. Stall Not Selected means that the warp was eligible to be scheduled, but the scheduler selected another eligible warp instead. This adds evidence to our earlier hypothesis that occupancy is currently not a problem.
The meaning of the states is documented in the Kernel Profiling Guide. |
For Stall MIO Throttle it reads: |
Warp was stalled waiting for the MIO (memory input/output) instruction queue to be not full. This stall reason is high in cases of extreme utilization of the MIO pipelines, which include special math instructions, dynamic branches, as well as shared memory instructions |
We’re not using special math instructions, nor dynamic branches, so it’s clear that we’re stalling waiting for our SMEM accesses to return. |
So how do we make our kernel issue fewer SMEM instructions?
One way is to have each thread compute more than one output element, which allows us to perform more of the work in registers and rely less on SMEM.
Kernel 4: 1D Blocktiling for Calculating Multiple Results per Thread |
So this next kernel works like our last kernel, but adds a new inner loop, for calculating multiple C entries per thread. |
We now use a SMEM cache size of BM*BK + BN*BK = 64*8 + 64*8 = 1024 floats, for a total of 4KB per block. |
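Assuming the caches are laid out exactly as the As/Bs indexing in the loop below implies (a sketch, with BM = BN = 64 and BK = 8), the declarations behind that number would look like:

```cpp
// Sketch: SMEM caches adding up to BM*BK + BN*BK = 1024 floats (4KB total).
__shared__ float As[BM * BK]; // 64 * 8 floats = 2KB
__shared__ float Bs[BK * BN]; // 8 * 64 floats = 2KB
```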
Below is a visualization.
I have highlighted two of the threads and the values they access in the inner loop in orange and red. |
All of the important changes for this kernel happen in the inner loop. |
The loading from GMEM to SMEM stays largely the same as before.
Let’s have a look (Godbolt link):
// allocate thread-local cache for results in registerfile
float threadResults[TM] = {0.0};

// outer loop over block tiles
for (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {
  // populate the SMEM caches (same as before)
  As[innerRowA * BK + innerColA] = A[innerRowA * K + innerColA];
  Bs[innerRowB * BN + innerColB] = B[innerRowB * N + innerColB];
  __syncthreads();

  // advance blocktile for outer loop
  A += BK;
  B += BK * N;

  // calculate per-thread results
  for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
    // we make the dotproduct loop the outside loop, which facilitates
    // reuse of the Bs entry, which we can cache in a tmp var.
    float Btmp = Bs[dotIdx * BN + threadCol];
    for (uint resIdx = 0; resIdx < TM; ++resIdx) {
      threadResults[resIdx] +=
          As[(threadRow * TM + resIdx) * BK + dotIdx] * Btmp;