We’ll have each thread load multiple elements.
Here’s a graphical representation of the GMEM loading, and the code that implements it:
for (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {
  As[(innerRowA + loadOffset) * BK + innerColA] =
      A[(innerRowA + loadOffset) * K + innerColA];
}
for (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {
  Bs[(innerRowB + loadOffset) * BN + innerColB] =
      B[(innerRowB + loadOffset) * N + innerColB];
}
__syncthreads();
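For context, the loading indices and strides used above can be set up roughly like this. This is a sketch, not shown in this excerpt; numThreadsBlocktile here stands for the number of threads in the block:
// position of this thread within the GMEM->SMEM copy of A and B
const uint innerRowA = threadIdx.x / BK;
const uint innerColA = threadIdx.x % BK;
// number of rows of As the whole block loads in a single step
const uint strideA = numThreadsBlocktile / BK;
const uint innerRowB = threadIdx.x / BN;
const uint innerColB = threadIdx.x % BN;
const uint strideB = numThreadsBlocktile / BN;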
Now that the SMEM cache is populated, we have each thread multiply its relevant SMEM entries and accumulate the results into local registers.
Below I illustrated the (unchanged) outer loop along the input matrices, and the three inner loops for the dot product and the TN and TM dimensions.
The interesting parts of the code look like this (Godbolt link):
// allocate thread-local cache for results in registerfile
float threadResults[TM * TN] = {0.0};
// register caches for As and Bs
float regM[TM] = {0.0};
float regN[TN] = {0.0};

// outer-most loop over block tiles
for (uint bkIdx = 0; bkIdx < K; bkIdx += BK) {
  // populate the SMEM caches
  for (uint loadOffset = 0; loadOffset < BM; loadOffset += strideA) {
    As[(innerRowA + loadOffset) * BK + innerColA] =
        A[(innerRowA + loadOffset) * K + innerColA];
  }
  for (uint loadOffset = 0; loadOffset < BK; loadOffset += strideB) {
    Bs[(innerRowB + loadOffset) * BN + innerColB] =
        B[(innerRowB + loadOffset) * N + innerColB];
  }
  __syncthreads();

  // advance blocktile
  A += BK;     // move BK columns to right
  B += BK * N; // move BK rows down

  // calculate per-thread results
  for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
    // load relevant As & Bs entries into registers
    for (uint i = 0; i < TM; ++i) {
      regM[i] = As[(threadRow * TM + i) * BK + dotIdx];
    }
    for (uint i = 0; i < TN; ++i) {
      regN[i] = Bs[dotIdx * BN + threadCol * TN + i];
    }
    // perform outer product on register cache, accumulate
    // into threadResults
    for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {
      for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {
        threadResults[resIdxM * TN + resIdxN] +=
            regM[resIdxM] * regN[resIdxN];
      }
    }
  }
  __syncthreads();
}
In the inner loop, we can reduce the number of SMEM accesses by making dotIdx the outer loop, and explicitly loading the values we need for the two inner loops into registers.
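For contrast, here’s a sketch of what the per-thread computation would look like without the register caching, i.e. with dotIdx as the innermost loop and every multiply reading both operands straight from SMEM:
// naive loop order: TM*TN*BK*2 SMEM reads per blocktile, versus
// (TM+TN)*BK reads when the operands are staged in regM/regN first
for (uint resIdxM = 0; resIdxM < TM; ++resIdxM) {
  for (uint resIdxN = 0; resIdxN < TN; ++resIdxN) {
    for (uint dotIdx = 0; dotIdx < BK; ++dotIdx) {
      threadResults[resIdxM * TN + resIdxN] +=
          As[(threadRow * TM + resIdxM) * BK + dotIdx] *
          Bs[dotIdx * BN + threadCol * TN + resIdxN];
    }
  }
}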
Below is a drawing of the dotIdx loop across time, to visualize which SMEM entries get loaded into thread-local registers at each step. (I had to reduce some dimensions to make it easier to draw; in the kernel, BK=TM=TN=8.)
Resulting performance: 16 TFLOPs, another 2x improvement.
Let’s repeat the memory access calculation.
We’re now calculating TM*TN = 8*8 = 64 results per thread.
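To make this concrete, assume blocktile sizes of BM = BN = 128 alongside BK = TM = TN = 8, which gives 256 threads per block (the BM/BN values are an assumption, not stated in this excerpt). A rough per-thread tally is then:
GMEM: K/8 outer-loop iterations * 2 matrices (A and B) * 4 elements each = K loads, i.e. K/64 loads per result.
SMEM: K/8 outer-loop iterations * 8 (dotIdx) * (TM + TN) = 16K loads, i.e. K/4 loads per result.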
Performance is slowly reaching acceptable levels; however, warp stalls due to memory pipeline congestion are still too frequent.
For kernel 6 we’ll take two measures to try to improve that: transposing As to enable auto-vectorization of SMEM loads, and promising the compiler alignment on the GMEM accesses.
Kernel 6: Vectorize SMEM and GMEM Accesses
The first optimization that I already hinted at earlier is to transpose As.
This will allow us to load from As using vectorized SMEM loads (LDS.128 in SASS).
Below is the same visualization of the three inner loops as for kernel 5, but now with As transposed in memory.
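Concretely, with As stored transposed (a BK x BM layout, which the float4 loading code further down also produces), consecutive values of i in the regM load touch consecutive SMEM addresses, so the compiler can vectorize the loop. A sketch of the changed inner load:
// As is now laid out as [BK][BM]; consecutive i -> consecutive addresses
for (uint i = 0; i < TM; ++i) {
  regM[i] = As[dotIdx * BM + threadRow * TM + i];
}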
Looking at the assembly (Godbolt link), we see that loading As into the registers, which used to be a 32b LDS load, is now also a 128b LDS.128 load, just like it had already been for Bs.
This gives us a 500 GFLOPs speedup, or ~3%.
Next, we’ll vectorize all loads and stores from/to GMEM using vector datatypes, namely float4.
The code looks like this (Godbolt link for the full kernel):
float4 tmp =
    reinterpret_cast<float4 *>(&A[innerRowA * K + innerColA * 4])[0];
// transpose A during the GMEM to SMEM transfer
As[(innerColA * 4 + 0) * BM + innerRowA] = tmp.x;
As[(innerColA * 4 + 1) * BM + innerRowA] = tmp.y;
As[(innerColA * 4 + 2) * BM + innerRowA] = tmp.z;
As[(innerColA * 4 + 3) * BM + innerRowA] = tmp.w;

reinterpret_cast<float4 *>(&Bs[innerRowB * BN + innerColB * 4])[0] =
    reinterpret_cast<float4 *>(&B[innerRowB * N + innerColB * 4])[0];
__syncthreads();
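The store side can be vectorized the same way: each thread writes its results back to C through a float4. The write-out isn’t shown in this excerpt, but it looks roughly like this sketch (alpha and beta being the usual GEMM scaling factors):
// load a float4 of C, update it in registers, write it back as one 128b store
for (uint resIdxM = 0; resIdxM < TM; resIdxM += 1) {
  for (uint resIdxN = 0; resIdxN < TN; resIdxN += 4) {
    float4 tmp = reinterpret_cast<float4 *>(
        &C[(threadRow * TM + resIdxM) * N + threadCol * TN + resIdxN])[0];
    tmp.x = alpha * threadResults[resIdxM * TN + resIdxN + 0] + beta * tmp.x;
    tmp.y = alpha * threadResults[resIdxM * TN + resIdxN + 1] + beta * tmp.y;
    tmp.z = alpha * threadResults[resIdxM * TN + resIdxN + 2] + beta * tmp.z;
    tmp.w = alpha * threadResults[resIdxM * TN + resIdxN + 3] + beta * tmp.w;
    reinterpret_cast<float4 *>(
        &C[(threadRow * TM + resIdxM) * N + threadCol * TN + resIdxN])[0] = tmp;
  }
}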
This leads to the 32b GMEM load and store instructions (LDG.E and STG.E) being replaced with 128b counterparts (LDG.E.128 and STG.E.128).
Initially, I was confused as to why running this: